
AI-enabled Imaging

A generative model of the diseased human brain

Project ID: 2019_037

Student: Virginia Fernandez

1st supervisor: Jorge Cardoso, King’s College London
2nd supervisor: Tom Vercauteren, King’s College London

The human brain is complex and hard to model. Neuroanatomically, the brain comprises many regions of interest, each responsible for certain neuroanatomical functions and/or biological processes. Neuroanatomists have spent a tremendous amount of time delineating and localising these regions of interest in healthy human brains, creating a repository of healthy human anatomy in the form of manually segmented atlases. These atlases, for example the Neuromorphometrics 35 atlas, are time-consuming to create: it can take up to a month of human work to label a single brain from a high-resolution MRI image.

These atlases are then used by algorithms to learn the neuroanatomical location of key brain regions and enable their automatic segmentation. Algorithms require a large number of atlases, with sufficient variability to comprehensively describe the mapping between image intensity and tissue segmentation. Thus, the same time-consuming process needs to be repeated many times over before algorithms can perform accurately. Unfortunately, these algorithms are severely hampered by the presence of pathology, as the learned neuroanatomical pattern that predicts segmentations from images breaks down in the presence of abnormal tissues. For the algorithms to cope reliably with a new, unseen type of pathology, human neuroanatomists are required to relabel a sufficient number of subjects with this pathology. Because of the need for lengthy human intervention, this process is not scalable across multiple pathologies: one would need to delineate a sufficient number of subjects for a wide range of pathologies so that algorithms can cope with such variability. Furthermore, manual segmentations are defined on images with certain contrasts, meaning that if the MRI machine is upgraded, or a new contrast is acquired, the same process of human relabelling needs to start all over again. In short, human-defined labels are neither a feasible nor a scalable solution to such labelling problems.
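To make the dependence on atlases concrete, the sketch below shows the kind of supervised intensity-to-label learner these atlases feed. It is a minimal illustration under assumed choices (the toy network, loss, label count, and data are stand-ins, not the project's own pipeline):

```python
# Illustrative PyTorch sketch (assumed, not the project's code) of a
# supervised learner trained on manually segmented atlas data.
import torch
import torch.nn as nn

NUM_CLASSES = 36  # assumed label count, loosely Neuromorphometrics-sized

# Toy network; real pipelines typically use U-Net-style architectures.
model = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, NUM_CLASSES, kernel_size=3, padding=1),
)
loss_fn = nn.CrossEntropyLoss()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in data: MRI slices and their atlas labels (random here).
mri = torch.randn(4, 1, 64, 64)
labels = torch.randint(0, NUM_CLASSES, (4, 64, 64))

# One training step: the model learns the intensity -> tissue mapping.
# Pathology changes intensities in ways never seen in the atlases, which
# is exactly where this learned mapping breaks down.
loss = loss_fn(model(mri), labels)
loss.backward()
optimiser.step()
```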

Rather than asking humans to segment multiple images, with multiple contrasts and multiple pathologies, this project proposes to reformulate the problem by learning to generate plausible anatomical models of the human brain (in the form of segmentations) using generative adversarial networks (GANs): the GANs will learn the distribution of healthy human anatomy from previously created manual segmentations, and will be able to generate a multitude of anatomically correct segmentations. A semantic network will then be used to generate any structural MRI image contrast from the synthetic segmentations, providing an unlimited source of MRI/segmentation pairs that model the intrinsic variation of human anatomy. Afterwards, pathology will be introduced into both the GAN model and the semantic synthesis model, allowing the generation of a set of labelled human brains that cover a wide range of pathologies and phenotypes without the need for a human neuroanatomist to label any more data. By decoupling the generative model of human anatomy (segmentations) from the semantic model of image synthesis, this project will also provide a way to introduce robustness to scanner and sequence upgrades, since only the semantic network needs retraining.
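The decoupling can be pictured with a minimal sketch, assuming toy 2D modules (the actual project would operate on 3D volumes with full adversarial training and far more capable synthesis networks; every name, shape, and layer choice here is a hypothetical stand-in):

```python
# Hypothetical PyTorch sketch of the two-stage pipeline: a GAN generator for
# anatomy (label maps) and a separate semantic network for image synthesis.
import torch
import torch.nn as nn

NUM_CLASSES = 36  # assumed label count
LATENT_DIM = 128

class LabelGenerator(nn.Module):
    """Stage 1: latent noise -> plausible brain label map (the anatomy model)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(LATENT_DIM, 256, 4, 1, 0), nn.ReLU(),  # 4x4
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(),         # 8x8
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),          # 16x16
            nn.ConvTranspose2d(64, NUM_CLASSES, 4, 2, 1),             # 32x32
        )

    def forward(self, z):
        # Per-pixel class probabilities over the label set.
        return self.net(z).softmax(dim=1)

class SemanticSynthesizer(nn.Module):
    """Stage 2: label map -> MRI intensities for one contrast (the image model)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(NUM_CLASSES, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, labels):
        return self.net(labels)

# One synthetic MRI/segmentation pair, for free.
z = torch.randn(1, LATENT_DIM, 1, 1)
labels = LabelGenerator()(z)           # anatomically plausible segmentation
image = SemanticSynthesizer()(labels)  # matching image in a given contrast

# Adapting to a scanner or sequence upgrade means retraining only
# SemanticSynthesizer; the anatomy model (LabelGenerator) is untouched.
```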

Such a model would be a key enabling technology, allowing the introduction and deployment of AI-enabled segmentation tools and imaging biomarkers into clinical care, where a subject's pathology is unknown and widely varying.
