
Image Computing and Computational Modelling (pre-2019)

Addressing the brain correspondence problem: predicting robust and accurate mappings between multi-modal feature sets through use of deep learning

Project ID: 2018_308

Student: Kyriaki Kaza

1st supervisor: Julia Schnabel, King’s College London
2nd supervisor: Emma Robinson, King’s College London
Clinical co-supervisor: David Edwards, King’s College London

Image registration is a cornerstone of medical image analysis, as it allows multiple images of the same organ, taken from different subjects, to be directly compared. The goal of image registration is to learn a spatially constrained transformation that maps equivalent structures in different images to a common location in a global coordinate space. In neuroimaging this is particularly challenging, as brain surfaces (cortices) vary considerably in their shape and functional organisation. An even greater challenge is to know what constitutes a ‘good’ mapping, since mappings learnt separately for cortical shape or function tend to disagree (Nenning 2017, Glasser 2016).
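As a rough illustration (a standard formulation, not one specific to this project), registration is typically posed as an optimisation over a transformation φ of the form

E(φ) = D(I_target, I_source ∘ φ) + λ R(φ),

where D measures how dissimilar the warped source image is from the target, R penalises implausible (for example non-smooth or folding) deformations, and λ controls the trade-off between the two. The spatial constraints discussed below enter through the regularisation term R.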

In this project we propose to use advances in Deep Learning to derive generalisable descriptors of brain function and structure. These descriptors will then be used to estimate unique mappings between multimodal (functional and structural) markers of cortical organisation. In this way, it will be possible to improve the sensitivity of comparisons between brain imaging data sets, and thus improve our capacity for early diagnosis of complex neurological conditions such as autism and Alzheimer’s disease.

The human brain is a vastly complex system. Therefore, it is common practice within neuroimaging studies to make simplifying assumptions that allow data to be compared across subjects. A common assumption is that, at a coarse scale, all brains follow a common pattern of organisation, and can be represented by a relatively small number of functionally specialised regions (Fig.1).


Figure 1: The adult human brain surface, composed of 360 functionally specialised regions (Glasser et al 2016). Different regions have different shading: motor regions are shown in green, visual regions in blue, auditory regions in shades of red, and higher-order regions (recruited for complex tasks) in shades from black/brown to white. This map is shown on an inflated model of the brain’s surface.

Importantly, almost all current studies assume that all brains have the same number of regions, and that these regions appear in the same relative positions in all brains (Glasser 2016). This allows different brains to be straightforwardly compared by mapping all data to a global average space using spatially constrained deformations. Unfortunately, there is a growing body of evidence to suggest that these assumptions are false. In fact, brain organisation varies topologically; that is, the location of regions relative to their neighbours differs across brains. As a result, spatially (or topologically) constrained deformations are unable to accurately match regions between brains, which reduces the sensitivity of all brain imaging studies.

This project builds upon previous work (Robinson 2014, 2017) conducted as part of the Human Connectome Project (HCP), which allowed us to map the organisation of the adult human cortex in unprecedented detail (Glasser 2016). These studies have shown that deriving mappings from multiple imaging modalities (reflecting brain shape, function and micro-structural organisation) can improve the accuracy of correspondences between brain imaging data sets. This is because correspondences learnt from brain shape alone do not agree with those learnt from patterns of functional activation (Glasser 2016, Nenning 2017).

Nevertheless, these methods still rely on assumptions of topological consistency that evidently do not hold. Recent work such as Kim (2012) and Wang (2015) has proposed new approaches that address the ambiguities in matching different brain shapes by learning optimal deformations with machine learning. Similar predictive approaches have been used to learn effective metrics for matching multimodal imaging data (Gutierrez-Becker 2017) and to make fast approximations of mathematically complex registration algorithms through Deep Learning (Yang 2017).
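To make the idea of predictive registration concrete, the following is a hypothetical PyTorch sketch (with illustrative shapes and hyper-parameters, not the method of any of the cited papers): a small network predicts a displacement field directly from an image pair, so that registration at test time becomes a single forward pass rather than an iterative optimisation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DisplacementNet(nn.Module):
    """Predicts a dense 2-D displacement field from a (fixed, moving) image pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 3, padding=1),  # two output channels: dx, dy
        )

    def forward(self, fixed, moving):
        return self.net(torch.cat([fixed, moving], dim=1))

def warp(moving, displacement):
    """Resample the moving image under the predicted displacement (bilinear interpolation)."""
    n = moving.shape[0]
    # Identity sampling grid in normalised [-1, 1] coordinates, as grid_sample expects.
    identity = torch.eye(2, 3).unsqueeze(0).expand(n, -1, -1)
    grid = F.affine_grid(identity, moving.shape, align_corners=True)
    # Assumption: the network outputs displacements in the same normalised units.
    return F.grid_sample(moving, grid + displacement.permute(0, 2, 3, 1), align_corners=True)

# Toy unsupervised training step: image similarity plus a simple smoothness penalty,
# standing in for the spatial constraints discussed above.
fixed, moving = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)
model = DisplacementNet()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)

displacement = model(fixed, moving)
warped = warp(moving, displacement)
similarity = F.mse_loss(warped, fixed)
smoothness = displacement.diff(dim=2).abs().mean() + displacement.diff(dim=3).abs().mean()
loss = similarity + 0.1 * smoothness
loss.backward()
optimiser.step()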

This project proposes to extend these techniques to learn mappings between complex multi-modal cortical imaging feature sets derived from the HCP. Through Deep Learning, it will be possible to derive generalisable descriptors of brain function and structure that can be used to estimate unique mappings between multimodal markers of brain organisation. These mappings will provide the community with an improved understanding of how the human brain is organised, and will increase the sensitivity of population imaging studies, leading to a better understanding of how differences in cortical organisation impact brain function and behaviour.
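As an indication of how such descriptors might be learnt (a hypothetical sketch only, with illustrative feature sizes and a stand-in contrastive loss, rather than a design committed to by the project), a small network could embed the multimodal feature vector at each cortical vertex so that corresponding vertices across subjects receive similar descriptors, which can then be matched by nearest-neighbour search:

import torch
import torch.nn as nn
import torch.nn.functional as F

class VertexEncoder(nn.Module):
    """Maps the multimodal feature vector at a cortical vertex to a unit-length descriptor."""
    def __init__(self, in_features=40, descriptor_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_features, 64), nn.ReLU(),
                                 nn.Linear(64, descriptor_dim))

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

encoder = VertexEncoder()
optimiser = torch.optim.Adam(encoder.parameters(), lr=1e-3)

# Toy data: 1000 vertices per subject with 40 multimodal features each (e.g. myelin,
# curvature and functional values), assumed to be in rough correspondence for training.
subject_a = torch.rand(1000, 40)
subject_b = torch.rand(1000, 40)

desc_a, desc_b = encoder(subject_a), encoder(subject_b)
# Contrastive-style objective: each vertex should be most similar to its counterpart.
similarity = desc_a @ desc_b.t() / 0.1   # (1000 x 1000) scaled cosine similarities
loss = F.cross_entropy(similarity, torch.arange(similarity.size(0)))
loss.backward()
optimiser.step()

# Correspondences can then be proposed by nearest-neighbour search over the descriptors.
matches = similarity.argmax(dim=1)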

Literature:

1. Glasser MF, Coalson TS, Robinson EC, Hacker CD, Harwell J, Yacoub E, Ugurbil K, Andersson J, Beckmann CF, Jenkinson M, Smith SM. A multi-modal parcellation of human cerebral cortex. Nature. 2016 Jul 20.
2. Robinson EC, Jbabdi S, Glasser MF, Andersson J, Burgess GC, Harms MP, Smith SM, Van Essen DC, Jenkinson M. MSM: a new flexible framework for multimodal surface matching. Neuroimage. 2014;100:414-26.
3. Robinson EC, Garcia K, Glasser MF, Chen Z, Coalson TS, Makropoulos A, Bozek J, Wright R, Schuh A, Webster M, Hutter J. Multimodal Surface Matching with Higher-Order Smoothness Constraints. bioRxiv 2017:178962.
4. Nenning KH, Liu H, Ghosh SS, Sabuncu MR, Schwartz E, Langs G. Diffeomorphic functional brain surface alignment: Functional demons. NeuroImage. 2017 Apr 14.
5. Kim M, Wu G, Yap PT, Shen D. A general fast registration framework by learning deformation–appearance correlation. IEEE Transactions on Image Processing. 2012 Apr;21(4):1823-33.
6. Wang Q, Kim M, Shi Y, Wu G, Shen D, Alzheimer’s Disease Neuroimaging Initiative. Predict brain MR image registration via sparse learning of appearance and transformation. Medical image analysis. 2015 Feb 28;20(1):61-75.
7. Gutierrez-Becker B, Mateus D, Peter L, Navab N. Guiding multimodal registration with learned optimization updates. Medical Image Analysis. 2017 May 6.
8. Yang X, Kwitt R, Styner M, Niethammer M. Quicksilver: Fast predictive image registration - a deep learning approach. NeuroImage. 2017 Sep 1;158:378-96.
