AI-enabled Imaging

Building sensitive models of cognition using interpretable Deep Learning

Project ID: 2019_014

Student: Mariana Ferreira Teixeira Da Silva

1st supervisor: Emma Robinson, King’s College London
2nd supervisor: Jorge Cardoso, King’s College London

Of all areas of the brain, the cerebral cortex is the most advanced in humans relative to non-human primates. Its much larger and more convoluted surface makes room for the neuronal pathways responsible for higher-order cognitive processes. Neuroscientists wish to compare these pathways between humans to better understand the mechanisms underpinning complex aspects of cognition, behaviour and neurological disease. Unfortunately, due to the complexity of surface folding and the degree of variation between individuals, this is far from straightforward.

Traditional approaches for comparing brain imaging data assume that, at a coarse scale, cortical folding patterns are consistent between individuals, and that the neuronal pathways responsible for behaviour map to the same areas of each fold. This allows direct comparison of datasets by mapping all data to a global average space [1].

Recent research strongly suggests that this is not the case [2,3]. On the contrary, folding patterns vary widely and do not map strongly to the location of neuronal pathways. In addition, patterns of functional activity (known to correlate with behaviour) vary in where they are located across the brain's surface [2]. This means that current analyses are smoothing away biologically relevant aspects of neurological variation and muddying evidence of the origins of behavioural and cognitive diversity [3].

By contrast, Deep Learning approaches have the potential to significantly improve the sensitivity of neuroimaging studies by compressing whole images into latent feature representations [4]. This enables sensitive comparison of data without any requirement for prior modelling of the signal, or for mapping spatial correspondences between image sets. Such models have generated significant improvements in the accuracy of image classification and segmentation tasks for both natural and medical images.
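The idea of comparing subjects in a compressed latent space, rather than in the raw image space, can be illustrated with a toy linear autoencoder. Everything here is a hypothetical sketch: the data are synthetic, the network is linear (a real model would be deep and non-linear), and all sizes and learning rates are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "images": 200 samples of 64 features that genuinely lie
# on an 8-dimensional subspace (all sizes are illustrative).
n_samples, n_features, n_latent = 200, 64, 8
basis = rng.normal(size=(n_latent, n_features))
X = rng.normal(size=(n_samples, n_latent)) @ basis

# Linear autoencoder: encode to n_latent features, decode back.
W_enc = rng.normal(scale=0.1, size=(n_features, n_latent))
W_dec = rng.normal(scale=0.1, size=(n_latent, n_features))

lr = 2e-3
for step in range(3000):
    Z = X @ W_enc                      # latent feature representation
    X_hat = Z @ W_dec                  # reconstruction
    err = X_hat - X
    # Gradients of the mean squared reconstruction error.
    g_dec = Z.T @ err / n_samples
    g_enc = X.T @ (err @ W_dec.T) / n_samples
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

mse = np.mean((X @ W_enc @ W_dec - X) ** 2)
# Subjects can now be compared via their 8-D latent codes Z,
# rather than in the raw 64-D feature space.
```

The key point is that comparison happens between the compact codes `Z`, with no prior signal model and no explicit spatial correspondence between inputs.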

Unfortunately, a frequent criticism of Deep Learning is that it is a 'black box': it is unclear how the algorithms generate their latent representations, or what drives their decision-making. This is problematic for medical image analysis since, if these algorithms are ever to transition to use in the clinic, doctors would need to trust that the decisions being made will always be correct. Moreover, for the analysis of brain imaging data relative to behaviour and cognition, more important than simple classification is knowledge of the neural mechanisms underpinning each decision.
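One common way of opening the black box is gradient-based saliency: the gradient of a network's output score with respect to each input feature indicates how strongly that feature drives the decision. The sketch below computes this by hand for a hypothetical, randomly initialised two-layer network; the network, its sizes and its weights are all illustrative, not the project's method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny two-layer network with random (untrained) weights.
n_in, n_hidden = 16, 8
W1 = rng.normal(size=(n_in, n_hidden))
w2 = rng.normal(size=n_hidden)

def forward(x):
    h = np.tanh(x @ W1)        # hidden activations
    return h @ w2              # scalar decision score

def saliency(x):
    # Backpropagate the score to the input by hand:
    # d(score)/dx = W1 @ (w2 * (1 - h**2)) for tanh hidden units.
    h = np.tanh(x @ W1)
    return W1 @ (w2 * (1 - h ** 2))

x = rng.normal(size=n_in)
s = saliency(x)
# Inputs with the largest |s| contribute most to the decision.
top_features = np.argsort(-np.abs(s))[:3]
```

For cortical data, the analogous map over the surface would highlight which regions drive a prediction, which is exactly the kind of mechanistic evidence the paragraph above calls for.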

This project will therefore build on recent advances in methods for network visualisation and interpretation to generate Deep Learning models that derive meaningful representations from multi-modal cortical neuroimaging data. Since recent research has shown that the most natural representation for data from the convoluted cortex is a surface mesh, the methods will be developed using graph convolutions [5]. In this way, the project will develop new methods that significantly enhance understanding of cognition, improving our sensitivity to detect the neural origins of behaviour and the mechanisms of complex neurological diseases.
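A graph convolution generalises the familiar image convolution to mesh data: each vertex's features are averaged over its neighbourhood via a normalised adjacency matrix, then linearly transformed. The following is a minimal sketch of one such layer (in the style of Kipf and Welling's GCN) on a tiny toy mesh; the mesh, feature channels and weights are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 4-vertex tetrahedral "mesh": every vertex neighbours every other.
A = np.ones((4, 4)) - np.eye(4)

# Symmetrically normalised adjacency with self-loops:
# A_norm = D^{-1/2} (A + I) D^{-1/2}
A_hat = A + np.eye(4)
d = A_hat.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt

# Per-vertex features (e.g. cortical thickness, myelin, curvature)
# and a random "learned" weight matrix, both hypothetical.
X = rng.normal(size=(4, 3))
W = rng.normal(size=(3, 2))

# One graph-convolution layer with a ReLU non-linearity:
# neighbourhood averaging, then a shared linear transform.
H = np.maximum(A_norm @ X @ W, 0.0)
```

Stacking such layers lets features propagate across the cortical surface along the mesh itself, rather than through a volumetric grid that ignores the folding geometry.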

[1] Robinson EC, Garcia K, Glasser MF, Chen Z, Coalson TS, Makropoulos A, Bozek J, Wright R, Schuh A, Webster M, Hutter J. Multimodal surface matching with higher-order smoothness constraints. Neuroimage. 2018 Feb 15;167:453-65.
[2] Glasser MF, Coalson TS, Robinson EC, Hacker CD, Harwell J, Yacoub E, Ugurbil K, Andersson J, Beckmann CF, Jenkinson M, Smith SM. A multi-modal parcellation of human cerebral cortex. Nature. 2016 Aug 11;536(7615):171-8.
[3] Bijsterbosch JD, Woolrich MW, Glasser MF, Robinson EC, Beckmann CF, Van Essen DC, Harrison SJ, Smith SM. The relationship between spatial configuration and functional connectivity of brain regions. Elife. 2018 Feb 16;7:e32992.
[4] Zeiler MD, Fergus R. Visualizing and understanding convolutional networks. In: European Conference on Computer Vision. Springer; 2014.
[5] Bronstein MM, et al. Geometric deep learning: going beyond Euclidean data. IEEE Signal Processing Magazine. 2017;34(4):18-42.