Student: Margarita Bintsi
1st supervisor: Daniel Rueckert, Imperial College London
2nd supervisor: Alexander Hammers, King’s College London
The aims of this project are three-fold:
- Develop a machine learning (ML) classifier based on deep learning approaches, e.g. convolutional neural networks (CNNs), that can:
- detect early patterns of neurodegeneration; and
- differentiate between patterns of neurodegeneration corresponding to different forms of dementia (a minimal sketch of such a classifier follows this list).
- Develop different approaches to make the ML model interpretable. Possible approaches for this will include the use of visualisation techniques for CNNs as well as models that can generate semantic representations (text) from images.
- Evaluate these approaches in the context of automated decision support for dementia diagnosis.
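To make the first aim concrete, the sketch below shows what such a classifier could look like. It is a minimal illustration only: it assumes preprocessed (skull-stripped, intensity-normalised) 3D MRI volumes resampled to 96×96×96 voxels and a hypothetical three-class label space (e.g. healthy control, Alzheimer's disease, frontotemporal dementia). The architecture, input size, and class names are illustrative assumptions, not the project's final design.

```python
# Minimal sketch (PyTorch) of a 3D CNN classifier for brain MRI volumes.
# Assumptions: inputs are single-channel, preprocessed volumes of shape
# 96 x 96 x 96; the three output classes are hypothetical placeholders.
import torch
import torch.nn as nn


class DementiaCNN(nn.Module):
    def __init__(self, n_classes: int = 3):
        super().__init__()

        def block(c_in: int, c_out: int) -> nn.Sequential:
            # Conv -> BatchNorm -> ReLU -> downsample by 2
            return nn.Sequential(
                nn.Conv3d(c_in, c_out, kernel_size=3, padding=1),
                nn.BatchNorm3d(c_out),
                nn.ReLU(inplace=True),
                nn.MaxPool3d(kernel_size=2),
            )

        self.features = nn.Sequential(
            block(1, 16), block(16, 32), block(32, 64), block(64, 128)
        )
        self.pool = nn.AdaptiveAvgPool3d(1)      # global average pooling
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, 96, 96, 96) -> class logits: (batch, n_classes)
        h = self.pool(self.features(x)).flatten(1)
        return self.classifier(h)


# Example forward pass with random data standing in for an MRI volume:
# model = DementiaCNN()
# logits = model(torch.randn(2, 1, 96, 96, 96))
```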
Dramatic success in machine learning (ML) has led to a torrent of Artificial Intelligence (AI) applications. Continued advances promise to produce autonomous systems that will be able to perceive, learn, decide, and act on their own. However, the effectiveness of these systems is limited by their current inability to explain their decisions and actions to human users. This is particularly important in the context of AI applications in healthcare.
The purpose of this project is to develop an ML-based decision support tool for brain magnetic resonance imaging (MRI), with application to neurological disorders. A particular focus will be on a decision support tool that will not only be able to diagnose patients based on clinical imaging and non-imaging information, but will also be able to explain how it has reached its decision. For this, we will use a large database of multi-modal brain MRI from patients with different forms of dementia.
In this project, we will address this research challenge at two different levels. At the low level, where machine learning approaches such as deep learning based on convolutional neural networks (CNNs) are used to extract quantitative information from MR images, we will use techniques for visualising neural network activations and learnt filters to develop visualisations that make the outputs of the neural networks interpretable.
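As one concrete example of such a visualisation, the sketch below computes a gradient-based saliency map: the absolute gradient of the predicted class score with respect to each input voxel, which highlights the regions of the MRI that most influence the network's decision. This is only one of several possible techniques (activation maps, filter visualisation, Grad-CAM, etc.) and builds on the hypothetical DementiaCNN sketch above; it is an illustration, not the project's chosen method.

```python
# Minimal sketch of a gradient-based saliency map for a 3D CNN (PyTorch).
# Builds on the hypothetical DementiaCNN above; other visualisation
# techniques (e.g. activation maps, Grad-CAM) could be used instead.
import torch


def saliency_map(model, volume, target_class=None):
    """Return voxel-wise |d(class score)/d(input)| for one MRI volume."""
    model.eval()
    volume = volume.clone().requires_grad_(True)   # shape (1, 1, D, H, W)
    logits = model(volume)
    if target_class is None:
        target_class = int(logits.argmax(dim=1))   # explain the prediction
    logits[0, target_class].backward()             # gradients w.r.t. input
    return volume.grad.abs().squeeze()             # shape (D, H, W)


# Usage (with the DementiaCNN sketch):
# model = DementiaCNN()
# sal = saliency_map(model, torch.randn(1, 1, 96, 96, 96))
# `sal` can then be overlaid on MRI slices for visual inspection.
```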
At a higher level, where machine learning is used for diagnosis, we will combine deep learning approaches with symbolic representations to produce more explainable models. These models will be combined with state-of-the-art human-computer interface techniques capable of translating models into understandable and useful explanation dialogues for the end user. Through close collaboration with clinicians, we will explore the trade-off between performance and explainability/interpretability.