Student: Robin Andlauer
1st supervisor: Andrew King, King’s College London
2nd supervisor: Aldo Rinaldi, King’s College London
Additional Supervisor: Bernhard Kainz, Imperial College London
This project is co-funded by the King’s BHF Centre of Research Excellence, Data Science and Electronic Health Initiative
Aim of the PhD Project:
- Heart disease is the number one killer worldwide.
- AI models can automatically diagnose disease, but they lack explanatory power.
- This project aims to develop an AI tool for diagnosis and treatment planning in cardiology that can explain its decisions to cardiologists.
Project Description / Background:
The use of artificial intelligence (AI), and specifically deep learning, for diagnosis and treatment planning in cardiology is an active research area [1,2]. However, whilst deep learning techniques have produced impressive results, a significant problem remains: the techniques that produce the most accurate results often lack a feature that is crucial for the clinical acceptance of new technology, namely explanatory power. Put simply, most deep learning models can make predictions but cannot explain in human-interpretable terms how the prediction was arrived at. Without such explanations, clinicians in many applications will be reluctant to base clinical decisions on the recommendations of such “black-box” models.
In this project we focus on deep learning models that take images (and possibly other clinical data) as input. Producing explanations from image-based deep learning models is a significant challenge. In the literature, most attempts at such “interpretable machine learning” or “explainable AI” have focused on one of two approaches: (1) visualising the inside of the “black box”, e.g. using “saliency maps” that highlight the areas of the input image that were most important to the decision; or (2) training a simpler model that is interpretable to some degree. Both of these approaches are likely to be inadequate in many medical applications. For example, in cardiology, which is our focus in this project, an “explanation” that will be acceptable to a cardiologist is likely to require information about pathological processes and/or concepts such as tissue properties and electrical/mechanical activation patterns.
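By way of illustration, the sketch below shows one common form of approach (1): a simple gradient-based saliency map computed in PyTorch. The network (a torchvision ResNet-18 used purely as a placeholder) and the random input tensor are assumptions standing in for a trained cardiac-image classifier and a real image.

```python
# Minimal sketch of a gradient-based saliency map (approach (1) above).
# The model and input are placeholders; in practice they would be a trained
# cardiac-image classifier and a real image tensor of shape (1, C, H, W).
import torch
import torchvision.models as models

model = models.resnet18(weights=None)  # placeholder classifier (assume trained weights in practice)
model.eval()

image = torch.randn(1, 3, 224, 224, requires_grad=True)  # stand-in for a cardiac image

logits = model(image)
predicted_class = int(logits.argmax(dim=1))

# Back-propagate the score of the predicted class to the input pixels.
logits[0, predicted_class].backward()

# The saliency map is the per-pixel magnitude of the input gradient:
# large values mark pixels whose perturbation most changes the prediction.
saliency = image.grad.abs().max(dim=1)[0].squeeze()  # shape (H, W)
```

Such a map shows where the model looked, but not why the highlighted region matters clinically, which is precisely the limitation discussed above.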
A key challenge will be to find ways of linking the model’s automated decisions to “higher level” human-interpretable concepts. In this project, we will investigate ways of making these links for patient diagnosis, stratification and treatment planning in heart failure. One promising direction that has recently emerged from the computer vision literature is to query how important human-interpretable concepts are to a deep learning model’s predictions [3]. We have recently started to apply these methods in cardiology with highly promising initial results [4]. Other interesting avenues for exploration include methods that incorporate explanations into the training objective [5], as well as ways of putting humans (i.e. clinicians) “in the loop” of the training of deep learning models [6]. This type of approach could be used to encourage the deep learning model to learn features that are clinically meaningful, effectively creating a dialogue between clinicians and deep learning models.
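To give a flavour of the concept-querying idea referred to in [3] (TCAV), the sketch below shows its two main steps under simplified assumptions: the layer activations, concept examples and gradients are synthetic placeholders rather than outputs of a real trained model, which would supply them in practice.

```python
# Minimal sketch of concept activation vectors in the spirit of TCAV [3].
# All arrays are synthetic placeholders; in practice they would come from a
# chosen layer of a trained cardiac-image classifier, with "concept" images
# depicting a clinical concept (e.g. a scar pattern) and random counter-examples.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
concept_acts = rng.normal(1.0, 1.0, size=(50, 128))  # activations for concept images
random_acts = rng.normal(0.0, 1.0, size=(50, 128))   # activations for random images

# 1. Learn a linear boundary separating concept from random activations;
#    its normal vector is the Concept Activation Vector (CAV).
X = np.vstack([concept_acts, random_acts])
y = np.array([1] * 50 + [0] * 50)
cav = LogisticRegression(max_iter=1000).fit(X, y).coef_[0]
cav /= np.linalg.norm(cav)

# 2. For each test input, project the gradient of the class score with respect
#    to the layer activations (placeholder values here) onto the CAV. The TCAV
#    score is the fraction of inputs whose prediction increases when the
#    activations move in the concept direction.
grads_wrt_acts = rng.normal(size=(100, 128))  # placeholder gradients
directional_derivs = grads_wrt_acts @ cav
tcav_score = float((directional_derivs > 0).mean())
print(f"TCAV score for this concept/class pair: {tcav_score:.2f}")
```

A score near 1 would suggest the concept is consistently important to the model’s decision for that class, which is exactly the kind of human-interpretable statement we aim to link to automated diagnoses.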
There are many intriguing directions to explore in this field which remain relatively untouched in the medical domain, and the potential for novelty is high. Our ultimate aim is to produce a computer-aided decision-support tool to assist cardiologists in stratifying patients with heart failure and planning their treatment. The tool would act like a “trusted colleague” or “second reader” that the cardiologist could consult for its opinion on difficult cases, as well as the reasoning behind that opinion. This is a highly ambitious aim, and this project represents the first part of that journey, but if successful the impact could be considerable.
Applicants for this position are expected to have an interest in deep learning. Prior experience in deep learning is not essential, but good programming skills are required.
References:
[1] Litjens et al., A Survey on Deep Learning in Medical Image Analysis, Medical Image Analysis, 2017.
[2] Zhang et al., Deep Learning for Diagnosis of Chronic Myocardial Infarction on Nonenhanced Cardiac Cine MRI, Radiology, 2019.
[3] Kim et al., Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV), Proc ICML, 2018.
[4] Clough et al., Global and Local Interpretability for Cardiac MRI Classification, Proc MICCAI, 2019.
[5] Codella et al., TED: Teaching AI to Explain its Decisions, arXiv, 2018.
[6] Lage et al., Human-in-the-Loop Interpretability Prior, Proc NIPS, 2018.