AI-enabled Imaging

First, do no harm: Developing fair AI techniques for medical imaging

Project ID: 2023_028

1st Supervisor: Dr Andrew King, King’s College London
2nd Supervisor: Dr Claudia Prieto, King’s College London
Clinical Supervisor: Dr Bram Ruijsink, King’s College London


Aims of the Project:

1. Investigate medical imaging-based AI systems for potential bias, for example by race or sex.
2. Develop notions of fairness for AI that are appropriate in the medical context.
3. Develop novel techniques for ensuring fairness in medical imaging, helping AI become a tool to address existing healthcare inequalities.


Lay Summary:

In recent years there has been a surge in interest in the potential of AI systems to exhibit bias based on demographic factors such as sex and race. Most of this interest has focused on computer vision applications, such as facial recognition from video image data, and there have been some high-profile examples in which AI has been shown to discriminate against minority groups due to a lack of representation of those groups in the data used to train the AI model.

In AI for medical imaging there has been much less interest in this important area, even though many health datasets under-represent women and/or non-white races, and biased AI systems could have a huge impact in medicine. AI is starting to be translated into clinical practice in some medical applications, and if it does not perform equally well for all demographic groups, existing inequalities in healthcare systems could be maintained or even exacerbated. A small number of recent papers have highlighted bias in medical AI, for example in X-ray based classification and cardiac MR segmentation. This raises an important question: how should these biases be addressed?

In computer vision the focus has been on “debiasing” AI models to make them more “fair”, e.g. a facial recognition AI system should perform equally well for all demographic groups, even if this means slightly lower performance overall (the “accuracy-fairness trade-off”). But in medicine the situation is very different. In facial recognition, we obviously know nothing about the subject before they are recognised. In medical imaging-based diagnosis, by contrast, we know the patient’s sex, race and clinical history, and doctors routinely make use of this information when diagnosing disease. Furthermore, a debiased model may knowingly perform worse for some demographic groups than the biased model did, violating the historical maxim of “first, do no harm”. It is clear that notions of “fairness” that have emerged from the computer vision literature will not be universally applicable in medicine.
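To make the trade-off concrete, the quantity usually at stake is the gap between per-group performance. The following is a minimal, illustrative sketch (not part of the project itself; the data, group labels, and function name are invented for this example) showing how such a gap can be measured for a classifier:

```python
# Illustrative sketch only: measuring a per-group accuracy gap, the quantity
# underlying the "accuracy-fairness trade-off". All data here is synthetic.

def group_accuracies(y_true, y_pred, groups):
    """Return accuracy per demographic group and the worst-case gap."""
    accs = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        correct = sum(y_true[i] == y_pred[i] for i in idx)
        accs[g] = correct / len(idx)
    # The gap between the best- and worst-served groups is one simple
    # (un)fairness measure; debiasing aims to shrink it.
    gap = max(accs.values()) - min(accs.values())
    return accs, gap

# Synthetic example: a model that performs worse for group "B"
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

accs, gap = group_accuracies(y_true, y_pred, groups)
```

A “fair” model in the computer-vision sense would drive this gap towards zero, possibly at the cost of overall accuracy; the medical question raised above is whether shrinking the gap by lowering any group’s performance is ever acceptable.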

In this project we aim to develop fair AI tools for a range of medical imaging-based pipelines. This will involve first developing a new set of objectives for fair AI that are appropriate for a medical context. Underpinning these objectives will be the desire to “do no harm”, while still developing tools that can address biases in naively trained AI systems. We emphasise that, unlike many commentators, we do not see AI as a potential source of unfairness in global healthcare systems; rather, we see fair AI as a potential weapon to tackle existing inequalities in these systems.

The project will involve investigating a number of different medical imaging applications, with a main focus on magnetic resonance imaging (MRI). Likely applications will include automated diagnosis from cardiac MRI and breast MRI data. We will also investigate bias in the earlier stages of MRI pipelines such as image reconstruction and segmentation.

Figure illustrating the project.
