
AI-enabled Imaging, Affordable Imaging

Foundation models for Ultrasound Imaging with applications in prenatal congenital heart disease

Project ID: 2023_014

1st Supervisor: Dr Bernhard Kainz, Imperial College London
2nd Supervisor: Prof Jo V. Hajnal, King’s College London
Clinical Supervisor: Dr Thomas Day, King’s College London


Aims of the Project:

The central hypothesis is that we can build ML foundation models from several thousand available ultrasound videos, comprising a billion individual frames, using self-supervised learning and without the need for manual annotations. These models can then be specialised to better characterise disease and to derive highly specific disease screening models.
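As a rough illustration of what such annotation-free pretraining could look like, the sketch below trains a small encoder with a SimCLR-style contrastive objective on two augmented views of the same ultrasound frames. The encoder architecture, the augmentation (simple noise), and the random stand-in data are assumptions made for this example in PyTorch; the project itself may use any self-supervised objective (contrastive, masked-frame, generative, and so on).

```python
# Minimal self-supervised pretraining sketch (SimCLR-style contrastive learning)
# on unlabelled ultrasound frames. Architecture, augmentations and data are
# illustrative placeholders, not the project's actual pipeline.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Small CNN encoder mapping a greyscale frame to a normalised embedding."""
    def __init__(self, dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.proj = nn.Linear(64, dim)

    def forward(self, x):
        return F.normalize(self.proj(self.backbone(x)), dim=-1)

def nt_xent(z1, z2, temperature=0.1):
    """Contrastive loss: two augmented views of the same frame attract."""
    z = torch.cat([z1, z2], dim=0)                     # (2N, D)
    sim = z @ z.t() / temperature                      # cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float("-inf"))              # ignore self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

encoder = Encoder()
optimiser = torch.optim.Adam(encoder.parameters(), lr=1e-3)

# Stand-in for two randomly augmented views of the same batch of frames.
frames = torch.randn(16, 1, 128, 128)
view1 = frames + 0.05 * torch.randn_like(frames)
view2 = frames + 0.05 * torch.randn_like(frames)

loss = nt_xent(encoder(view1), encoder(view2))
loss.backward()
optimiser.step()
print(f"contrastive loss: {loss.item():.3f}")
```

The essential property this illustrates is that the training signal comes entirely from the data itself, so the large archive of unlabelled screening videos can be used directly, without manual annotation.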


Lay Summary:

Machine Learning (ML) is undergoing a paradigm shift with the rise of deep neural network (DNN) models trained on large amounts of unlabelled data, commonly using self-supervision at scale. These models can be adapted to a wide range of downstream tasks and have been called foundation models to emphasise their important yet incomplete nature. So far, foundation models exist only in areas such as natural language processing (e.g., GPT-3) or image synthesis (e.g., DALL-E and Stable Diffusion), and to date their development has been limited to either time-series representation or appearance modelling. In this project we will introduce, for the first time, foundation models that can be specialised for complex medical video analysis tasks such as medical ultrasound imaging.

A specific and very important example where such models are relevant is the diagnosis of congenital heart disease (CHD) with ultrasound imaging during high-throughput health screening like the UK NHS Fetal Anomaly Screening Programme (FASP).

CHD is the most common group of fetal malformations, occurring in roughly 1% of pregnancies, and is also the most common cause of neonatal death from malformation. Antenatal detection of CHD has been shown to improve postnatal outcomes (both mortality and long-term neurological development), gives parents time to consider options regarding continuation of the pregnancy, and may allow therapeutic intervention in utero in selected cases. However, universal antenatal detection of CHD has not been achieved. Ultrasound views of the fetal heart are part of the screening scan offered to all pregnant women in the UK, yet currently only around 50.3% of cases of severe CHD in infants are diagnosed before birth.

One reason for this poor detection rate at the front line of care is the lack of tools for the automated analysis of spatio-temporal ultrasound video data, which could provide highly specific expertise and increase confidence in decision making.
There is a growing body of evidence that machine learning methods can support this task, with ML methods now entering clinical trials and early products for automated measurements, e.g., GE's SonoLyst tool, reaching the market. However, current methods are limited in their ability to capture the complex and varied nature of disease and suffer from low specificity, which is detrimental for healthcare screening applications: too many false positives would overwhelm tertiary referral clinics and cause unnecessary distress to parents. Current methods are also not yet efficient enough to run on mobile ultrasound devices. ML on mobile devices would be crucial, since such devices are expected to provide an affordable solution for prenatal health screening in less developed regions.

This project will address these limitations by a) developing new ML methods that leverage the vast amount of data we have (>20k fully recorded prenatal patient ultrasound examination videos and >10k labelled echocardiogram videos) to train foundational ML models in a self-supervised way, i.e., without the need for detailed manual ground-truth annotations, and b) providing efficient tools for characterising and detecting CHD with specialised models that build on our general ultrasound foundation models, as sketched below.
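The sketch below illustrates point b) under simplifying assumptions: a small screening head is attached to a frozen self-supervised encoder and fine-tuned on labelled echocardiogram frames. The encoder here is a random placeholder standing in for the foundation model produced in the pretraining step, and the binary label set (normal vs. suspected CHD) is assumed for illustration only.

```python
# Illustrative specialisation step: attach a screening head to a frozen,
# self-supervised encoder and fine-tune on labelled echocardiogram frames.
# The encoder below is a stand-in for the project's foundation backbone.
import torch
import torch.nn as nn

pretrained_encoder = nn.Sequential(            # placeholder for the foundation model
    nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
for p in pretrained_encoder.parameters():      # keep the general representation fixed
    p.requires_grad = False

screening_head = nn.Linear(64, 2)              # normal vs. suspected CHD (assumed labels)
optimiser = torch.optim.Adam(screening_head.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# Stand-in batch: labelled echocardiogram frames and binary screening labels.
frames = torch.randn(8, 1, 128, 128)
labels = torch.randint(0, 2, (8,))

logits = screening_head(pretrained_encoder(frames))
loss = criterion(logits, labels)
loss.backward()
optimiser.step()
print(f"screening loss: {loss.item():.3f}")
```

Freezing the backbone keeps the specialised model small and cheap to adapt, which is relevant to the mobile deployment scenario described above; in practice the project may instead fine-tune end to end or use other specialisation strategies.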

