AI-enabled Imaging

Synthetic Humans: A joint whole-body generative diffusion-based model of human anatomy and disease

Project ID: 2023_048

1st Supervisor – Dr M. Jorge Cardoso, King’s College London
2nd Supervisor – Dr Marc Modat, King’s College London
Clinical Supervisor – Prof James Teo, King’s College Hospital NHS Foundation Trust


Aims of the Project

  • To create a large multimodal collection of open-source datasets aligned to a new standardised human template model
  • To create a 3D generative model of the whole human body, given a set of input conditioning variables (age, sex, and anatomical region/coordinates)
  • To extend the above models to multiple modalities/sequences (generating any body part in any sequence of choice) and to allow sequence-to-sequence conditioning (given one sequence, generate another)
  • To demonstrate the model’s applicability in two pretext clinical endpoints (brain, heart).
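To make the conditioning idea in the aims concrete, the sketch below shows a denoising-diffusion forward process together with a conditioning vector built from age, sex and anatomical coordinates. This is a minimal NumPy illustration under assumed conventions (the cosine noise schedule, the normalisation of age, and all variable names are assumptions, not the project's actual design):

```python
import numpy as np

def make_condition(age, sex, region_xyz):
    # Illustrative conditioning vector: normalised age, binary sex code,
    # and 3D coordinates of the anatomical region in a template space.
    return np.array([age / 100.0, float(sex), *region_xyz], dtype=np.float32)

def cosine_alpha_bar(t, T, s=0.008):
    # Cosine noise schedule: cumulative signal fraction at step t of T.
    f = lambda u: np.cos((u / T + s) / (1 + s) * np.pi / 2) ** 2
    return f(t) / f(0)

def q_sample(x0, t, T, rng):
    # Forward diffusion: x_t = sqrt(a_bar) * x0 + sqrt(1 - a_bar) * eps.
    a_bar = cosine_alpha_bar(t, T)
    eps = rng.standard_normal(x0.shape).astype(np.float32)
    return np.sqrt(a_bar) * x0 + np.sqrt(1.0 - a_bar) * eps, eps

rng = np.random.default_rng(0)
x0 = rng.standard_normal((16, 16, 16)).astype(np.float32)  # toy 3D volume
cond = make_condition(age=67, sex=1, region_xyz=(0.1, -0.2, 0.5))
x_t, eps = q_sample(x0, t=500, T=1000, rng=rng)
# A conditional denoiser eps_theta(x_t, t, cond) would then be trained to
# predict eps, and sampling would run the reverse process given cond.
```

In a real model the conditioning vector would typically enter the denoising network via embedding layers or cross-attention rather than raw concatenation; the sketch only fixes the ideas of "noisy input plus conditioning variables".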


Lay Summary

In computer vision, the rapid progress of deep learning was underpinned by very large datasets containing millions of examples. Current medical imaging datasets pale in comparison: the largest publicly available dataset contains only around 40 thousand samples. This scarcity of data, combined with the fact that medical images are 3D, means that downstream models cannot capture the full anatomical, pathological and signal variability. This limits the models that can be trained and their accuracy, and makes them biased and unfair. The creation of an AI model that can generate synthetic images of any organ, in any modality, and with key pathologies would transform the field.

Current state-of-the-art generative models of medical images, developed by the KCL AMIGO team, have demonstrated that one can generate anatomically correct synthetic 3D MRIs of the human brain for a given age or sex [Tudosiu et al., ArXiv 2022; Lopez Pinaya et al., ArXiv 2022]. These models are, however, restricted to a single organ and modality, and do not allow the combination of multiple datasets. This project will significantly extend and transform our current state-of-the-art models to allow the generation of any organ, modality or pathology from a single joint model. Finally, the candidate will show that these images can be used seamlessly in downstream studies and for AI training, with a particular focus on dementia image classification and the joint analysis of cardiac and neurological data.
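One common way to realise the sequence-to-sequence conditioning mentioned above (given one MRI sequence, generate another) is to feed the source-sequence image to the denoising network alongside the noisy target, for example by channel concatenation. A minimal NumPy sketch of that input construction follows; all names are illustrative, and concatenation is only one of several possible conditioning mechanisms:

```python
import numpy as np

def seq2seq_input(x_t, source_vol):
    # Stack the noisy target-sequence volume and the clean source-sequence
    # volume as input channels, so the denoiser always sees the image it
    # must translate from (an assumed, illustrative conditioning scheme).
    assert x_t.shape == source_vol.shape
    return np.stack([x_t, source_vol], axis=0)

rng = np.random.default_rng(1)
x_t = rng.standard_normal((16, 16, 16)).astype(np.float32)     # noisy target (e.g. T2)
source = rng.standard_normal((16, 16, 16)).astype(np.float32)  # clean source (e.g. T1)
inp = seq2seq_input(x_t, source)  # 2-channel denoiser input
```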

Candidate Background: The candidate should have a background in one of the following: biomedical engineering, applied mathematics/physics, or computer science. The candidate should also have a keen interest in advanced Artificial Intelligence algorithms and the modelling of high-dimensional systems.

