
AI-enabled Imaging

Fully Bayesian 3D PET-MR Neuroimaging Reconstruction

Project ID: 2023_040

1st Supervisor: Prof Andrew Reader, King’s College London
2nd Supervisor: Dr Andrew King, King’s College London
Industrial Supervisor: Paul Galette, GSK
Clinical Supervisor: Prof Alexander Hammers, King’s College London


Aims of the PhD Project

  • Develop a PET-MR latent-space generative modelling methodology for brain PET
  • Provide uncertainty images with the reconstructions
  • Reduce noise and improve spatial resolution of brain PET images, to potentially lower injected radiation doses or reduce scan time


Lay Summary

Positron emission tomography (PET) is in widespread use for imaging cancer and diseases of the heart and brain. This project concerns brain imaging with a simultaneous PET-MR scanner, with potential applications in both research and clinical imaging. Brain PET imaging can be limited by noisy data and by relatively low spatial resolution, depending on the amount of radioactivity administered and the radiotracer being used.

This project will use AI methodologies to make best use of additional information to help improve image quality, such as that from the simultaneously acquired MRI. However, at present, there is no routine way of expressing how confident we are in the images that are reconstructed from the collected scanner data. This matters, as these images inform both research findings and clinical decision making, and with the advent of AI reconstruction methods the need to quantify uncertainty in the reconstructed images is greater than ever.

This project will use the very latest deep-learned generative modelling methodologies and place them directly into the image formation process for PET, thus allowing ensembles of reconstructed images to be generated. Furthermore, data from MRI will be used to provide even richer information for these image models. This will allow improved image quality which, while beneficial in its own right, can in turn potentially be used to reduce radiation doses, shorten scan times (reducing the impact of motion, increasing patient comfort and throughput), or even reduce the number of subjects needed to test a research hypothesis.


Project Description / Background

Deep learning PET image reconstruction [1] is beginning to reap the benefits of combining existing physical and statistical models with the learning paradigm of AI, delivering new levels of image quality. This project will build on this progress [2] to advance methodology and robustness for research and potential clinical use. The focus will be on exploitation of MR imaging information and on deep generative models, such as modified variational autoencoders (VAEs), e.g. vector-quantised (VQ) and adversarial versions (VQ-GAN [3]). The VAE methodology, accounting for both MR and PET information, will be embedded within the PET image reconstruction. This will allow uncertainty to be estimated for every reconstructed PET image, by generating multiple reconstructions through sampling of the latent probability density function, as sketched below.
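
As a rough illustration of that sampling step (a minimal sketch, not the project's actual model), the code below assumes a trained (VQ-)VAE whose encoder has produced a latent posterior mean mu and standard deviation sigma for a given PET-MR scan, and a decoder mapping a latent vector to a PET image; all names are illustrative.

    import torch

    @torch.no_grad()
    def sample_reconstructions(decoder, mu, sigma, n_samples=32):
        # Draw latent samples z ~ N(mu, sigma^2) and decode each into a
        # candidate PET image (reparameterised sampling of the latent PDF).
        images = []
        for _ in range(n_samples):
            z = mu + sigma * torch.randn_like(mu)
            images.append(decoder(z))
        images = torch.stack(images)          # (n_samples, *image_shape)
        # Pixelwise mean gives the reconstruction; pixelwise standard
        # deviation gives the accompanying uncertainty image.
        return images.mean(dim=0), images.std(dim=0)

The pixelwise standard deviation over the ensemble is one simple choice of uncertainty image; credible intervals computed over the samples would be an alternative.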

The project will rely on supervised deep learning, and will initially consider relatively low-complexity machine-learned MR-assisted regularisation techniques for PET image reconstruction (see the sketch below). Such approaches do not demand large quantities of training data and, owing to their limited complexity (a low-specificity inductive prior), should enhance the likelihood of broader acceptance by reducing the chance of reproducing misleading information [4] from a training set. Moving up in regularisation complexity, the project will assess methods for assisting PET reconstruction by deep-learned processing of multi-modal complementary side information, such as MRI, through self- and inter-modal attention. This will involve state-of-the-art deep-learned architectures, including transformer-related architectures [5] with self- or inter-modal attention for efficient compressed learning of the latent-space probability density function. For any given architecture/inductive prior and quantity of training data, performance will be assessed relative to other architectures with the same number of trainable parameters, along with test-time performance and robustness to out-of-distribution data. Through learning a latent-space distribution, the methodology will be implicitly Bayesian [6], accounting for uncertainties. However, alternative approaches such as deep ensembles will also be explored.
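
For the low-complexity end of that spectrum, a single unrolled iteration in the spirit of FBSEM [2] might look as follows. This is a hedged sketch only: A/At stand in for the PET forward model and its adjoint, sens for the sensitivity image, and reg_net for a small MR-guided CNN regulariser; none of these names or choices are prescribed by the project.

    import torch

    def em_update(x, y, A, At, sens, eps=1e-9):
        # Standard MLEM data-fidelity update: x * At(y / A(x)) / sens.
        return x * At(y / (A(x) + eps)) / (sens + eps)

    def unrolled_step(x, y, mr, A, At, sens, reg_net, beta=0.5):
        # One unrolled iteration: blend the EM update with a learned,
        # MR-assisted regularisation of the current estimate.
        x_em = em_update(x, y, A, At, sens)
        # Assumes x and mr are (batch, 1, D, H, W) volumes, concatenated
        # along the channel dimension so MR acts as side information.
        x_reg = reg_net(torch.cat([x, mr], dim=1))
        return (1.0 - beta) * x_em + beta * x_reg   # beta could be learned

Stacking a handful of such steps, with the regulariser's weights trained end to end against reference reconstructions, is one common pattern for model-based deep learning reconstruction.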

Whilst multi-modal imaging data have exploitable commonalities, there are important differences between modalities. Strategies for handling mismatched information will be researched, such as multi-head learned inter-modal attention, which may capture the benefits of common information while limiting cross-talk between modalities (see the sketch below). Again, to assist reliability, delivery of uncertainty estimates for the reconstructed images will be considered. A further goal will be to develop a generalised latent-space representation for multi-modal neuroimaging: learning a generative model for PET-MR data which can be conditioned on the data from a given scan to deliver a latent posterior distribution, from which multiple reconstructions can be generated.
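
As an indication of how inter-modal attention might limit cross-talk (a minimal sketch, assuming PET and MR feature maps have already been flattened into token sequences of a common embedding dimension; shapes and sizes are illustrative):

    import torch
    import torch.nn as nn

    class InterModalAttention(nn.Module):
        # PET tokens act as queries over MR keys/values, so only the MR
        # information that the learned attention heads deem relevant is
        # mixed into the PET pathway.
        def __init__(self, dim=256, heads=8):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.norm = nn.LayerNorm(dim)

        def forward(self, pet_tokens, mr_tokens):
            # pet_tokens: (batch, n_pet, dim); mr_tokens: (batch, n_mr, dim)
            mixed, weights = self.attn(pet_tokens, mr_tokens, mr_tokens)
            # Residual connection keeps the PET pathway dominant, which
            # limits cross-talk where the modalities disagree.
            return self.norm(pet_tokens + mixed), weights

The returned attention weights could themselves be inspected to see where MR information is, and is not, being trusted by the network.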

Data for the project will come from simultaneous PET-MR brain imaging studies carried out at St Thomas' Hospital, led by researchers from the Institute of Psychiatry, Psychology & Neuroscience. Approximately 100 datasets, from scans covering diverse neurological conditions (such as Alzheimer's disease and sleep apnoea), will be available for training and testing the advances.


  • [1] Reader, A.J., et al., "Deep Learning for PET Image Reconstruction," IEEE TRPMS, 2020.
  • [2] Mehranian, A. and Reader, A.J., "Model-Based Deep Learning PET Image Reconstruction Using FBSEM," IEEE TRPMS, 2020.
  • [3] Esser, P., et al., "Taming Transformers for High-Resolution Image Synthesis," CVPR, 2021.
  • [4] Antun, V., et al., "On instabilities of deep learning in image reconstruction and the potential costs of AI," PNAS, 2020.
  • [5] Vaswani, A., et al., "Attention is all you need," NIPS, 2017.
  • [6] Abdar, M., et al., "A review of uncertainty quantification in deep learning: Techniques, applications and challenges," Information Fusion, 2021.
