
AI-enabled Imaging, Emerging Imaging

Synergistic joint variational neural networks for PET-MR image reconstruction with generative modelling priors

Project ID: 2019_019

Student: Guillaume Corda

1st supervisor: Andrew Reader, King’s College London
2nd supervisor: Julia Schnabel, King’s College London

Without question, artificial intelligence (AI) is having a massive impact on many fields of science and technology, including medical imaging. A key advantage of AI for medical imaging is the potential to optimise processing and image reconstruction methodologies with training data, whereby the desired outputs for given example acquired datasets are fully specified as the goal. However, most work so far has addressed post-processing of medical images, such as for 3D positron emission tomography (PET) and magnetic resonance imaging (MRI), and has not fully tackled the direct use of raw medical imaging data for reconstruction of 3D images. There have nonetheless been some initial steps: machine learning has been applied to 2D MRI reconstruction from raw k-space data and to 2D sinograms for PET (e.g. Zhu et al. 2018) [1]. Even so, full end-to-end creation and training of an artificial neural network (ANN) is not yet practical for fully 3D PET and MRI reconstruction. For fully 3D PET reconstruction, the ANN would need an input of >500 MB of raw fully 3D sinogram data from a single data acquisition (not even considering the case of dynamic PET data), and would need to output a 3D reconstruction of size >50 MB. Given that image reconstruction via an ANN from raw data ideally needs at least one fully connected layer, this places a prohibitive constraint on feasibility for fully 3D PET image reconstruction: the fully connected layer would need thousands of terabytes (petabytes) of storage, in contrast to only several hundred MB for a fully connected layer for 2D MRI reconstruction from 2D k-space. Of course, sparsity constraints could be applied, but for PET imaging an accurate imaging model is not truly sparse, and approximations would be necessary.
Furthermore, another disadvantage of such an immense end-to-end fully connected ANN is the need for an extremely large training corpus, in order to learn what is in fact already largely very well known (e.g. the basic form of the physics of the medical imaging acquisition process, for both PET and MRI, as captured in the imaging system models).
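The storage argument above can be made concrete with a back-of-envelope calculation. The figures below assume float32 (4-byte) values; the exact result depends on these assumptions, but the prohibitive scale of the fully connected layer does not:

```python
# Back-of-envelope estimate of the fully connected layer needed for
# end-to-end fully 3D PET reconstruction, assuming float32 (4-byte)
# values for both the raw sinogram data and the reconstructed image.

sinogram_bytes = 500e6   # >500 MB of raw fully 3D sinogram data (input)
image_bytes = 50e6       # >50 MB reconstructed 3D image (output)
bytes_per_value = 4      # float32

n_in = sinogram_bytes / bytes_per_value    # ~1.25e8 input values
n_out = image_bytes / bytes_per_value      # ~1.25e7 output values

# A dense layer needs one weight per (input, output) pair.
n_weights = n_in * n_out
storage_TB = n_weights * bytes_per_value / 1e12

print(f"{n_weights:.2e} weights, ~{storage_TB:.0f} TB of storage")
# thousands of TB for a single dense layer, before any other parameters
```

Even before considering dynamic PET or the rest of the network, the single dense layer is orders of magnitude beyond what can be stored or trained.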
This project therefore seeks to perform fully 3D PET and MR image reconstruction by designing joint ANNs, one for PET and one for MRI, each of which is based on unrolling existing regularised iterative image reconstruction methods, dubbed “variational neural networks” (VNNs) [2]. The project would perform this for both PET and MR reconstruction and, importantly, would interconnect the two networks to deliver synergistic potential, whereby each network can be trained to make use of as much, or as little, of the other modality as is necessary for the best prediction of the desired output reconstructed images. Applying VNNs to PET reconstruction is entirely novel, and linking two VNNs, as well as seeking synergistic benefit by doing so, is also highly innovative.
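To illustrate the unrolling idea, the following minimal numpy sketch turns a fixed number of regularised gradient-descent updates into the layers of a network. The step sizes and regularisation weights stand in for the parameters a VNN would learn from training data; the toy system matrix and simple quadratic prior are illustrative stand-ins, not the PET/MR system models or learned regularisers of the actual project:

```python
import numpy as np

# Toy unrolled reconstruction: each "layer" performs one regularised
# gradient step  x <- x - alpha_k * (A^T (A x - y) + lambda_k * x).
# In a real variational network, alpha_k and lambda_k (and a far richer
# regulariser) would be learned end-to-end from training data.

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10)) / np.sqrt(20)  # toy system matrix
x_true = rng.standard_normal(10)
y = A @ x_true                                    # noise-free toy data

def unrolled_vnn(y, A, alphas, lambdas):
    """Apply one unrolled network: one gradient update per layer."""
    x = np.zeros(A.shape[1])
    for alpha, lam in zip(alphas, lambdas):
        grad = A.T @ (A @ x - y) + lam * x        # data fit + quadratic prior
        x = x - alpha * grad
    return x

# Fixed (untrained) parameters for a 10-layer network; training would
# replace these hand-set values with learned ones.
x_rec = unrolled_vnn(y, A, alphas=[0.5] * 10, lambdas=[0.01] * 10)
print(np.linalg.norm(A @ x_rec - y))  # data residual after 10 layers
```

Because each layer is just an update of a known iterative algorithm, the network inherits the physics of the imaging model rather than having to learn it from scratch, which is precisely what makes unrolling attractive compared with the immense end-to-end dense network discussed above.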

In addition to this novel synergistic joint VNN, generative modelling priors will also be used for both the PET and MR VNNs. It is anticipated that the outcome will be unprecedented levels of synergistic image quality improvements for both PET and MRI. These image quality benefits, while useful in their own right, also have the potential to permit faster, cheaper and safer PET-MR imaging, by shortening scan times, increasing patient throughput and enabling lower amounts of injected radioactivity for standard levels of image quality.
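As a toy illustration of how a generative modelling prior can regularise reconstruction, the sketch below constrains the image to the range of a generator and fits only its low-dimensional latent code to the measured data. The linear "generator" matrix B is a hypothetical stand-in for a trained deep generative model, used here only to keep the example self-contained:

```python
import numpy as np

# Generative-prior reconstruction in miniature: the image is modelled
# as x = G(z) = B z, so an otherwise underdetermined inverse problem
# (more unknowns than measurements) becomes a well-posed fit over the
# low-dimensional latent code z.

rng = np.random.default_rng(1)
A = rng.standard_normal((15, 30)) / np.sqrt(15)   # 15 measurements, 30 unknowns
B = rng.standard_normal((30, 5))                  # "generator": 5 latent dims
z_true = rng.standard_normal(5)
y = A @ (B @ z_true)                              # data from an in-range image

# Least-squares fit of the latent code: z* = argmin_z ||A B z - y||^2.
M = A @ B                                         # 15 x 5: now overdetermined
z_hat, *_ = np.linalg.lstsq(M, y, rcond=None)
x_hat = B @ z_hat                                 # reconstructed image

print(np.linalg.norm(x_hat - B @ z_true))         # near zero in this
                                                  # noise-free toy setting
```

The design point is that the prior does the heavy lifting: 15 measurements cannot determine 30 voxels directly, but they easily determine 5 latent variables, mirroring how a learned generative prior can stabilise reconstruction from noisy or undersampled PET and MR data.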


[1] B. Zhu, J. Z. Liu, S. F. Cauley, B. R. Rosen, and M. S. Rosen, “Image reconstruction by domain-transform manifold learning,” Nature, vol. 555, no. 7697, pp. 487-492, Mar 21, 2018.
[2] K. Hammernik, T. Klatzer, E. Kobler, M. P. Recht, D. K. Sodickson, T. Pock, and F. Knoll, “Learning a variational network for reconstruction of accelerated MRI data,” Magn Reson Med, vol. 79, no. 6, pp. 3055-3071, Jun, 2018.
[3] A. R. De Pierro, “A Modified Expectation Maximization Algorithm for Penalized Likelihood Estimation in Emission Tomography,” IEEE Transactions on Medical Imaging, vol. 14, no. 1, pp. 132-137, Mar, 1995.
[4] A. J. Reader, and J. Verhaeghe, “4D image reconstruction for emission tomography,” Phys Med Biol, 2014.
[5] A. Mehranian, M. A. Belzunce, C. J. McGinnity, A. Bustin, C. Prieto, A. Hammers, and A. J. Reader, “Multi-modal synergistic PET and MR reconstruction using mutually weighted quadratic priors,” Magn Reson Med, Oct 16, 2018.


Figure 1: Illustrative schematic of one of the VNNs, exploiting a deep generative modelling prior at each iterative update (each layer of the deep ANN) and receiving inputs from the MRI VNN to provide synergistically improved reconstructions.


Figure 2: A brief chronology of improvements in PET image reconstruction in recent decades, for which this proposed project should provide the next step forward for PET reconstruction.
