AI-enabled Imaging

Deep learning for diagnosis of congenital heart disease in fetus using MRI and ultrasound.

Project ID: 2020_019

1st Supervisor: Dr Maria Deprez, King’s College London
2nd Supervisor: Dr Andrew King, King’s College London
Clinical Champions: Kuberan Pushparajah, King’s College London, and John Simpson, Evelina London

Aim of the PhD Project:

The aim of this project is to design a deep learning approach for the diagnosis of congenital heart disease using fetal MRI and ultrasound. This includes visualisation of cardiac anatomy from motion-corrected MRI, alignment of ultrasound of the moving heart and blood flow, and interpretable deep learning for prediction of post-natal outcomes.

Project Description / Background:

Congenital heart disease (CHD) is the most common congenital malformation, affecting 8 in 1,000 births. Up to 25% of babies with CHD have a major abnormality [1], and delayed diagnosis is associated with increased mortality [2], [3]. The standard clinical modality for prenatal diagnosis is ultrasound; in many cases, however, the diagnosis may be incomplete due to difficulties in visualising the structure and topology of the major vessels [4], caused by artefacts and variable ultrasound image quality. Recently we have shown that motion-corrected fetal MRI [5] improved diagnostic quality in 90% of cases [6].

However, motion-corrected fetal cardiac MRI has a number of disadvantages. Firstly, the MRI has to be reconstructed from stacks of slices corrupted by inter-slice motion, and major motion, common at around 20 weeks of pregnancy when the exam is required, causes motion correction algorithms to fail. For this reason, the exam is currently postponed until 30 weeks of pregnancy. Secondly, the current pipeline requires time-consuming manual input, such as manual reorientation and segmentation, which prevents translation to routine clinical practice. Finally, even when the motion-corrected 3D MRI is of good quality, its low spatial resolution prevents visualisation of anatomical details such as valves, and it does not capture the movement of the heart.

Ultrasound, on the other hand, though poor for visualising 3D topology, offers higher spatial and temporal resolution, as well as measurement of blood flow in the fetal heart and vessels. We hypothesise that combining anatomical and functional multimodal markers in a common reference frame will open up possibilities for enhanced and more accurate diagnosis and prediction of outcomes for fetuses with CHD.
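To make the slice-to-volume motion correction idea concrete, here is a deliberately simplified 1-D sketch (an illustration of the principle only, not the project's actual 3D rigid/deformable pipeline [5], [10]): it alternates between registering each "slice" to the current volume estimate and re-averaging the aligned slices, so that a sharp reconstruction emerges from motion-corrupted observations.

```python
import numpy as np

def estimate_shift(slice_1d, reference):
    # Circular cross-correlation via FFT; the argmax gives the integer
    # shift of slice_1d relative to the reference (a 1-D stand-in for
    # the rigid slice-to-volume registration step).
    corr = np.fft.ifft(np.fft.fft(slice_1d) * np.conj(np.fft.fft(reference))).real
    k = int(np.argmax(corr))
    n = len(reference)
    return k if k <= n // 2 else k - n

def motion_correct(slices, n_iter=3):
    # Alternate between registering every slice to the current volume
    # estimate and re-averaging the aligned slices.
    volume = np.mean(slices, axis=0)           # initial, motion-blurred estimate
    for _ in range(n_iter):
        aligned = [np.roll(s, -estimate_shift(s, volume)) for s in slices]
        volume = np.mean(aligned, axis=0)
    return volume

# Synthetic example: a sharp 1-D "anatomy" observed through shifted slices.
truth = np.zeros(64)
truth[28:36] = 1.0
shifts = [3, -2, 0, 4, -5, 1, 2]               # simulated inter-slice motion
slices = np.stack([np.roll(truth, s) for s in shifts])

blurred = np.mean(slices, axis=0)              # naive average: motion-blurred
recon = motion_correct(slices)                 # sharp after alignment
```

After alignment the averaged profile recovers the sharp structure (up to a global shift), whereas the naive average is blurred; the real algorithms face the much harder 3D problem with non-integer, non-rigid motion.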

The aim of this project is to design a deep learning approach for diagnosis of CHD at mid-pregnancy that combines the advantages of ultrasound and MRI. The project will progress in four stages:

  1. Development of a deep learning-based motion correction algorithm that allows fully automatic and reliable reconstruction of fetal MRI at around 20 weeks of pregnancy
  2. Development of automatic segmentation and 3D visualisation of the fetal heart in motion-corrected MRI
  3. Registration of fetal ultrasound data with the motion-corrected MRI to facilitate qualitative diagnosis through joint assessment of fetal heart anatomy, motion and blood flow
  4. Interpretable machine learning to discover new complex biomarkers of abnormalities that are currently difficult to diagnose before birth, such as coarctation of the aorta, and to predict post-natal outcomes in these babies.
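As a small illustration of how the automatic segmentations in stage 2 would typically be evaluated against manual delineations, the snippet below computes the Dice similarity coefficient, the standard overlap metric in medical image segmentation (the function and the toy masks are hypothetical examples, not part of the project pipeline):

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    # Dice similarity coefficient between two binary masks:
    # 2 * |A ∩ B| / (|A| + |B|), in [0, 1], 1 = perfect overlap.
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)

# Toy example: two overlapping square "ventricle" masks on an 8x8 grid.
a = np.zeros((8, 8), dtype=bool)
a[2:6, 2:6] = True    # 16 voxels
b = np.zeros((8, 8), dtype=bool)
b[3:7, 3:7] = True    # 16 voxels, intersection = 9 voxels

score = dice(a, b)    # 2 * 9 / (16 + 16) = 0.5625
```

The same measure, computed in 3D, is commonly used as both a validation metric and (in soft form) a training loss for segmentation networks such as U-Net [11].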

This project is suitable for a candidate with a background in Computer Science, Engineering or another technical discipline, who is interested in developing both theoretical and practical skills in deep learning and image analysis, with a strong focus on clinical application and impact in healthcare.


References:

  1. L. D. Botto, A. Correa, and J. D. Erickson, “Racial and Temporal Variations in the Prevalence of Heart Defects,” Pediatrics, 2004.
  2. K. L. Brown, D. A. Ridout, A. Hoskote, L. Verhulst, M. Ricci, and C. Bull, “Delayed diagnosis of congenital heart disease worsens preoperative condition and outcome of surgery in neonates,” Heart, 2006.
  3. M. L. Mazwi et al., “Unplanned reinterventions are associated with postoperative mortality in neonates with critical congenital heart disease,” J. Thorac. Cardiovasc. Surg., 2013.
  4. M. Bensemlali et al., “Discordances Between Pre-Natal and Post-Natal Diagnoses of Congenital Heart Diseases and Impact on Care Strategies,” J. Am. Coll. Cardiol., 2016.
  5. M. Kuklisova-Murgasova, G. Quaghebeur, M. A. Rutherford, J. V. Hajnal, and J. A. Schnabel, “Reconstruction of fetal brain MRI with intensity matching and complete outlier removal,” Med. Image Anal., vol. 16, no. 8, 2012.
  6. D. F. A. Lloyd et al., “Three-dimensional visualisation of the fetal heart using prenatal MRI with motion corrected slice-volume registration,” Lancet, 2018.
  7. S. S. M. Salehi et al., “Real-time automatic fetal brain extraction in fetal MRI by deep learning,” in Proceedings – International Symposium on Biomedical Imaging, 2018.
  8. B. Hou et al., “3-D Reconstruction in Canonical Co-Ordinate Space from Arbitrarily Oriented 2-D Images,” IEEE Trans. Med. Imaging, 2018.
  9. R. Wright et al., “LSTM spatial co-transformer networks for registration of 3D fetal US and MR brain images,” in Lecture Notes in Computer Science, 2018.
  10. A. Uus, T. Zhang, L. Jackson, M. Rutherford, J. V. Hajnal, and M. Deprez, “Deformable Slice-to-Volume Registration for Motion Correction in Fetal Body MRI,” Jun. 2019.
  11. O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” Med. Image Comput. Comput. Interv. (MICCAI 2015), pp. 234–241, 2015.
  12. J. Schlemper et al., “Attention gated networks: Learning to leverage salient regions in medical images,” Med. Image Anal., vol. 53, pp. 197–207, Apr. 2019.
  13. S. Sabour, N. Frosst, and G. E. Hinton, “Dynamic routing between capsules,” in Advances in Neural Information Processing Systems, 2017.
  14. O. Oktay et al., “Anatomically Constrained Neural Networks (ACNNs): Application to Cardiac Image Enhancement and Segmentation,” IEEE Trans. Med. Imaging, 2018.
  15. O. Oktay et al., “Structured decision forests for multi-modal ultrasound image registration,” in Lecture Notes in Computer Science, 2015.
  16. S. S. Mohseni Salehi, S. Khan, D. Erdogmus, and A. Gholipour, “Real-Time Deep Pose Estimation With Geodesic Loss for Image-to-Template Rigid Registration,” IEEE Trans. Med. Imaging, 2019.
  17. I. Grigorescu, L. Cordero-grande, A. D. Edwards, J. Hajnal, and M. Deprez, “Interpretable Convolutional Neural Networks for Preterm Birth Classification,” Proc. Mach. Learn. Res., 2019.