
AI-enabled Imaging

Synergistic Representation Learning for Pancreatic Image Analysis

Project ID: 2021_009

Student: Kate Cevora

1st Supervisor: Wenjia Bai, Imperial College London
2nd Supervisor: Ben Glocker, Imperial College London
Clinical Champion: Andrea Rockall, Imperial College London

Aim of the PhD Project:

  • In this project, we aim to develop novel machine learning approaches for segmentation and analysis of pancreatic images.
  • The project will enable robust and accurate characterisation of pancreatic volumes and shapes, providing quantitative imaging phenotypes for assessment of pancreatic anatomy and identification of pathological features.

Project Description / Background:

Pancreatic cancer is the 6th most common cause of cancer death in the UK, with a 5-year survival rate of only 5% [1]. However, if pancreatic cancer is diagnosed at an early stage, when surgery is still possible, the survival rate can rise to 20% [1], [2]. Early diagnosis is challenging, mainly because symptoms appear only at a late stage and screening tools are still lacking. In this project, we investigate novel machine learning approaches for automated segmentation and analysis of pancreatic anatomy from medical images. The resulting tool will provide an efficient means of extracting quantitative image-based biomarkers and assist clinicians in the diagnosis and assessment of pancreatic diseases.

A number of methods have been proposed for pancreatic image segmentation in recent years. Some are atlas-based, relying on image registration to propagate atlases and then performing label fusion to create the segmentation [3]. A disadvantage of atlas-based methods is that they are computationally expensive, owing to the cost of multiple image registrations. Most recent methods are deep learning-based, training convolutional neural networks to learn the mapping from image to segmentation [4]–[10]. These are computationally faster thanks to GPU acceleration and a one-pass inference process.

State-of-the-art segmentation methods achieve an average Dice overlap of 86.9% for the normal pancreas [4]. For the abnormal pancreas, however, the Dice score can be as low as 38.4% [4], which illustrates the technical challenges of pancreatic image segmentation. These challenges are attributed to several factors. First, the pancreas is small compared with other abdominal organs and occupies only a small proportion of the 3D field of view; neural networks are less sensitive to small objects because of the resulting class imbalance. Second, the pancreas is highly variable in anatomical shape and appearance. Its anatomy is altered by ageing, which causes atrophy, lobulation and fatty degeneration, and in pathological cases it can be further distorted by cysts and tumours. Third, training neural networks requires large datasets, while manually annotated training data are often limited in clinical scenarios.
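For reference, the Dice overlap quoted above measures agreement between a predicted and a ground-truth segmentation mask: twice the size of their intersection divided by the sum of their sizes. A minimal NumPy illustration (not part of the project code) on a toy 2D example:

```python
import numpy as np

def dice_overlap(pred, truth):
    """Dice overlap between two binary segmentation masks.

    Dice = 2 * |A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1.
    """
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denom

# Toy example: two overlapping 4x4 square masks, 16 pixels each,
# sharing a 3x3 overlap of 9 pixels.
a = np.zeros((10, 10), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((10, 10), dtype=bool); b[3:7, 3:7] = True
print(round(dice_overlap(a, b), 4))  # 2*9/(16+16) = 0.5625
```

A small pancreas means the denominator is dominated by very few foreground voxels, so even modest boundary errors drag the score down sharply, which is one reason the abnormal-pancreas figure above is so low.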

To address these challenges, we propose a synergistic representation learning approach to pancreatic image segmentation that improves both robustness and accuracy. The synergy will come from multiple aspects. 1) Synergy between scales: multi-scale semantic information will be incorporated in a joint, coarse-to-fine fashion. 2) Synergy between image features and anatomical priors: anatomical shape priors will be learnt to improve segmentation robustness. 3) Synergy between data: fully-labelled (multi-organ annotation), partially-labelled (pancreas-only annotation) and unannotated data will be utilised for semi- and partially-supervised learning. 4) Synergy between modalities: both CT and MR modalities will be explored for semantic feature learning. 5) Synergy between computer and human: abnormal cases and hard examples will be flagged for humans to review and annotate, enabling human-in-the-loop learning.
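One common way to realise the synergy between fully- and partially-labelled data is to mask the training loss so that a pancreas-only annotation supervises only the classes it actually defines, rather than penalising predictions for unannotated organs. The sketch below (function name and class layout are illustrative assumptions, not the project's actual method) shows the idea with a masked cross-entropy in NumPy:

```python
import numpy as np

def masked_cross_entropy(probs, labels, annotated):
    """Cross-entropy over voxels, counting only annotated classes.

    probs:     (N, C) predicted class probabilities per voxel
    labels:    (N,)   integer ground-truth labels
    annotated: (C,)   boolean mask of classes labelled in this scan
    """
    eps = 1e-12
    # Keep only voxels whose ground-truth class was annotated, so a
    # pancreas-only scan contributes no loss for other organs.
    keep = annotated[labels]
    if not keep.any():
        return 0.0
    ce = -np.log(probs[np.arange(len(labels)), labels] + eps)
    return float(ce[keep].mean())

# Toy example: 3 classes (0=background, 1=pancreas, 2=liver), 4 voxels.
probs = np.array([[0.8, 0.1, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.1, 0.1, 0.8],
                  [0.6, 0.2, 0.2]])
labels = np.array([0, 1, 2, 0])
full = masked_cross_entropy(probs, labels, np.array([True, True, True]))
# Pancreas-only scan: the liver voxel (index 2) is excluded from the loss.
partial = masked_cross_entropy(probs, labels, np.array([True, True, False]))
```

With such a loss, fully-labelled and partially-labelled scans can be mixed freely in one training batch, each supervising only the organs it annotates.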

The output of the project will be an automated tool that can be applied to large-scale datasets for the analysis of pancreatic imaging phenotypes. The candidate is expected to have a background in engineering, computing or the physical sciences.


  1. CRUK, “Pancreatic cancer statistics.” [Online].

  2. W. Muhammad et al., “Pancreatic cancer prediction through an artificial neural network,” Front. Artif. Intell., vol. 2, 2019.

  3. R. Wolz, C. Chu, K. Misawa, M. Fujiwara, K. Mori, and D. Rueckert, “Automated abdominal multi-organ segmentation with subject-specific atlas generation,” IEEE Trans. Med. Imaging, vol. 32, no. 9, 2013.

  4. Z. Zhu, Y. Xia, L. Xie, E. K. Fishman, and A. L. Yuille, “Multi-scale coarse-to-fine segmentation for screening pancreatic ductal adenocarcinoma,” in MICCAI, 2019.

  5. Y. Zhou et al., “Prior-aware neural network for partially-supervised multi-organ segmentation,” in ICCV, 2019.

  6. V. V. Valindria et al., “Small organ segmentation in whole-body MRI using a two-stage FCN and weighting schemes,” in MICCAI MLMI Workshop, 2018.

  7. O. Oktay et al., “Attention U-net: Learning where to look for the pancreas,” in MIDL, 2018.

  8. H. R. Roth, L. Lu, A. Farag, A. Sohn, and R. M. Summers, “Spatial aggregation of holistically-nested networks for automated pancreas segmentation,” in MICCAI, 2016.

  9. A. T. Bagur, G. Ridgway, J. McGonigle, S. M. Brady, and D. Bulte, “Pancreas segmentation-derived biomarkers: volume and shape metrics in the UK Biobank imaging study,” in MIUA, 2020, pp. 131–142.

  10. P. Hu et al., “Automatic pancreas segmentation in CT images with distance-based saliency-aware DenseASPP network,” IEEE J. Biomed. Health Inform., pp. 1–1, 2020.

Figure 1: Zoomed view of the pancreas.
