
AI-enabled Imaging

Active and continual learning strategies for deep learning assisted interactive segmentation of new databases

Project ID: 2020_028

Student: Theodore Barfoot

1st Supervisor: Tom Vercauteren, King’s College London
2nd Supervisor: Ben Glocker, Imperial College London
Clinical Champion: Jonathan Shapey, University College London

Aim of the PhD Project:

  • Develop interactive deep learning approaches to continually segment image databases for which no previously annotated training data exist
  • Design active learning strategies to retrieve, on the fly, the cases whose manual segmentation will be most informative for continual learning
  • Create annotation tools that support accelerated adoption of AI for new applications

Project Description / Background:

Recent progress in machine learning and artificial intelligence has enabled tools that assist clinicians in exploiting and quantifying clinical data, including images, textual reports and genetic information. State-of-the-art algorithms are becoming mature enough to provide automated analysis when given enough high-quality training data and when applied to well-controlled clinical studies and trials [1], [2]. However, producing manual voxel-accurate medical image segmentation labels is tedious, time-consuming and costly, as it usually requires profound radiological expertise. Data annotation is thus often the rate-limiting factor in the development of application-specific deep learning based image segmentation solutions.

In this project, we will focus on designing machine learning approaches to assist and accelerate the manual segmentation of structures of interest across a database, potentially starting from scratch. Adapting deep learning to support new applications while reducing the burden of collecting and annotating training datasets remains an active research area [7]. This topic shares challenges with domain adaptation [3], for example when trying to limit the number of new annotations required as new generations of scanners are rolled out. Naively applying a pre-trained model to an imaging source that differs, even slightly, from the one used to acquire the training data often results in dramatic failures. New annotations are then required to confidently bridge the domain gap and validate the performance of domain adaptation techniques.

In such cases, clinicians are typically left with fully manual or generic interactive methods to delineate structures of interest. Interactive deep learning methodologies are emerging that combine rich prior knowledge embedded in retrospective data from previous patients with as-sparse-as-possible annotations provided by clinicians [4], [5]. Yet these techniques do not currently continue to learn and improve as they are used on new cases. Concurrently, algorithms have been designed to exploit weak labels annotated across a dataset to train deep neural networks [6]. Again, these methods require manually segmented ground truth for validation purposes and do not learn from being presented with new cases.
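To make the sparse-annotation idea concrete, one common building block is a loss evaluated only at the voxels a clinician has scribbled on, so the network receives a training signal without a full segmentation mask. The NumPy sketch below illustrates the principle only; the function name and array layout are assumptions for this example and do not reproduce the specific methods of [4], [5].

```python
import numpy as np

def sparse_scribble_loss(probs, scribbles):
    """Cross-entropy computed only where the user drew scribbles.

    probs: (n_classes, n_voxels) softmax output of the network.
    scribbles: (n_voxels,) integer class labels, with -1 marking
    voxels the user left unannotated (these contribute no loss).
    """
    eps = 1e-12
    annotated = scribbles >= 0
    if not annotated.any():
        return 0.0  # no scribbles yet: nothing to learn from
    voxel_idx = np.flatnonzero(annotated)
    # Probability assigned to the user-indicated class at each scribbled voxel.
    picked = probs[scribbles[voxel_idx], voxel_idx]
    return float(-np.log(picked + eps).mean())
```

In a full interactive pipeline, such a term would typically be combined with a regulariser over the unannotated voxels; here only the sparse supervision term is shown.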

This project will consider the problem of gradually annotating an image segmentation dataset with potentially very little prior knowledge. This is a timely research question that very few machine learning works have considered so far, as illustrated in a recent review [7]. Industrial companies are starting to address it by optimising classical interactive segmentation (see e.g. ImFusion Labels) or by restricting the learning to a fixed set of pre-defined organs (see e.g. NVIDIA Fast AI Assisted Annotation). A pioneer in this field is Cosmonio, which has built a product for interactive deep learning, NOUS AI. This PhD project will provide the opportunity for close collaboration with Cosmonio, for example via a research placement.

Building on the supervisors’ experience, notably in interactive deep learning and domain adaptation, this project will explore continual learning to create an interactive deep learning segmentation model whose performance increases as new images are annotated. This will be coupled with active learning strategies to accelerate training and reduce the annotation time required for the end-user to segment the entire database. Vestibular schwannoma segmentation [8] will serve as a motivating application throughout the project.
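A common starting point for such an active learning strategy is uncertainty sampling: rank the unannotated cases by the mean voxel-wise entropy of the current model's predictions and offer the most uncertain case to the annotator first. The sketch below is a minimal NumPy illustration of that idea; the function names and the softmax-array layout are assumptions for this example, not part of the project description.

```python
import numpy as np

def predictive_entropy(probs):
    """Per-voxel entropy of softmax probabilities, shape (n_classes, ...)."""
    eps = 1e-12
    return -np.sum(probs * np.log(probs + eps), axis=0)

def rank_cases_by_uncertainty(prob_maps):
    """Order unannotated cases so the case with the highest mean
    voxel-wise entropy comes first -- the next candidate for
    manual segmentation in the active learning loop."""
    scores = [float(predictive_entropy(p).mean()) for p in prob_maps]
    return sorted(range(len(scores)), key=lambda i: -scores[i])
```

In the continual learning loop envisaged here, the top-ranked case would be segmented by the clinician, the model updated on it, and the ranking recomputed; mean entropy is only one possible acquisition score, and alternatives (e.g. ensemble disagreement) fit the same interface.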


  1. A. L. Simpson et al., “A large annotated medical image dataset for the development and evaluation of segmentation algorithms,” arXiv:1902.09063, 2019.
  2. E. Gibson et al., “NiftyNet: a deep-learning platform for medical imaging,” Comput. Methods Programs Biomed., vol. 158, pp. 113–122, 2018.
  3. K. Kamnitsas et al., “Unsupervised Domain Adaptation in Brain Lesion Segmentation with Adversarial Networks,” in Information Processing in Medical Imaging, 2017, pp. 597–609.
  4. G. Wang et al., “DeepIGeoS: A deep interactive geodesic framework for medical image segmentation,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 41, no. 7, pp. 1559–1572, Jul. 2019.
  5. G. Wang et al., “Interactive medical image segmentation using deep learning with image-specific fine-tuning,” IEEE Trans. Med. Imag., vol. 37, no. 7, pp. 1562–1573, Jul. 2018.
  6. M. Rajchl et al., “DeepCut: Object Segmentation From Bounding Box Annotations Using Convolutional Neural Networks,” IEEE Trans. Med. Imaging, vol. 36, no. 2, pp. 674–683, Feb. 2017.
  7. S. Budd et al., “A Survey on Active Learning and Human-in-the-Loop Deep Learning for Medical Image Analysis,” arXiv:1910.02923, 2019.
  8. G. Wang et al., “Automatic Segmentation of Vestibular Schwannoma from T2-Weighted MRI by Deep Spatial Attention with Hardness-Weighted Loss,” in International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 264–272, Springer, 2019.
