Automated quality control and semantic parsing in multi-modal imaging

Project ID: 2015_XXX

Student: Rob Robinson

1st supervisor: Ben Glocker, Imperial College London
2nd supervisor: Daniel Rueckert, Imperial College London
Industry supervisor: Chris Page, GlaxoSmithKline

Quality control (QC) is currently a time-consuming and costly process that requires experts to visually inspect large amounts of image data in their entirety. The process is prone to error, and subjective assessment criteria cannot guarantee a consistently standardised outcome. This is a particular issue in large-scale imaging studies, where hundreds or even thousands of subjects are scanned.

Automated QC for imaging is still a relatively new area of research, and little effort has so far been put into developing robust, computerised methods. In contrast, a number of approaches have been developed for semantic parsing of medical images. Semantic parsing is the process of automatically ‘understanding’ what is inside an image: examples include the localisation and identification of all visible major organs [1], the automatic detection of contrast agent [2], and the extraction of the spinal column to obtain patient-specific coordinate systems [3,4]. These methods rely on a common framework of supervised machine learning, in which a set of images with expert annotations is used to train statistical predictors. The trained predictors are then used to automatically extract semantic information from new image data.
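The supervised framework described above — fitting a predictor to expert-annotated examples, then applying it to unseen images — can be sketched in miniature. The toy example below (all feature values, labels, and the choice of a nearest-centroid classifier are illustrative, not taken from the cited work) reduces each image to a small hand-crafted feature vector and learns one mean vector per anatomical label:

```python
from statistics import mean

# Toy "annotated training set": each image is reduced to a feature vector
# (e.g. mean intensity, gradient magnitude), paired with an expert label.
# All values are illustrative, not real imaging data.
train_features = [
    ([0.9, 0.2], "liver"),
    ([0.8, 0.3], "liver"),
    ([0.1, 0.7], "lung"),
    ([0.2, 0.8], "lung"),
]

def fit_centroids(data):
    """Train a nearest-centroid predictor: one mean feature vector per label."""
    by_label = {}
    for features, label in data:
        by_label.setdefault(label, []).append(features)
    return {
        label: [mean(dim) for dim in zip(*vectors)]
        for label, vectors in by_label.items()
    }

def predict(centroids, features):
    """Assign the label whose centroid is closest (squared Euclidean distance)."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(centroids, key=lambda label: dist(centroids[label]))

model = fit_centroids(train_features)
print(predict(model, [0.85, 0.25]))  # prints "liver"
```

The methods cited in [1–4] use far richer predictors (e.g. decision forests) and learned features, but the train-on-annotations / predict-on-new-data loop is the same.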

In this PhD project, the aim is to further develop semantic parsing methods that are robust and efficient and can be applied to large sets of multi-modal image data. While a large body of previous work has focused on CT images, an important extension is to make those methods work on MRI, ultrasound, and even functional imaging. Further, an effective mapping from semantic information to QC criteria needs to be developed. The hypothesis of this project is that fully understanding which structures of interest are visible in an image, and where, enables a reliable, objective, and cost-effective approach to automated QC for large-scale clinical trials and imaging studies.
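One simple way such a mapping from semantic information to QC criteria could look is a rule-based check: a scan passes only if the semantic parser found every structure the study protocol requires. This is a hypothetical sketch (the required-organ list and function names are invented for illustration), not the project's actual method:

```python
# Hypothetical study protocol: organs that must be detected for a scan
# to pass QC. The set is illustrative only.
REQUIRED_ORGANS = {"liver", "spleen", "kidneys"}

def qc_check(detected_organs):
    """Map one scan's semantic parsing output to a QC decision.

    Returns (passed, missing): passed is True only when every required
    organ was detected; missing lists any absent organs for the report.
    """
    missing = REQUIRED_ORGANS - set(detected_organs)
    return (not missing, sorted(missing))

print(qc_check({"liver", "spleen", "kidneys"}))  # (True, [])
print(qc_check({"liver"}))                       # (False, ['kidneys', 'spleen'])
```

In practice the mapping would also need spatial and quality cues (field of view, contrast, artefacts), but even this binary rule shows how parsing output can drive an objective, repeatable QC decision.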

[1] A. Criminisi, et al., Anatomy Detection and Localization in 3D Medical Images, in Decision Forests for Computer Vision and Medical Image Analysis, pp.193–209, 2013
[2] A. Criminisi, et al., A Discriminative-Generative Model for Detecting Intravenous Contrast in CT Images, in MICCAI, 2011
[3] B. Glocker, et al., Vertebrae Localization in Pathological Spine CT via Dense Classification from Sparse Annotations, in MICCAI, 2013
[4] B. Glocker, et al., Automatic Localization and Identification of Vertebrae in Arbitrary Field-of-View CT Scans, in MICCAI, 2012
