AI-enabled Imaging

Restoring Sight with Image-Guided Regenerative Therapy Delivery

Project ID: 2021_023

1st Supervisor: Christos Bergeles, King’s College London
2nd Supervisor: Tom Vercauteren, King’s College London
Clinical Champion: Lyndon Da Cruz, King’s College London

Aim of the PhD Project:

Gene therapy and cellular therapy are emerging as transformative regenerative treatments for severe, blinding retinal diseases. Delivering these treatments into specific and delicate retinal tissue layers, some as thin as 10-20 µm, requires precision at and beyond the limits of human perception and dexterity. Retinal tissues therefore need to be visualised with high-resolution imaging and manipulated dexterously with robotic assistance to optimise outcomes.

Our team is a pioneer in the development of micro-surgical robotic systems that operate under image guidance to offer dexterous assistance and decision support during the injection of therapies into the retina.

This PhD project will research unsupervised learning methods for keyframe identification and loop closure in surgical scene tracking, alongside uncertainty estimation and imaging system control. The student will create a tracking framework and robotic imaging interface that stabilises the team's robot at desired retinal locations for at least 60 seconds, ensuring prolonged injection of therapies at safe flow rates. Images are acquired via stereo biomicroscopy, which visualises the retina en face, while intraoperative Optical Coherence Tomography (iOCT) provides high-resolution slices of the retinal layers.

During injection, a therapy bolus forms within the retina. The bolus manifests as a bulge in the 2D biomicroscopy images of the retinal surface and can be accounted for with frame-to-frame tracking. The tracking output can be fed to the robot controller to guide the robot's tip, and to the iOCT controller for stable robotic imaging of the injection target.
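To make this tracking loop concrete, below is a minimal Python sketch that follows a surgeon-selected injection point across en-face biomicroscopy frames. It uses classical pyramidal Lucas-Kanade optical flow (via OpenCV) as a stand-in for the team's learned retinal flow network, and the controller calls are hypothetical placeholders, not the project's actual interfaces.

```python
# Sketch: frame-to-frame tracking of an injection target on en-face video.
# Lucas-Kanade flow stands in for the learned retinal flow network;
# the controller functions are hypothetical and shown only as comments.
import cv2
import numpy as np

def track_injection_target(video_path, target_xy):
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        raise IOError("could not read first frame")
    prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Single tracked point: the injection site chosen by the surgeon.
    points = np.array([[target_xy]], dtype=np.float32)  # shape (1, 1, 2)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Pyramidal Lucas-Kanade flow from the previous frame to this one.
        new_points, status, _err = cv2.calcOpticalFlowPyrLK(
            prev_gray, gray, points, None, winSize=(31, 31), maxLevel=3)
        if status[0, 0] == 1:
            points = new_points
            x, y = points[0, 0]
            # Feed the tracked location to the robot and iOCT controllers
            # (hypothetical interfaces, named here for illustration only):
            # send_to_robot_controller(x, y)
            # send_to_ioct_controller(x, y)
        prev_gray = gray
    cap.release()
```

In a real system the per-frame flow would come from the learned network rather than Lucas-Kanade, but the control loop structure is the same: estimate motion, update the target location, forward it to the robot and iOCT controllers.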

Within this PhD project, the following will be explored:

  • (MRes year): Segmentation will identify the robotic tool to be controlled. Tool segmentation masks will become an additional output of our existing retinal flow network, emphasising multi-task learning approaches (see the first sketch after this list). Training will use synthetic datasets, building on our existing, validated approach.
  • (PhD year 1): Keyframe identification will be added to the tracking framework. Keyframes are “anchor points” from which the algorithm creates parallel tracking branches that can be compared and merged for robustness. Research indicates that keyframes can be learned in an unsupervised fashion, which is key for medical imaging applications where annotation resources are scarce (see the keyframe/loop-closure sketch after this list).
  • (PhD year 2): Loop closure and relocalisation will be researched. These pertain to recognising that a keyframe area has already been visited, thereby reducing drift and enabling the parallel tracking branches to merge. Relocalisation adds robustness against tracking failures, e.g. due to poor illumination, and enables restarting from a known state.
  • (PhD year 2-3): Uncertainty handling: estimating the uncertainty associated with flow predictions will quantify the robustness of tracking and allow the framework to skip frames with high uncertainty, e.g. due to excessive motion blur, thereby reducing drift (see the final sketch after this list).
  • (PhD year 2-3): Robotic iOCT imaging: tracking outputs will be fed to the iOCT scanning controller to stabilise layer visualisation at predefined therapy injection points, despite physiological and surgeon-induced motion of the eye.
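As a first sketch for the MRes-year item, the snippet below shows the standard hard-parameter-sharing recipe for multi-task learning: one shared encoder with a dense optical-flow head and a tool-segmentation head, trained with a weighted joint loss. The architecture, layer sizes, and loss weighting are illustrative assumptions, not the team's actual flow network.

```python
# Sketch: multi-task network with a shared encoder, a flow head, and a
# tool-segmentation head. Layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class FlowAndToolNet(nn.Module):
    def __init__(self, in_channels=6):  # two stacked RGB frames
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # Flow head: 2 channels (dx, dy) per pixel.
        self.flow_head = nn.Conv2d(64, 2, 3, padding=1)
        # Segmentation head: 1 logit per pixel for the tool mask.
        self.seg_head = nn.Conv2d(64, 1, 3, padding=1)

    def forward(self, frame_pair):
        features = self.encoder(frame_pair)
        return self.flow_head(features), self.seg_head(features)

def multi_task_loss(flow_pred, flow_gt, seg_logits, seg_gt, seg_weight=0.5):
    # Weighted sum of the two task losses: the usual recipe for
    # hard-parameter-sharing multi-task training.
    flow_loss = nn.functional.l1_loss(flow_pred, flow_gt)
    seg_loss = nn.functional.binary_cross_entropy_with_logits(seg_logits, seg_gt)
    return flow_loss + seg_weight * seg_loss
```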
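For the keyframe and loop-closure items (PhD years 1-2), one common formulation compares frame embeddings: a frame far from all stored keyframes becomes a new anchor, while a frame close to an old, non-recent keyframe signals a loop closure. The sketch below assumes an embedding function; the global-average placeholder here is purely illustrative, whereas the project would learn this representation without annotations.

```python
# Sketch: embedding-distance keyframe selection and loop-closure detection.
# `embed` is a placeholder for an unsupervised learned encoder (assumption).
import torch

def embed(frame):
    # Placeholder embedding: per-channel global average over pixels.
    return frame.mean(dim=(-2, -1)).flatten()

def update_keyframes(frame, keyframes, new_thresh=0.5, loop_thresh=0.1):
    """Return (keyframes, index of loop-closed keyframe or None)."""
    z = embed(frame)
    if not keyframes:
        return [z], None
    dists = torch.stack([torch.norm(z - k) for k in keyframes])
    nearest = int(torch.argmin(dists))
    # Matching an old (non-recent) keyframe means the area was already
    # visited: a loop closure, so parallel branches can merge and drift
    # can be corrected.
    if dists[nearest] < loop_thresh and nearest < len(keyframes) - 1:
        return keyframes, nearest
    if dists.min() > new_thresh:
        # Scene has changed enough to anchor a new keyframe.
        keyframes.append(z)
    return keyframes, None
```

Relocalisation reuses the same machinery: after a tracking failure, the current frame is matched against the stored keyframes to restart from a known state.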
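Finally, for the uncertainty-handling item, the sketch below shows uncertainty-gated frame skipping. It assumes the flow network also emits a per-pixel uncertainty map (e.g. a predicted variance from a heteroscedastic regression head); that output and the threshold value are assumptions for illustration.

```python
# Sketch: skip frames whose mean predicted flow uncertainty is too high
# (e.g. due to motion blur), so they cannot inject drift into the track.
import numpy as np

def filter_uncertain_frames(flow_maps, uncertainty_maps, max_mean_uncertainty=0.2):
    """Yield (index, flow) only for frames with acceptable uncertainty."""
    for i, (flow, unc) in enumerate(zip(flow_maps, uncertainty_maps)):
        if float(np.mean(unc)) <= max_mean_uncertainty:
            yield i, flow
        # else: frame skipped; the tracker keeps its last confident estimate
```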

The project suits a student with a computer science background and an interest in deep learning and surgical interventions.

Figure 1: Example of the proposed tracking framework applied to a pair of sequential frames. Optical flow and tracking allow points to be followed through the video stream, while tool segmentation ultimately enables robotic imaging using intraoperative Optical Coherence Tomography.