Student: Jeremy Birch
Chip-on-tip steerable endoscopes are rapidly improving thanks to tandem innovation in robotics and highly integrated cameras. With diameters below 1.5 mm, steerable endoscopes can navigate the human body to reach previously inaccessible locations. Safe and effective navigation, however, requires their simultaneous localisation and mapping (SLAM) as they traverse endoluminal cavities. The aim of this project is to research SLAM algorithms that interpret images from chip-on-tip endoscopes in real time, especially within deformable anatomy. The student will investigate novel direct SLAM approaches that interpret image texture without relying on salient features, which are hard to identify in anatomical images. They will join a growing image-guided robotics research team that develops next-generation flexible robotic systems for regenerative medicine applications. The team’s primary research area is the navigation of the orbital cavity as an entry window to the brain – a novel clinical application giving rise to stimulating computer vision challenges.
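To give a flavour of the "direct" idea mentioned above, the toy sketch below (a 1-D analogue for illustration only, not the project's method) estimates motion by minimising photometric error over raw intensities, with no feature detection step; the function name and setup are illustrative assumptions.

```python
import numpy as np

def photometric_shift(ref, cur, max_shift=10):
    """Estimate the integer translation between two 1-D intensity
    profiles by directly minimising the photometric (sum-of-squared-
    differences) error over all candidate shifts -- no features are
    ever extracted, only raw image texture is used."""
    best_shift, best_err = 0, np.inf
    n = len(ref)
    for s in range(-max_shift, max_shift + 1):
        # Compare only the region where the two profiles overlap
        # after shifting `cur` by s samples.
        if s >= 0:
            a, b = ref[s:], cur[:n - s]
        else:
            a, b = ref[:n + s], cur[-s:]
        err = np.mean((a - b) ** 2)
        if err < best_err:
            best_err, best_shift = err, s
    return best_shift
```

Full direct SLAM systems generalise this idea to 2-D images and 6-DoF camera poses, but the principle is the same: texture itself, rather than a sparse set of detected features, drives the alignment.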
Our perception of the world is primarily image-based, and loss of sight is detrimental to quality of life. The population affected by ophthalmic pathologies leading to blindness is vast, with diseases affecting the photoreceptors/retina being exceptionally prevalent. Worldwide, more than 50% of sight-loss cases are attributed to age-related macular degeneration (AMD) (Congdon et al. 2004), while 16 million people suffer from retinal vein occlusion (RVO) (Rogers et al. 2010). With shifting demographics and the susceptibility of the elderly to eye disease, innovation in ophthalmic treatment is imperative.
We are developing an image-guided multi-arm flexible robot that can provide an ultra-minimally invasive approach to reaching the posterior part of the eye cavity for the deployment of regenerative medicine therapeutics and thrombolytic agents: instead of entering the eye, the proposed snake robot will navigate peri-ocularly, flexing around the eye globe, between the orbital muscles, to reach the area of treatment posteriorly.
Our research team is already developing this robot, and we are seeking a student with interest and competency in computer vision to complement our endeavour. An integral component of steering robots through tight, sensitive spaces such as the orbital cavity is environmental perception to support advanced navigation. Despite the wealth of pre-operative and intraoperative multi-scale, multi-modal image data available for eye surgery, attempts to process these data for safe and effective robot navigation remain surprisingly limited.
This project will deliver a navigation platform based on the interpretation of images acquired from chip-on-tip cameras mounted on the developed flexible robot. These images will be leveraged in a Simultaneous Localisation and Mapping (SLAM) framework that accounts both for the limited texture of anatomical images and for the inherently lower image resolution that chip-on-tip cameras provide.
SLAM involves the identification of robust features in the acquired images and, by observing the motion of those features across frames, the creation of a 3D map of the environment while simultaneously estimating the camera's location within that map. Even though SLAM has been examined for endoscopic procedures, it is unexplored in the context of miniaturised cameras. Further, researchers have never had access to images arising from peri-ocularly navigated instruments. Hence, it is a topic of significant research potential that may lead to novel, robust SLAM algorithms, potentially among the first to exploit advances in Deep Learning in this research domain. With the added requirement of handling anatomical deformation, the proposed project will not only lead to exciting computer vision developments but will also endow the student with a solid set of skills to pursue a research career in industry or academia.
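The map-building step described above can be sketched in miniature. The snippet below (an illustrative sketch, not the project's pipeline) shows linear (DLT) triangulation: given one feature matched between two views with known camera projection matrices, its 3D position is recovered; the function name and the two-view setup are assumptions for illustration.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one landmark from two views.

    P1, P2 : 3x4 camera projection matrices (homogeneous 3D -> image).
    x1, x2 : 2-D observations of the same feature in each view.
    Returns the estimated 3-D point as a length-3 array.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous 3-D point X: x * (P[2] @ X) = P[0] @ X, etc.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The point is the null vector of A, found via SVD.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

A full SLAM system repeats this over many features and frames, jointly refining the map and the camera trajectory; the challenge this project addresses is doing so when features are scarce, images are low-resolution, and the anatomy itself deforms.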