1st Supervisor: Dr Tomoki Arichi, King’s College London
2nd Supervisor: Prof Jo Hajnal, King’s College London
Aims of the Project
- Establish seamless integration between visual attention, interaction, and stimuli in a VR environment
- Establish robust information exchange and control pathways between the VR system and the MRI scanner
- Explore adaptations needed for vulnerable subjects, e.g., children
- Perform a VR-enabled adaptive fMRI study of higher cognitive functioning
Lay Summary
Functional MRI (fMRI) is currently the tool of choice for studying activity in the human brain. However, over the last 30 years, the typical fMRI experimental design, in which the researcher identifies how the MR signal changes as the subject performs a rigidly structured task, has remained essentially unchanged. Whilst this approach has provided numerous insights into brain activity and its intrinsic organisation, the constraints imposed by the MR environment and hardware mean that current approaches amount to a crude juxtaposition of tightly controlled stimulus presentation, response monitoring and imaging data. This limits inference about complex facets of human behaviour, as it both constrains the range of cognitive skills that can be studied and restricts studies to subjects who can understand and follow instructions. This is compounded by the noisy and claustrophobic MRI scanner environment, which can induce stress and makes it challenging to image vulnerable populations such as children.
With these factors in mind, we have developed a novel MR-compatible virtual reality (VR) system which can provide users with an interactive simulated environment whilst they lie inside the MRI scanner. Subjects are fully immersed in the visual environment via an MR-compatible projector placed inside the scanner bore, projecting directly into a VR headset, and receive auditory stimulation via active noise-cancelling headphones. A key feature is a pair of MR-compatible cameras inside the headset that provide real-time information about visual behaviour and head position. Together, this means not only that subjects are fully immersed in a new environment, but also that their visual engagement and attention can be characterised using a robust gaze estimation algorithm. In addition, it allows subjects to use their gaze as an intuitive and natural means of communication, as they would in their daily lives.
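To make the gaze estimation step concrete, the sketch below shows a minimal, generic pupil-tracking-and-calibration pipeline in Python (OpenCV and NumPy). It is an illustrative assumption, not the project's actual algorithm: the function names, the fixed threshold, and the second-order polynomial calibration model are all hypothetical stand-ins for the robust method described above.

```python
import numpy as np
import cv2


def detect_pupil_centre(eye_frame):
    """Estimate the pupil centre in a greyscale eye-camera frame.

    Simple threshold-and-contour approach: under IR illumination the
    pupil is typically the darkest compact blob in the image.
    """
    blurred = cv2.GaussianBlur(eye_frame, (7, 7), 0)
    # Fixed dark-region threshold; a stand-in for adaptive tuning.
    _, mask = cv2.threshold(blurred, 40, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    pupil = max(contours, key=cv2.contourArea)  # largest dark blob
    m = cv2.moments(pupil)
    if m["m00"] == 0:
        return None
    return np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])


def fit_calibration(pupil_xy, target_xy):
    """Fit a 2nd-order polynomial map from pupil position to gaze point,
    using calibration trials where the subject fixates known targets."""
    x, y = pupil_xy[:, 0], pupil_xy[:, 1]
    # Design matrix of polynomial terms: [1, x, y, xy, x^2, y^2]
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    coeffs, *_ = np.linalg.lstsq(A, target_xy, rcond=None)
    return coeffs  # shape (6, 2): pupil features -> (gaze_x, gaze_y)


def estimate_gaze(pupil, coeffs):
    """Map a detected pupil centre to an on-screen gaze coordinate."""
    x, y = pupil
    features = np.array([1.0, x, y, x * y, x**2, y**2])
    return features @ coeffs
```

In a working system this loop would run per camera frame, with the calibration refitted whenever the headset shifts; the same pupil positions can double as an input channel, e.g. dwell-based selection of objects in the VR scene.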
Acquiring fMRI data whilst a subject naturally explores and interacts with the VR environment has huge potential to create a transformative new platform for fMRI-based interrogation of brain activity which, at its core, overcomes the aforementioned limitations inherent to traditional rigid study designs. This project will focus on the fusion of the novel VR system and environment with fMRI, by developing a comprehensive framework that can precisely combine two rich, but independent, data streams. This will involve combining creative engineering and physics solutions with state-of-the-art signal processing, computer vision and machine learning methods. This will then allow precise characterisation of the patterns of brain activity involved in eye movement control, spatial navigation, and visual and auditory processing. In addition, it will enable, for the first time, detailed fMRI studies of the brain processing underlying fundamental (but hitherto poorly understood) higher-level cognitive processes including attention, social communication, memory, and reward behaviour in adults and children. Furthermore, using real-time fMRI analysis, patterns of brain activity can be fed back into the VR environment and the patterns of stimulation adapted to reinforce or modulate the activity further. Together, this will not only provide marked new insight into the brain processing underlying complex human behaviour, but also have enormous implications for understanding conditions such as autism.
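As an illustration of the closed loop described above, the following Python sketch shows how a real-time fMRI analysis might drive the VR stimulus: volumes are consumed as they are reconstructed, a region-of-interest signal is compared against a resting baseline, and a bounded feedback value is handed to the VR engine. Everything here (the stream interface, baseline logic, and feedback mapping) is a hypothetical skeleton under stated assumptions, not the project's implementation.

```python
import numpy as np


def roi_signal(volume, roi_mask):
    """Mean BOLD signal within a region of interest for one fMRI volume."""
    return float(volume[roi_mask].mean())


def neurofeedback_loop(volume_stream, roi_mask, baseline_vols=20):
    """Skeleton of a real-time fMRI -> VR feedback loop.

    `volume_stream` yields reconstructed 3D volumes as they arrive
    (e.g. one per TR). The yielded feedback value in (0, 1) would be
    sent to the VR engine to modulate the stimulus.
    """
    history = []
    for t, volume in enumerate(volume_stream):
        signal = roi_signal(volume, roi_mask)
        history.append(signal)
        if t < baseline_vols:
            continue  # accumulate a resting baseline first
        baseline = np.mean(history[:baseline_vols])
        scale = np.std(history[:baseline_vols]) or 1.0
        # Baseline-normalised signal change, squashed to (0, 1)
        feedback = 1.0 / (1.0 + np.exp(-(signal - baseline) / scale))
        yield t, feedback  # consumed by the VR engine each TR
```

In practice the feedback value would be polled once per TR by the VR engine and mapped onto a stimulus parameter (for example, the salience of a social cue or a reward signal), closing the loop so that stimulation can reinforce or modulate the measured activity.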