Aim of the PhD Project:
- Develop machine learning methods to detect and classify patterns of brain activity in simultaneous EEG-fMRI data.
- Develop novel methods for addressing artefacts inherent to EEG-fMRI data.
- Validate these methods on ultra-high field (7 Tesla) imaging of epilepsy and on studies of brain development in early infancy.
It is known that neural activity is highly dynamic across the day, rapidly evolves during early development, and is coordinated and distributed within large-scale networks across the brain. Importantly, abnormalities in these properties are thought to play a critical role in the pathophysiology underlying pervasive and difficult-to-treat neurological disorders such as epilepsy and neurodevelopmental conditions (e.g., autism and ADHD). As such, there is a clear need for new tools capable of differentiating patterns of normal and abnormal brain activity, so as to accurately predict these disease states.
Studies of brain function are typically performed off-line, with data acquired using a single functional neuroimaging method, either functional Magnetic Resonance Imaging (fMRI) or electroencephalography (EEG). Each method has its limitations: fMRI offers high spatial resolution but low temporal resolution, and measures brain activity indirectly as changes in local blood flow and oxygenation; EEG records neuronal activity directly and with high temporal resolution, but because it is measured at the scalp it has poor spatial resolution and is relatively insensitive to activity from deeper brain regions. Acquiring simultaneous EEG-fMRI data therefore holds great potential: the strengths of the two modalities are complementary, so in principle joint imaging can provide rich data with both high temporal and high spatial resolution, with full brain coverage.
Unfortunately, analysis of simultaneous EEG-fMRI data is challenging due to artefacts caused by the interaction of each imaging modality with the other, specifically the interaction of the magnetic and electric fields, which adds noise to the EEG time series and geometric distortions to the fMRI. These artefacts are usually addressed by discarding data if severe, or corrected off-line through simple regression or subtraction methods built into manufacturer software. Such methods are generic and lack the flexibility to adapt to bespoke applications such as (1) early human infancy, where massive changes in regional brain growth, tissue contrast, cardiovascular physiology, and behaviour create additional sources of age-dependent variation; and (2) ultra-high field (7 Tesla) acquisition, where the potential for enormous gains in fMRI resolution is offset by the stronger artefacts induced in the EEG-fMRI data.
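To illustrate the kind of simple subtraction method referred to above, the following is a minimal sketch of average-artefact subtraction for a periodic gradient artefact in EEG, run on synthetic data. The function name, signal parameters, and data are illustrative only, not drawn from the project or any vendor's software.

```python
import numpy as np

def average_artefact_subtraction(eeg, period_samples):
    """Remove a periodic (e.g. gradient) artefact by averaging the
    signal over artefact epochs to form a template, then subtracting
    that template from every epoch."""
    n_epochs = len(eeg) // period_samples
    epochs = eeg[: n_epochs * period_samples].reshape(n_epochs, period_samples)
    template = epochs.mean(axis=0)   # average artefact template
    cleaned = epochs - template      # subtract template from each epoch
    return cleaned.ravel()

# Synthetic example: a slow "neural" signal plus a strong artefact
# that repeats exactly every 200 samples (1 kHz sampling assumed).
fs, period = 1000, 200
t = np.arange(10 * period) / fs
neural = 0.5 * np.sin(2 * np.pi * 1.0 * t)      # 1 Hz underlying signal
artefact = 3.0 * np.sin(2 * np.pi * 25.0 * t)   # period = 40 ms = 40 samples
cleaned = average_artefact_subtraction(neural + artefact, period)
```

Because the artefact repeats exactly once per epoch while the neural signal averages out across epochs, the template converges to the artefact and the residual approximates the underlying signal. Real implementations must additionally handle timing jitter and residual drift, which is part of what makes the generic approach brittle.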
The objective of this project is therefore to develop a tuneable machine-learning framework for artefact correction and real-time event detection in simultaneous EEG-fMRI, with applications in modelling neurodevelopmental impairment and epilepsy.
Here, deep generative frameworks will be used to infer and unwarp the spatial distortions imposed on the fMRI and to regress out motion and gradient artefacts from the EEG. Then, (spatio)temporal deep learning frameworks, such as LSTMs and transformers, will be used to automatically classify patterns of brain activity, leveraging a large dataset of hand-labelled data.
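To make concrete the kind of recurrence such temporal models build on, below is a minimal NumPy sketch of a single LSTM cell step applied over a toy multichannel window, followed by a linear read-out. All shapes, names, and weights are illustrative assumptions, not the project's actual architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step. x: input vector; (h, c): hidden and cell
    state; W, U, b stack the input, forget, candidate, and output
    gate parameters (each of hidden size d_h)."""
    z = W @ x + U @ h + b            # compute all four gates at once
    i, f, g, o = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    c_new = f * c + i * np.tanh(g)   # gated cell-state update
    h_new = o * np.tanh(c_new)       # hidden state exposed downstream
    return h_new, c_new

# Toy "EEG window": 8 channels, 50 time steps, hidden size 16.
rng = np.random.default_rng(0)
d_in, d_h, T = 8, 16, 50
W = 0.1 * rng.standard_normal((4 * d_h, d_in))
U = 0.1 * rng.standard_normal((4 * d_h, d_h))
b = np.zeros(4 * d_h)
h, c = np.zeros(d_h), np.zeros(d_h)
for x in rng.standard_normal((T, d_in)):  # run recurrence over time
    h, c = lstm_step(x, h, c, W, U, b)
logits = rng.standard_normal((2, d_h)) @ h  # untrained 2-class read-out
```

In practice such a classifier would be trained end-to-end on the hand-labelled dataset, with the final hidden state (or a transformer's pooled representation) feeding a softmax over event classes.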
Together, these tools open up the exciting possibility that specific patterns of both normal and abnormal brain activity can be precisely detected in real time and directly related to behavioural states or pathological events. This crucial knowledge will help to guide treatment or act as a biomarker to help predict clinical outcomes.
The project would ideally suit a student with a background in engineering, computer science, and/or machine learning who is motivated to apply these methods to medical imaging and clinical neuroscience.