Diagnostic imaging

Deep neural networks track eye movements during MRI scans

25 Jan 2022 Arjan Singh Puniani 

Our eyes are considered windows to the soul. For scientists and physicians, the eyes provide access to memories, cognition and even neurological dysfunction. What our eyes fixate on, and how we maintain our gaze, may be diagnostic of impaired working memory, indicative of amnesia or even a signal of Parkinson’s disease.

The gold standard of modern human neuroimaging is functional MRI (fMRI), which uses strong magnetic fields to measure changes in blood flow, and therefore brain activity. The technique is noninvasive – not requiring any injections, ionizing radiation or surgery. Monitoring eye movement during imaging could add valuable information to fMRI studies and clinical routines alike. However, the magnetic environment imposes restrictions on equipment brought into the scanner, making eye tracking difficult.

MR-compatible camera-based eye trackers, such as the EyeLink 1000 Plus, can set research labs back around $40,000. As such, only 10% of fMRI studies published in the last two years employed eye tracking, and only half of these used the eye tracking to help interpret their results.

To address this low utilization of eye tracking, researchers at the Kavli Institute for Systems Neuroscience and the Max Planck Institute for Human Cognitive and Brain Sciences developed DeepMReye, a convolutional neural network that uses the MR signal from the eyes for eye tracking, without the need for a camera.

In six independent 3T-MRI datasets, their machine-learning algorithm learned to detect patterns in the MRI signal indicative of gaze position, and then to decode or reconstruct the corresponding viewing behaviour in data the algorithm had never seen before. Critically, this means that DeepMReye can perform eye tracking even in existing fMRI datasets, making it possible to address new research questions using a large and immediately available data resource. The team describe DeepMReye in Nature Neuroscience.

Markus Frey and Matthias Nau

Human eyes have difficulty extracting signals from noise in massive datasets – finding Waldo isn’t always easy for everyone. But machine-learning algorithms can tease out patterns from inscrutable tangles of complex data. Joint first authors Markus Frey and NIH neuroscientist Matthias Nau instructed their model to extract generalizable patterns from the eyeballs using dimensionality reduction techniques, and then to interpret these patterns in a large number of existing MRI datasets.

Here is how it works. When the eyes move, the MRI signal undergoes noticeable variations. To visualize these variations, the team first extracted the eyeball voxels and plotted the normalized signal intensity of those voxels as a function of gaze position. This made it clear that gaze position substantially modulates the eyeball MRI signal.
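The idea can be illustrated with a toy simulation – this is not the authors’ code or data, just a minimal numpy sketch in which simulated eyeball-voxel intensities depend on gaze position, the time courses are normalized, and each voxel’s correlation with gaze is computed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 200 fMRI volumes, 50 eyeball voxels,
# gaze moving horizontally between -10 and +10 degrees.
n_vols, n_vox = 200, 50
gaze_x = rng.uniform(-10, 10, n_vols)

# Each voxel's intensity depends linearly on gaze plus noise --
# a toy stand-in for the real effects of eye rotation on the signal.
sensitivity = rng.normal(0, 1, n_vox)
baseline = rng.uniform(800, 1200, n_vox)
voxels = baseline + np.outer(gaze_x, sensitivity) \
         + rng.normal(0, 0.5, (n_vols, n_vox))

# Normalize each voxel's time course (z-score), as one would before
# plotting signal intensity as a function of gaze position.
z = (voxels - voxels.mean(0)) / voxels.std(0)

# Correlation of each voxel with gaze position: strongly "tuned"
# voxels are what make camera-less decoding possible.
gz = (gaze_x - gaze_x.mean()) / gaze_x.std()
r = (z * gz[:, None]).mean(0)
print(f"max |r| across voxels: {np.abs(r).max():.2f}")
```

In this simplified setting, many voxels correlate strongly with gaze position, mirroring the pattern the researchers observed in real eyeball voxels.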

The researchers then input the eyeball voxels into a convolutional neural network, which extracts the relevant features and progressively reduces the input to a compact, lower-dimensional representation. These dimensionality-reduced data then serve as the basis to train fully connected layers, or decoders, to reconstruct gaze positions.
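The two-stage structure – reduce the voxel input to compact features, then train a decoder from features to gaze – can be sketched in a few lines. The real model is a convolutional network (available on the team’s GitHub); here, purely for illustration, PCA stands in for the feature extractor and linear least squares for the decoder, applied to synthetic data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 300 volumes x 500 eyeball voxels, 2D gaze targets.
n, d = 300, 500
gaze = rng.uniform(-10, 10, (n, 2))              # (x, y) in degrees
W_true = rng.normal(0, 1, (2, d))
X = gaze @ W_true + rng.normal(0, 1.0, (n, d))   # simulated voxel signals

train, test = slice(0, 250), slice(250, None)

# Stage 1 (stand-in for the convolutional feature extractor):
# reduce the voxel dimension with PCA via SVD.
mu = X[train].mean(0)
U, S, Vt = np.linalg.svd(X[train] - mu, full_matrices=False)
k = 20
feats = lambda A: (A - mu) @ Vt[:k].T

# Stage 2 (the "decoder"): least squares from features to gaze.
F = feats(X[train])
coef, *_ = np.linalg.lstsq(np.c_[F, np.ones(len(F))], gaze[train],
                           rcond=None)

# Reconstruct gaze positions on held-out volumes.
pred = np.c_[feats(X[test]), np.ones(n - 250)] @ coef
err = np.linalg.norm(pred - gaze[test], axis=1).mean()
print(f"mean decoding error: {err:.2f} degrees")
```

The held-out evaluation mirrors the key property of DeepMReye: once trained, the decoder reconstructs viewing behaviour from MRI data it has never seen.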

The researchers examined the efficacy of their machine-learning algorithm by comparing DeepMReye’s output with results from a camera-based eye tracker. Using existing datasets from 268 participants, 90 of whom also had camera-based eye tracking, the scientists could confirm the high accuracy of their model and tune it accordingly.
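A comparison of this kind boils down to scoring decoded gaze against the camera-based ground truth. A minimal sketch, with hypothetical data and metric choices (per-axis Pearson correlation and mean Euclidean error) that are illustrative rather than the paper’s exact procedure:

```python
import numpy as np

def evaluate_gaze(decoded, camera):
    """Compare decoded gaze (n x 2) with camera-based ground truth.
    Returns per-axis Pearson r and mean Euclidean error (degrees)."""
    r = [np.corrcoef(decoded[:, i], camera[:, i])[0, 1] for i in range(2)]
    err = np.linalg.norm(decoded - camera, axis=1).mean()
    return r, err

# Toy check: pretend the decoder output is ground truth plus noise.
rng = np.random.default_rng(2)
camera = rng.uniform(-10, 10, (100, 2))     # camera-based gaze
decoded = camera + rng.normal(0, 0.5, (100, 2))
r, err = evaluate_gaze(decoded, camera)
print(f"correlations: {r[0]:.2f}, {r[1]:.2f}; mean error: {err:.2f} deg")
```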

The team applied the technique to fMRI datasets from participants performing various visual-related tasks, during which they either maintained fixation or executed pursuit or free-viewing tasks. The ability to apply this algorithm universally, requiring only the MRI information, enables the researchers to extract value from massive datasets that already exist.

Further, unlike MR-compatible eye-tracking cameras, DeepMReye accurately tracks gaze positions even when the eyes are closed, opening new possibilities for resting-state fMRI studies, or even studies of participants in rapid eye movement (REM) sleep. The algorithm could also be used to perform eye tracking in people with blindness, who traditionally have often been excluded from such research because cameras could not be calibrated accordingly. DeepMReye doesn’t need to know whether the patient is blind or not – the algorithm works on all individuals.

Scientists and doctors believe that eye tracking could prove invaluable for research, helping to inform studies exploring our visual or oculomotor systems. For example, pairing neuroimaging maps of whole-brain activity with precise measurements of eye movements from eye trackers can help researchers learn more about Alzheimer’s disease and other neural disorders.


What began as a weekend project is now open-source code available on the researchers’ GitHub page. But when will eye tracking become the gold standard in fMRI research? Further advances could lead to better modelling, turning the technique into a robust readout of our behaviours. Since our eyes express our thoughts, goals and memories, this is yet another example of how artificial intelligence can help to replace expensive hardware with free software for the benefit of all. Ultimately, the research conducted with this new tool could help us better understand who we are.

Copyright © 2024 by IOP Publishing Ltd and individual contributors