Diagnostic imaging

Deep learning speeds triage of brain scans

16 Mar 2022 Tami Freeman
Automated head MRI triage
Automated triage: A CNN-based triage tool suggests the order in which head MRIs are reported by a radiologist. Lower panels show example MRI scans with abnormalities and heatmaps of the image regions that most influence the model's predictions. (Courtesy: CC BY 4.0/Med. Image Anal. 10.1016/j.media.2022.102391)

Magnetic resonance imaging (MRI) is a central tool in the diagnosis and management of neurological conditions. The growing demand for head MRI exams, however, combined with a global shortage of radiologists, has produced year-on-year increases in the time taken to report scan findings. Delays in reporting can cause delays in treatment, resulting in poorer patient outcomes and increased healthcare costs. Backlogs arising from the global COVID-19 pandemic are likely to exacerbate the problem further.

A team headed by researchers at King’s College London hopes that deep learning models could address this bottleneck. The researchers have developed a deep learning framework, based on convolutional neural networks (CNNs), that can identify abnormalities in MRI scans. They describe their approach in Medical Image Analysis.

“Our model can reduce reporting times for abnormal examinations by accurately flagging abnormalities at the time of imaging, thereby allowing radiology departments to prioritize limited resources into reporting these scans first,” says lead author David Wood in a press statement. “This would expedite intervention by the referring clinical team.”

Generating the training data

Deep learning methods, and CNNs in particular, show considerable promise for analysis of medical images. But such models require large and relevant datasets for training. To date, the development of a model for assessing head MRIs has been limited by the difficulty in obtaining large, clinically representative, accurately labelled datasets for model training.

Wood and colleagues addressed this problem by developing a natural language processing (NLP)-based report classifier that derives labels from radiology text reports and assigns them to the corresponding MRI exams.

In this latest study, the researchers used their transformer-based neuroradiology report classifier to categorize more than 70,000 head MRI exams from two large UK hospital networks as normal or abnormal, generating a labelled dataset for model training. This large dataset allowed the deep learning models to be trained on the full range of likely clinical abnormalities (such as vascular abnormalities, tumours, haemorrhage and stroke), as well as different scanner vendors, imaging protocols and patient populations.
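As a rough illustration of this labelling step, the sketch below shows how a transformer-based text classifier could assign normal/abnormal labels to free-text radiology reports. The checkpoint name, example reports and label names are placeholders, not the team's actual classifier.

```python
# Sketch: deriving normal/abnormal labels for head-MRI exams from their
# free-text radiology reports with a transformer text classifier.
# The checkpoint, reports and labels below are placeholders only.
from transformers import pipeline

# A fine-tuned sequence-classification checkpoint would be loaded here;
# "bert-base-uncased" is just a stand-in and is NOT trained for this task.
classifier = pipeline("text-classification", model="bert-base-uncased")

reports = [
    "No acute intracranial abnormality. Normal appearances for age.",
    "Large right frontal mass with surrounding vasogenic oedema.",
]

# Each report's predicted label is attached to the matching MRI exam,
# producing NLP-derived labels for training the image models.
for report, result in zip(reports, classifier(reports)):
    print(result["label"], round(result["score"], 3), "<-", report[:45])
```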

It’s also important that models are optimized for the types of head MRI scans routinely used in clinical examinations. As such, the researchers focused on abnormality detection using axial T2-weighted scans, which are the most commonly acquired sequence for detecting pathology in clinical settings worldwide.

The team trained and tested a number of CNN-based computer vision models on different subsets of the NLP-labelled data. In each case, the data were split into training (85%) and validation (15%) sets, with final model evaluation always performed on a test set of images with reference-standard labels assigned by neuroradiologists.
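The split described above could be set up along the following lines; apart from the 85%/15% proportions, the exam identifiers and labels here are invented for illustration.

```python
# Sketch of the data handling described above: NLP-labelled exams are split
# 85%/15% into training and validation sets, while a separate test set with
# neuroradiologist-assigned labels is held back for final evaluation.
# Exam IDs and labels here are random placeholders.
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
exam_ids = np.arange(10_000)                         # placeholder exam identifiers
nlp_labels = rng.integers(0, 2, size=exam_ids.size)  # 0 = normal, 1 = abnormal

train_ids, val_ids, y_train, y_val = train_test_split(
    exam_ids, nlp_labels, test_size=0.15, random_state=42, stratify=nlp_labels
)
print(len(train_ids), len(val_ids))  # 8500 training exams, 1500 validation exams
```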

The models achieved fast (less than 5 s) and accurate classification from axial T2-weighted scans in all training and testing combinations. Noise correction led to a small but statistically significant improvement in all cases. Importantly, models trained on scans from one hospital and tested on scans from the other showed minimal reduction in classification performance, suggesting that they are suitable for neuroradiology departments in general.

As a secondary goal, the team investigated the added value of using axial diffusion-weighted scans, which are also performed in many head MRI exams. A model trained and tested using all available scans achieved accurate classification using diffusion-weighted scans alone (although it performed worse than an axial T2-weighted model trained and tested on the same scans). However, an ensemble model that averaged the predictions of the two models provided a small improvement in classification.
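The ensemble step amounts to averaging the two models' per-exam abnormality probabilities before thresholding, roughly as in the sketch below; the probability values are made up.

```python
# Sketch of the ensemble step: per-exam abnormality probabilities from the
# T2-weighted model and the diffusion-weighted model are averaged before
# thresholding. The probabilities below are invented.
import numpy as np

p_t2  = np.array([0.92, 0.08, 0.55, 0.30])   # T2-weighted model, P(abnormal)
p_dwi = np.array([0.88, 0.15, 0.40, 0.70])   # diffusion-weighted model, P(abnormal)

p_ensemble = (p_t2 + p_dwi) / 2
print(p_ensemble, p_ensemble >= 0.5)         # flag exams with averaged P >= 0.5
```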

Clinical impact

To quantify the impact that the classification model could have in a real clinical setting, the group ran a simulation study using out-patient MRI exams performed at the two hospital networks in 2018. They showed that the best model would reduce the mean report wait time for patients with abnormalities from 28 to 14 days at one network, and from nine to five days at the other. This demonstrates the potential for using the model as an effective clinical triage tool.
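A toy version of such a triage simulation is sketched below: exams arrive in order and are reported at a fixed daily rate, either first-come-first-served or with model-flagged exams moved to the front of the queue. The exam volume, reporting rate and flagging accuracy are all invented numbers, not figures from the study.

```python
# Toy triage simulation: exams arrive in order and reports are read at a fixed
# daily rate, either first-come-first-served or with model-flagged exams moved
# to the front of the queue. Exam volume, reporting rate and model accuracy are
# all invented numbers, not figures from the study.
import random

random.seed(1)
N_EXAMS, REPORTS_PER_DAY = 2000, 50
ABNORMAL_RATE, SENSITIVITY, SPECIFICITY = 0.3, 0.95, 0.90

exams = []
for i in range(N_EXAMS):
    abnormal = random.random() < ABNORMAL_RATE
    flagged = random.random() < (SENSITIVITY if abnormal else 1 - SPECIFICITY)
    exams.append((i, abnormal, flagged))

def mean_wait_for_abnormal(order):
    """Mean wait (in days) until truly abnormal exams are reported."""
    waits = [pos // REPORTS_PER_DAY
             for pos, (_, abnormal, _) in enumerate(order) if abnormal]
    return sum(waits) / len(waits)

fcfs = exams                                     # first-come, first-served queue
triaged = sorted(exams, key=lambda e: not e[2])  # flagged exams reported first
print(f"FCFS mean wait:    {mean_wait_for_abnormal(fcfs):.1f} days")
print(f"Triaged mean wait: {mean_wait_for_abnormal(triaged):.1f} days")
```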

Further easing clinical translation, the models are optimized for use with raw, clinical-grade scans. Avoiding image pre-processing enables faster classification and allows detection of abnormalities in other areas captured by the head MRI – such as diseases in the skull, and around the eyes and nose. The models also automatically provide slice-wise and voxel-wise visualization of the image regions that most influenced predictions, providing interpretability and enabling real-time review of triage decisions.
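As a hedged illustration of such heatmaps (not necessarily the method the authors use), an occlusion-sensitivity map masks patches of a slice in turn and records how much the model's predicted abnormality probability drops; the slice and "model" below are dummies.

```python
# Generic occlusion-sensitivity sketch (not necessarily the method used in the
# paper): patches of a slice are masked in turn and the resulting drop in the
# model's abnormality probability forms a heatmap of influential regions.
import numpy as np

def occlusion_heatmap(model_prob, image, patch=16):
    """model_prob: callable mapping a 2D slice to P(abnormal)."""
    baseline = model_prob(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = image.mean()
            # A large drop in predicted probability marks an influential region.
            heat[i // patch, j // patch] = baseline - model_prob(occluded)
    return heat

# Dummy slice and "model" (a stand-in for a trained CNN) to show the call pattern.
fake_slice = np.random.default_rng(0).normal(size=(64, 64))
fake_model = lambda img: float(img[24:40, 24:40].mean() > 0)
print(occlusion_heatmap(fake_model, fake_slice).shape)   # (4, 4) coarse heatmap
```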

“Having previously built and validated a labelled head MRI dataset using cutting edge machine learning methodology through a team of data scientists and hospital radiologists, the same team has now built and validated a new machine learning model that can triage head MRI scans so the abnormal scans can be at the front of the queue for reporting,” says senior author Thomas Booth. “The potential benefit to patients and healthcare systems is enormous.”

The team is now running a UK-wide prospective clinical study to evaluate the efficacy of the tool in other institutions around the country. “This is an important step before we can consider applying this model in a real hospital setting,” Wood tells Physics World.
