
Machine Learning Model Can Flag Abnormal Brain Scans

Credit: Anna Shvets/ Pexels


Researchers from King's College London have developed a deep learning framework based on convolutional neural networks to flag clinically relevant abnormalities at the time of imaging, in minimally processed, routine, hospital-grade axial T2-weighted head MRI scans. Their results were published in Medical Image Analysis.


The work was motivated by delays in the reporting of scans in hospitals. Growing national and international demand for MRI scans, combined with a shortage of radiologists, has lengthened the time taken to report head MRI scans in recent years.


These delays have a knock-on effect: patients wait longer to receive the correct treatment, leading to poorer outcomes and inflated healthcare costs.


Lead author Dr David Wood, Research Associate from King's College London, said: "Our model can reduce reporting times for abnormal examinations by accurately flagging abnormalities at the time of imaging, thereby allowing radiology departments to prioritise limited resources into reporting these scans first. This would expedite intervention by the referring clinical team."


In a simulation study with retrospective data from King’s College Hospital (KCH) and Guy’s and St Thomas’ NHS Foundation Trust (GSTT), the researchers found that their model roughly halved report wait times for patients with abnormalities: from 28 days to 14 days at KCH, and from 9 days to 5 days at GSTT.
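The intuition behind this kind of triage simulation can be illustrated with a toy sketch. The scores, queue sizes, and score distributions below are entirely hypothetical (they are not the study's data or model); the sketch only shows how sorting a reporting queue by a classifier's abnormality score shortens the average wait for abnormal scans compared with first-come-first-served reporting.

```python
import random

def mean_abnormal_wait(scans, prioritise=False):
    """Mean queue position at which abnormal scans get reported.

    scans: list of dicts with 'abnormal' (bool) and 'score' (a
    hypothetical model probability of abnormality). If prioritise
    is True, the queue is reported in descending score order
    (model-triaged); otherwise in arrival order (FIFO).
    """
    queue = sorted(scans, key=lambda s: -s["score"]) if prioritise else scans
    waits = [i for i, s in enumerate(queue) if s["abnormal"]]
    return sum(waits) / len(waits)

random.seed(0)
# Toy queue: ~10% of scans abnormal; the hypothetical classifier
# gives abnormal scans higher scores on average, but imperfectly.
scans = []
for _ in range(1000):
    abnormal = random.random() < 0.1
    score = random.gauss(0.8 if abnormal else 0.2, 0.15)
    scans.append({"abnormal": abnormal, "score": score})

fifo = mean_abnormal_wait(scans)
triaged = mean_abnormal_wait(scans, prioritise=True)
print(f"mean wait in queue positions - FIFO: {fifo:.0f}, triaged: {triaged:.0f}")
```

Under these toy assumptions the triaged queue reports abnormal scans far earlier than FIFO; the study's simulation measured the analogous effect in calendar days using retrospective hospital data.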


The current achievements are underpinned by an earlier model from the same group, which addresses a key obstacle to applying deep learning to medical imaging: the difficulty of obtaining large, clinically representative, accurately labelled datasets.


Whilst accessing large hospital datasets is achievable, the data are usually unlabelled. The deep learning framework used in the current study to flag clinically relevant abnormalities at the time of imaging could not have been developed without this earlier work, which enabled head MRI dataset labelling at scale.


In the current paper, another step towards clinical translation is the researchers' use of routine, hospital-grade axial T2-weighted head MRI scans that have undergone little processing before triage analysis.


This means head MRI scans can be used in the form in which they arrive from the scanner, which both cuts the time otherwise spent processing the images from minutes to seconds and allows more abnormalities to be detected in other areas captured by the head MRI, such as diseases in the skull and around the eyes and nose. The speed and coverage of the abnormality detection system enable real-time applications.


Senior author Dr Thomas Booth, Senior Lecturer in Neuroimaging at King's College London, said: "Having previously built and validated a labelled head MRI dataset using cutting edge machine learning methodology through a team of data scientists and hospital radiologists, the same team have now built and validated a new machine learning model that can triage head MRI scans so the abnormal scans can be at the front of the queue for reporting. The potential benefit to patients and healthcare systems is enormous."


A recent grant will enable further finessing of the model and accelerate translation to the clinic.


Reference: Wood DA, Kafiabadi S, Busaidi AA, et al. Deep learning models for triaging hospital head MRI examinations. Med Image Anal. 2022;78:102391. doi: 10.1016/j.media.2022.102391


This article has been republished from the following materials. Note: material may have been edited for length and content. For further information, please contact the cited source.