

Artificial Intelligence in Medical Imaging

Image credit: IDx


Artificial intelligence (AI) and machine learning are slowly infiltrating our daily lives: think of facial recognition in photos, Alexa and voice commands, and the tailored advertisements that appear on the websites you browse. AI is likely to become increasingly integrated into new technologies as they are released, and healthcare is no exception. One sector set to benefit the most from these advances is medical imaging.

Why is AI suddenly knocking on medicine’s door?

You could be forgiven for thinking that the recent acceleration in AI-driven image processing is due to modern developments in hardware and algorithms. While this is partly true, algorithms are nothing new. Although the fancy hardware wasn’t around in the 1970s, this didn’t stop Dr. Edward Shortliffe, a pioneer in the use of AI in medicine. Shortliffe’s PhD thesis (Stanford University) explored the MYCIN system (a nod to antibiotic names, many of which carry the suffix '-mycin'), which he devised to assist and educate physicians selecting appropriate antibiotic therapy. Shortliffe reached a sticking point and uncovered the real handbrake on AI development, one that still stands today: noisy data.

Dr. Michael Abramoff, MD, a retinal specialist and Professor of Retina Research at the University of Iowa, explained:

“The real problem is noisy data. So many of the advances that we see now are because you have more objective data, and more objective sensors. And so that’s why it’s so interesting - image and sound data is very objective. What is much more difficult is to use AI where people have to explain their symptoms in words, and then someone converts the patients’ words into text. You know, a lot of internal medicine is like that.”

If having enough high-quality objective data is the key to AI success (vast improvements in computing power and storage have certainly helped too), it’s no surprise that medical imaging is leading the way in the AI-healthcare field.

Walk before you can run: the digitalization of pathology laboratories

According to a recent histopathology workforce survey by the Royal College of Pathologists (UK), pathology requests are increasing by almost 5% every year, and a third of the UK’s pathologists will reach retirement age in the next five years. Digital pathology has the potential to improve workflows in several ways, which should help relieve the pressure on pathologists and improve health outcomes for patients:

  • Slides can be referred to off-site pathologists for a second opinion, aiding faster diagnosis.
  • Histology slides can be projected onto larger screens, making it easier for pathologists to navigate around the slide.
  • Digital images can be saved systematically, which allows for easy access and reduces the chance of slides getting mixed up.
  • Recent breakthroughs in artificial intelligence promise to fundamentally change the way we detect cancer.

Dr. Bethany Williams at the Leeds Pathology Powerwall

To ensure they are ready for the inevitable advances in AI technology, Leeds Teaching Hospitals made the decision to scan every glass slide they produce – a critical milestone in the UK diagnostics world. Chloe Lockwood, Lead Biomedical Scientist for the Digital Pathology Project, elaborated on the transition from a classical to a digital workflow:

“Going digital is a massive change management piece and requires support from the entire department - laboratory, pathologists, IT, and management. However, implementing digital pathology provides the opportunity for the entire diagnostic pathway to be evaluated and streamlined.”

The pathology laboratory at Leeds is now positioned to take advantage of advances in AI as they arise.
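
As a rough, simplified illustration of what a digitized workflow makes possible on the software side, the sketch below uses the open-source OpenSlide library to open a scanned whole-slide image and pull out a region for review or for an image-analysis pipeline. The file name, coordinates, and region size are hypothetical, and this is not the Leeds team’s actual software.

```python
# Minimal sketch of reading a digitized histology slide with OpenSlide.
# The file path, coordinates, and region size below are illustrative only.
import openslide

# Open a hypothetical scanned H&E slide. Whole-slide images are stored
# as multi-resolution pyramids, often tens of gigapixels at full size.
slide = openslide.OpenSlide("case_0001_HE.svs")

print("Full-resolution dimensions:", slide.dimensions)
print("Pyramid levels:", slide.level_count)

# Read a 1024 x 1024 region at full resolution, e.g. to display on a
# large screen or to hand to a downstream image-analysis / AI pipeline.
region = slide.read_region((20_000, 15_000), 0, (1024, 1024)).convert("RGB")
region.save("region_for_review.png")

slide.close()
```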

The future is now: artificial intelligence detects signs of diabetic retinopathy

As an ophthalmologist, Dr. Abramoff has seen first-hand the potential benefits of AI in healthcare. He noticed how much time he spent screening people for diabetic retinopathy who did not have the disease, while people who were going blind had to wait months to be diagnosed:

“Autonomous AI has tremendous potential to lower healthcare costs, improve quality, and make it more accessible - where patients are, rather than where the specialist doctor is.”

Consequently, Abramoff has made this vision a reality and founded IDx, the company behind the first and only FDA-authorized AI system for the autonomous detection of diabetic retinopathy (the leading cause of blindness in American adults). The technology is fully autonomous, meaning that the final clinical decision (retest in 12 months, or refer to an eye care professional) lies in the hands – or the code – of the device. Given that vision loss can be prevented with early disease detection, getting an eye exam is highly important for people with diabetes.
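
To make the idea of a fully autonomous output more concrete, here is a deliberately simplified sketch of how such a system might map a model’s score on a fundus photograph to one of the two clinical actions described above. The threshold, the placeholder model, and all names are assumptions made for illustration; this is not IDx’s actual implementation.

```python
# Deliberately simplified, hypothetical sketch of an autonomous screening
# decision. This is NOT IDx's implementation; the threshold, the placeholder
# model, and all names are assumptions made for illustration.
from dataclasses import dataclass

REFERRAL_THRESHOLD = 0.5  # illustrative operating point, fixed before validation

@dataclass
class ScreeningResult:
    probability: float  # estimated probability of referable diabetic retinopathy
    decision: str       # the clinical action returned to the clinic staff

def detect_referable_retinopathy(fundus_image) -> float:
    """Placeholder for a trained image model; a real device would run a
    validated algorithm on the fundus photograph here."""
    raise NotImplementedError("plug in a validated model")

def screen_exam(fundus_image) -> ScreeningResult:
    """Map the model output to one of the two autonomous outcomes."""
    p = detect_referable_retinopathy(fundus_image)
    action = ("Refer to an eye care professional"
              if p >= REFERRAL_THRESHOLD
              else "Retest in 12 months")
    return ScreeningResult(probability=p, decision=action)
```

The design point this sketch tries to capture is that no clinician sits between the model output and the decision: the operating point is fixed in advance, and the device itself returns one of the two actions.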

Reflecting on his favorite example of this specialist diagnostic technology being implemented in primary care, Abramoff speaks of a diabetes clinic in New Mexico, close to the Mexican border.

“There was a really good diabetes clinic, which had no way of dealing with the diabetes eye exam – and people who were losing vision. We came in there, put the AI system in – which is a camera with AI – and trained mostly nurses and some techs. We trained for four hours, and we left. Now they have this ‘better-than-me-quality’ AI diagnostic right there. All the patients go through it, and the nurses are so happy that they can finally take proper care of their patients.”

This case exemplifies the huge potential of AI to deliver a lasting improvement in patient care after a relatively short training investment.

Can we trust AI in medicine?

Unsurprisingly, the thought of putting clinical decisions in the power of computers is enough to raise a few eyebrows. In fact, Abramoff said his nickname among his colleagues used to be ‘The Retinator.’

Reviewing algorithms for AI is a completely different ball game from reviewing more ‘traditional’ scientific approaches, and presumably some AI methodologies will never go through the peer-review process. So how is quality control in AI addressed? To this question, Abramoff makes several key points:

  • It is okay if AI is never rigorously tested when there is no risk of harm to anyone; the problems start when there is a risk of harm – for example, to a patient.
  • There is an ongoing replication crisis in science that peer review did not prevent, but there are better methods to ensure safety, such as preregistration of studies.
  • For autonomous AI, patient safety needs to be proven with preregistered, prospective clinical trials conducted in the same environment in which the system will be used once proven safe (a schematic example of checking trial results against prespecified endpoints follows this list).
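
As a schematic illustration of that third point, the snippet below compares the sensitivity and specificity observed in a hypothetical prospective trial against prespecified endpoints, using a Wilson score interval. All counts and endpoint values are invented for illustration; they are not real trial data or actual regulatory thresholds.

```python
# Schematic sketch of checking trial results against prespecified endpoints,
# as would be laid out in a preregistered protocol. All counts and endpoint
# values below are invented for illustration.
import math

def wilson_lower_bound(successes: int, n: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score interval for a binomial proportion."""
    p = successes / n
    centre = p + z**2 / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (centre - margin) / (1 + z**2 / n)

# Hypothetical confusion-matrix counts from a prospective screening trial.
tp, fn = 260, 30   # disease present: detected vs missed
tn, fp = 480, 50   # disease absent: correctly cleared vs false alarms

sensitivity_lb = wilson_lower_bound(tp, tp + fn)
specificity_lb = wilson_lower_bound(tn, tn + fp)

# Illustrative prespecified endpoints the trial must meet.
SENS_ENDPOINT, SPEC_ENDPOINT = 0.85, 0.825

print(f"Sensitivity lower bound: {sensitivity_lb:.3f} (endpoint {SENS_ENDPOINT})")
print(f"Specificity lower bound: {specificity_lb:.3f} (endpoint {SPEC_ENDPOINT})")
print("Endpoints met:", sensitivity_lb >= SENS_ENDPOINT and specificity_lb >= SPEC_ENDPOINT)
```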

This last point – proving safety in prospective trials – is key. While it may seem blindingly obvious, AI has not always been tested in a suitable way.

For example, the FDA approval of a computer-aided mammography test in 1998 was followed by significant uptake of computer-aided detection in clinical practice – largely because its use could be reimbursed by Medicare. Crucially, while the program had been tested in comparison to a radiologist working alone, it was approved for use as an aid for radiologists – a combination whose performance had not itself been demonstrated.

“There may be unexpected side effects from using AI in combination with a human – that you do not anticipate… The fact that AI works really well doesn’t mean that it works really well in combination with a human expert.” – Dr. Abramoff

Consequently, the clinical benefit of this technology was not convincing. According to a multicenter, retrospective study of 43 mammography facilities in the USA, use of the software was associated with significantly higher false-positive rates, recall rates, and biopsy rates, and with significantly lower overall accuracy in screening.
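
For readers less familiar with the screening terminology in that study, the sketch below shows how recall rate, false-positive rate, biopsy rate, and overall accuracy are derived from basic screening counts. The before/after figures are invented purely to illustrate the calculation; they do not reproduce the study’s data.

```python
# Illustrative calculation of the screening metrics reported in studies like
# the one above. All counts are invented and do not reproduce the study's data.

def screening_metrics(tp, fp, tn, fn, biopsies):
    screened = tp + fp + tn + fn
    return {
        "recall rate": (tp + fp) / screened,    # fraction of women called back
        "false positive rate": fp / (fp + tn),  # healthy women who are recalled
        "biopsy rate": biopsies / screened,
        "overall accuracy": (tp + tn) / screened,
    }

# Hypothetical figures for one facility before and after adopting computer-aided detection.
without_cad = screening_metrics(tp=40, fp=800, tn=9100, fn=10, biopsies=120)
with_cad = screening_metrics(tp=41, fp=1000, tn=8900, fn=9, biopsies=150)

for name in without_cad:
    print(f"{name:>20}: {without_cad[name]:.3f} -> {with_cad[name]:.3f}")
```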

Could increased recall rates be a necessary cost of improved cancer detection? As emphasized by the authors, the benefits of true positive results must be weighed against the consequences of false positive results, including associated economic costs.

As Abramoff points out, the ‘autonomous vs assisted’ dilemma is also found in the world of autonomous cars:

“The best autonomous cars are fully autonomous. And they tried to introduce it by assisting the driver and then the drivers can actually make errors, and then what do you do? And so, it’s a serious problem everywhere... do we do it autonomously, or do we have human-assisted (AI)?”

So, can we trust AI in medicine?

The consensus is yes, but with a few conditions: AI development must go hand-in-hand with transparency, a safety-first attitude, and testing that is relevant to the environment in which it will be used.