Medical Image Analysis—Can a Computer Diagnose Diseases?
In 2017, Stanford computer scientists reported their successes with a computer vision tool that utilizes artificial intelligence to diagnose skin abnormalities, an early potential indicator of skin cancer.
They are not the first to employ computer vision in the medical field, nor will they be the last. Let's look at the impact of AI on diagnostics and the future of this technology that is bound to change our lives.
How computer vision works
A computer “sees” images differently than we do. Where we may look at a picture of a wooden structure and use contextual information stored in our brains to confirm it is a house, a computer only sees a series of numbers that define the technical elements of the image. Without additional context, it cannot confirm it is a house.
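To make this concrete, here is a minimal sketch of what a computer actually receives: a tiny hypothetical 5×5 grayscale image represented as a grid of brightness values (0 = black, 255 = white). The values are illustrative, not real medical data.

```python
# A computer's view of a tiny 5x5 grayscale image: just a grid of
# brightness values. A human sees a bright diagonal line; the
# computer only has numbers.
image = [
    [255,  20,  20,  20,  20],
    [ 20, 255,  20,  20,  20],
    [ 20,  20, 255,  20,  20],
    [ 20,  20,  20, 255,  20],
    [ 20,  20,  20,  20, 255],
]

# Nothing in this data says "diagonal line" -- that meaning has to
# be extracted from the numbers by a trained model.
bright_pixels = sum(value > 128 for row in image for value in row)
print(bright_pixels)  # 5 pixels stand out from the background
```

Scale this grid up to millions of pixels across three color channels, and it becomes clear why a model, rather than a hand-written rule, is needed to recover meaning.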
Adding this context is a job for artificial neural networks and other machine learning models used in computer vision. These models are trained on image data sets, which are typically enormous, containing hundreds of thousands, if not millions, of images.
By “training,” here we mean the use of sophisticated mathematical functions and algorithms that the AI software applies to data sets in order to extract meaningful patterns from digitized images (e.g., specific features corresponding to a particular type of hip fracture). One of the most successful approaches to computer vision today is a class of deep learning models called convolutional neural networks (CNNs).
After the training is over, the CNN “knows” these patterns and now can look for them in new images that it hasn’t seen yet. For example, it can discover the “hip fracture pattern” in a medical X-ray image and label it accordingly. This may seem similar to the way humans learn and apply their knowledge—but don’t be tricked. The pattern we are talking about is ultimately pure math coded in zeroes and ones—not something you can actually see.
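The core operation behind this pattern matching can be sketched in a few lines. Below is a toy cross-correlation: a small “pattern” (a convolution kernel) slides over a new image, producing a high score wherever the pattern appears. In a real CNN the kernel values are learned during training and there are thousands of them; the values here are purely illustrative.

```python
# Toy sketch of how a learned "pattern" (a convolution kernel) is
# matched against a new image: slide the kernel over the image and
# compute a similarity score at every position.

def convolve2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            # Elementwise multiply-and-sum between the kernel and
            # the image patch under it.
            score = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh)
                for dj in range(kw)
            )
            row.append(score)
        out.append(row)
    return out

# A 2x2 "diagonal edge" pattern the model might have learned.
kernel = [[ 1, -1],
          [-1,  1]]

# A new image containing that diagonal structure.
image = [
    [0, 0, 0, 0],
    [0, 9, 0, 0],
    [0, 0, 9, 0],
    [0, 0, 0, 0],
]

response = convolve2d(image, kernel)
# The strongest response marks where the pattern appears.
best = max(max(row) for row in response)
print(best)  # 18 -- peak score at the diagonal
```

A CNN stacks many such layers, so later layers match patterns of patterns, but the underlying arithmetic is exactly this kind of multiply-and-sum over pixel values.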
It’s also worth noting that a CNN’s “intelligence” is limited to the data sets on which it is trained. This means that slight shifts, alterations, or poor image quality can make an image harder to classify, and the results become less accurate.
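This fragility can be illustrated with the same multiply-and-sum idea: the score a model assigns to a pattern drops when the input is degraded. The patches and pattern below are hypothetical, chosen only to show the effect.

```python
# Sketch of why image quality matters: the same pattern-matching
# score drops when the input is blurred or noisy.

def match_score(patch, pattern):
    # Elementwise similarity between an image patch and a pattern.
    return sum(a * b
               for row_a, row_b in zip(patch, pattern)
               for a, b in zip(row_a, row_b))

pattern = [[ 1, -1],
           [-1,  1]]

clean_patch = [[9, 0],
               [0, 9]]
noisy_patch = [[6, 3],   # blur smears brightness into neighbors
               [3, 6]]

print(match_score(clean_patch, pattern))  # 18
print(match_score(noisy_patch, pattern))  # 6 -- weaker, easier to misclassify
```

A weaker score means the feature may fall below the model's decision threshold, which is one reason practitioners augment training data with shifted and noisy copies of images.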
Medical use cases for the technology
Computer vision consultants currently develop such solutions for a wide range of medical imaging needs. The technology can work with various types of medical imaging—CT, MRI, PET, ultrasound, and X-ray.
Since their commercial medical introduction in the early 1960s, ultrasound scans have been used in many fields of medicine. They have proved instrumental in diagnosing congenital disabilities in foetuses. By incorporating computer vision technologies, ultrasound specialists could more easily detect abnormalities on a scan that might be missed by the naked eye, thus increasing the success rate of identification.
One commercial example of computer vision applied to ultrasound is the Philips AI Breast. This solution helps the sonographer in diagnosis by emphasizing key anatomical landmarks and labeling the areas that might include pathologies.
Magnetic resonance imaging uses a magnetic field to visualize and detect issues within the body, particularly in the joints, tissues, and blood vessels, where it is difficult for an X-ray to get a clear image. Equipping MRI with computer vision could allow doctors to process the acquired images faster and be alerted to minute irregularities missed by the naked eye. In the case of MRIs, this may be essential when diagnosing aneurysms and blood vessel issues at earlier stages to treat them accordingly.
A research paper by Osaka City University Hospital suggests the use of deep learning for automated detection of cerebral aneurysms. The computer vision algorithm used by the Japanese researchers detected aneurysms with a sensitivity of 91%–93% and improved aneurysm detection by 4.8%–13%.
Computed tomography is often used to identify the exact location of a tumor or to investigate the aftermath of a car accident by checking for internal bleeding and organ damage. Adding computer vision technologies to current CT techniques could potentially automate the process, making it easier and quicker for doctors to locate lesions and injuries and increasing their treatment success rates.
(Image credit: University of Central Florida, Karen Norum)
Last year, engineers from the University of Central Florida trained a CNN to detect small lung tumors that are often missed by doctors. The system reportedly detects these small lesions with an accuracy of 95%, while humans identify this type of cancer with only 65% accuracy.
Your personal electronic diagnostician? Not yet.
While some of the results shown by medical image analysis systems, particularly those based on convolutional neural networks, are truly impressive, we are not yet close to relying on them completely for diagnostics. The cost of even a single medical error is high, and machine learning algorithms are not perfect: they usually excel only at highly focused tasks (such as detecting small lung lesions in the example above) and cannot see the big picture. Besides, the most successful ML models are black boxes, meaning their reasoning is hidden from our eyes, so we cannot trust them completely.
Until these and other problems of computer vision, and of AI in general, are solved, we can still rely on medical image analysis systems as indispensable assistants that speed up diagnostics and improve their accuracy.
Information about the author
Yaroslav Kuflinski is an AI/ML Observer at Iflexion. He has profound experience in IT and keeps up to date on the latest AI/ML research. Yaroslav focuses on AI and ML as tools to solve complex business problems and maximize operations.