Artificial intelligence (AI) holds real potential for improving both the speed and accuracy of medical diagnostics. But before clinicians can harness the power of AI to identify conditions in images such as X-rays, they have to ‘teach’ the algorithms what to look for.
Identifying rare pathologies in medical images has been a persistent challenge for researchers because of the scarcity of images available to train AI systems in a supervised learning setting.
Professor Shahrokh Valaee in The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE) and his team have designed a new approach: using machine learning to create computer generated X-rays to augment AI training sets.
“In a sense, we are using machine learning to do machine learning,” says Valaee. “We are creating simulated X-rays that reflect certain rare conditions so that we can combine them with real X-rays to have a sufficiently large database to train the neural networks to identify these conditions in other X-rays.”
Valaee is a member of the Machine Intelligence in Medicine Lab (MIMLab), a group of physicians, scientists and engineering researchers who are combining their expertise in image processing, artificial intelligence and medicine to solve medical challenges. “AI has the potential to help in a myriad of ways in the field of medicine,” says Valaee. “But to do this we need a lot of data — the thousands of labelled images we need to make these systems work just don’t exist for some rare conditions.”
To create these artificial X-rays, the team uses an AI technique called a deep convolutional generative adversarial network (DCGAN) to generate and continually improve the simulated images. GANs are a type of algorithm made up of two networks: one that generates the images and another that tries to discriminate synthetic images from real ones. The two networks are trained against each other until the discriminator can no longer tell real images from synthesized ones. Once a sufficient number of artificial X-rays have been created, they are combined with real X-rays to train a deep convolutional neural network, which then classifies each image as either normal or as showing one of several conditions.
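The adversarial training loop described above can be sketched in miniature. This is a hedged illustration only: it uses a one-dimensional toy distribution and simple affine networks rather than the convolutional image networks of a true DCGAN, but the generator/discriminator dynamic is the same — the discriminator is pushed to separate real from fake, while the generator is pushed to fool it.

```python
import numpy as np

# Toy adversarial training sketch. The "real" data is drawn from N(4, 1);
# a true DCGAN would instead use convolutional networks over X-ray images.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator G(z) = w_g * z + b_g maps noise to a sample.
w_g, b_g = 0.1, 0.0
# Discriminator D(x) = sigmoid(w_d * x + b_d) scores "realness".
w_d, b_d = 0.0, 0.0

lr, real_mu, batch = 0.05, 4.0, 32

for step in range(500):
    # --- Discriminator update: push D(real) -> 1 and D(fake) -> 0 ---
    x_real = rng.normal(real_mu, 1.0, batch)
    z = rng.normal(size=batch)
    x_fake = w_g * z + b_g
    g_real = sigmoid(w_d * x_real + b_d) - 1.0   # d(BCE)/d(logit), label 1
    g_fake = sigmoid(w_d * x_fake + b_d) - 0.0   # d(BCE)/d(logit), label 0
    w_d -= lr * np.mean(g_real * x_real + g_fake * x_fake)
    b_d -= lr * np.mean(g_real + g_fake)

    # --- Generator update: push D(fake) -> 1 (fool the discriminator) ---
    z = rng.normal(size=batch)
    x_fake = w_g * z + b_g
    g = (sigmoid(w_d * x_fake + b_d) - 1.0) * w_d  # chain rule through D
    w_g -= lr * np.mean(g * z)
    b_g -= lr * np.mean(g)

# Generated samples should now cluster near the real mean of 4.0.
fake_mean = float(np.mean(w_g * rng.normal(size=1000) + b_g))
print(fake_mean)
```

At equilibrium the discriminator's gradients vanish because fakes are statistically indistinguishable from real samples — the stopping condition the article describes.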
“We’ve been able to show that artificial data generated by a deep convolutional GAN can be used to augment real datasets,” says Valaee. “This provides a greater quantity of data for training and improves the performance of these systems in identifying rare conditions.”
The MIMLab compared the accuracy of their augmented dataset to the original dataset when fed through their AI system and found that classification accuracy improved by 20 per cent for common conditions. For some rare conditions, accuracy improved by up to about 40 per cent — and because the synthesized X-rays are not derived from real individuals, the dataset can be shared with researchers outside the hospital without raising privacy concerns.
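The augmentation step itself — topping up a scarce class with synthetic images before training the classifier — can be sketched as follows. The array shapes, class sizes, and random stand-in images are hypothetical placeholders, not the MIMLab's actual data; the point is only the pattern of combining real and synthetic examples to rebalance a rare condition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: random 64x64 arrays in place of real X-rays (hypothetical
# sizes chosen for illustration; the real pipeline uses hospital X-rays
# and DCGAN outputs).
real_common    = rng.random((500, 64, 64))  # common condition: plentiful
real_rare      = rng.random((30, 64, 64))   # rare condition: scarce
synthetic_rare = rng.random((470, 64, 64))  # GAN-generated stand-ins

# Augment: top up the rare class with synthetic images until balanced.
images = np.concatenate([real_common, real_rare, synthetic_rare])
labels = np.concatenate([np.zeros(500), np.ones(30 + 470)]).astype(int)

# Shuffle so training batches mix real and synthetic examples.
perm = rng.permutation(len(images))
images, labels = images[perm], labels[perm]

counts = np.bincount(labels)
print(counts)  # both classes now have 500 examples
```

The balanced `images`/`labels` arrays would then be fed to a deep convolutional classifier, as in the pipeline the article describes.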
“It’s exciting because we’ve been able to overcome a hurdle in applying artificial intelligence to medicine by showing that these augmented datasets help to improve classification accuracy,” says Valaee. “Deep learning only works if the volume of training data is large enough and this is one way to ensure we have neural networks that can classify images with high precision.”
This article has been republished from materials provided by the University of Toronto. Note: material may have been edited for length and content. For further information, please contact the cited source.