

Machine Learning Improves Detection of Ovarian Lesions

(From left) The top row shows an ultrasound image of a malignant lesion, the blood oxygen saturation, and the hemoglobin concentration. The bottom row shows an ultrasound image of a benign lesion, the blood oxygen saturation, and the hemoglobin concentration. Credit: Quing Zhu lab/Washington University in St. Louis


Although ovarian cancer is the deadliest type of cancer for women, only about 20% of cases are found at an early stage, as there is no reliable screening test and few symptoms prompt investigation. Ovarian lesions are also difficult to diagnose accurately; in fact, more than 80% of women who undergo surgery to have lesions removed and tested show no sign of cancer.


Quing Zhu, the Edwin H. Murty Professor of Biomedical Engineering at Washington University in St. Louis' McKelvey School of Engineering, and members of her lab have applied a variety of imaging methods to diagnose ovarian cancer more accurately. Now, they have developed a new machine learning fusion model that uses existing ultrasound features of ovarian lesions to train the model to recognize whether a lesion is benign or cancerous from reconstructed images taken with photoacoustic tomography. Machine learning traditionally has focused on single-modality data, but recent findings have shown that multi-modality machine learning is more robust than single-modality methods. In a pilot study of 35 patients with more than 600 regions of interest, the model's accuracy was 90%.


It is the first study to use ultrasound to enhance the machine learning performance of photoacoustic tomography reconstruction for cancer diagnosis. Results of the research were published in the December issue of the journal Photoacoustics.


“Existing modalities are mainly based on the size and shape of the ovarian lesions, which do not provide an accurate diagnosis for earlier ovarian cancer and for risk assessment of large adnexal/ovarian lesions,” said Zhu, also a professor of radiology at the School of Medicine. “Photoacoustic imaging adds more functional information about vascular contrast from hemoglobin concentration and blood oxygen saturation.”


Yun Zou, a doctoral student in Zhu's lab, developed the new machine learning fusion model by combining an ultrasound neural network with a photoacoustic tomography neural network to diagnose ovarian lesions. Cancerous lesions of the ovaries can present in several different morphologies on ultrasound: some are solid, and others have papillary projections inside cystic lesions, making them more difficult to diagnose. To improve on diagnosis from ultrasound alone, the team added the total hemoglobin concentration and blood oxygen saturation from photoacoustic imaging, both of which are biomarkers for cancerous ovarian tissue.
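The paper's actual network architecture is not described in this article, but the general idea of fusing an ultrasound branch with a photoacoustic branch can be illustrated with a minimal late-fusion sketch. Everything below is hypothetical: the feature names, the fixed weights, and the `fused_score` function are illustrative stand-ins, not the authors' implementation.

```python
import math

def sigmoid(x):
    """Squash a raw score into a (0, 1) probability-like value."""
    return 1.0 / (1.0 + math.exp(-x))

def ultrasound_branch(us_features):
    # Hypothetical ultrasound branch: scores morphology features
    # (e.g., solid-area fraction, presence of papillary projections)
    # with fixed illustrative weights.
    weights = [1.2, 0.8]
    return sigmoid(sum(w * f for w, f in zip(weights, us_features)))

def photoacoustic_branch(hbt, so2):
    # Hypothetical photoacoustic branch: malignant tissue tends toward
    # higher total hemoglobin (hbt) and lower oxygen saturation (so2),
    # so so2 enters with a negative weight.
    return sigmoid(2.0 * hbt - 3.0 * (so2 - 0.5))

def fused_score(us_features, hbt, so2):
    # Late fusion: average the two branch probabilities to get a
    # single malignancy score between 0 and 1.
    p_us = ultrasound_branch(us_features)
    p_pa = photoacoustic_branch(hbt, so2)
    return 0.5 * (p_us + p_pa)
```

Under this sketch, a lesion with solid morphology, high hemoglobin, and low oxygen saturation scores higher than a cystic, well-oxygenated one, which is the qualitative behavior the fusion approach relies on.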


“Our results showed that the ultrasound-enhanced photoacoustic imaging fusion model reconstructed the target’s total hemoglobin and blood oxygen saturation maps more accurately than other methods and provided an improved diagnosis of ovarian cancers from benign lesions,” Zou said.


Reference: Zou Y, Amidi E, Luo H, Zhu Q. Ultrasound-enhanced Unet model for quantitative photoacoustic tomography of ovarian lesions. Photoacoustics. 2022;28:100420. doi: 10.1016/j.pacs.2022.100420


This article has been republished from the following materials. Note: material may have been edited for length and content. For further information, please contact the cited source.