
AI-Powered Voice Analysis for Screening Anxiety and Depression

Doctor holding a patient's hand in a supportive gesture during a medical diagnosis. Credit: iStock.

Read time: 2 minutes

Researchers from the National Center for Supercomputing Applications and the University of Illinois College of Medicine Peoria have developed an automated method to screen for anxiety and depression using short voice recordings. The study, published in JASA Express Letters, demonstrates the potential of machine learning-based acoustic voice analysis to identify individuals with comorbid anxiety and depression.

Using speech to identify mental health conditions

The project used acoustic and phonemic features of speech collected during semantic verbal fluency tests. Participants completed a one-minute naming task while researchers recorded and analyzed their speech. These recordings were then used to train machine learning models capable of distinguishing individuals with comorbid anxiety and depression from those without known mental health conditions.


Semantic verbal fluency test

A short cognitive assessment where individuals are asked to name as many items as possible from a category (e.g. animals) within a limited time. It is often used in neuropsychological evaluations.

Acoustic features

Characteristics of sound, such as pitch, volume and duration, that can be measured and analyzed to assess aspects of speech or detect anomalies linked to health conditions.

Phonemic analysis

The study of speech sounds and their patterns. In this context, it involves analyzing how depression and anxiety may alter the pronunciation or structure of speech.
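The naming task described above is typically scored by counting unique valid responses within the time limit. A minimal sketch of that scoring logic, using a hypothetical category list (the study's actual scoring procedure is not detailed in this article):

```python
def fluency_score(responses, category_items):
    """Count unique valid answers, ignoring repeats and letter case."""
    seen = {r.strip().lower() for r in responses}
    return len(seen & category_items)

# Hypothetical reference set for the "animals" category
animals = {"cat", "dog", "zebra", "lion", "horse"}

# Repeats ("cat") and invalid entries ("unicorn") do not add to the score
fluency_score(["Cat", "dog", "cat", "unicorn"], animals)  # → 2
```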


The custom dataset used for model training included participants with a range of depression and anxiety severity levels. People with other conditions that could influence speech, such as neurological disorders, were excluded to maintain model specificity.
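The general pipeline described here — extracting acoustic features from recordings and training a classifier on them — can be sketched as follows. This is an illustrative toy example on synthetic signals, not the authors' feature set or model: the features (RMS energy, zero-crossing rate, pause fraction) and the nearest-centroid classifier are stand-ins chosen for simplicity.

```python
import numpy as np

def extract_features(signal, frame_len=400):
    """Compute three illustrative acoustic features from a mono waveform."""
    # RMS energy: overall loudness of the sample
    rms = np.sqrt(np.mean(signal ** 2))
    # Zero-crossing rate: a rough proxy for pitch/spectral content
    zcr = np.mean(np.abs(np.diff(np.sign(signal)))) / 2
    # Pause fraction: share of low-energy frames, a crude speech-timing cue
    n = len(signal) // frame_len * frame_len
    frame_rms = np.sqrt(np.mean(signal[:n].reshape(-1, frame_len) ** 2, axis=1))
    pause_frac = np.mean(frame_rms < 0.1 * frame_rms.max())
    return np.array([rms, zcr, pause_frac])

rng = np.random.default_rng(0)

def make_sample(pitch_hz, amp, sr=16000, dur=1.0):
    """Synthesize a noisy tone as a stand-in for a one-minute recording."""
    t = np.arange(int(sr * dur)) / sr
    return amp * np.sin(2 * np.pi * pitch_hz * t) + 0.01 * rng.standard_normal(len(t))

# Two synthetic groups differing in pitch and loudness (labels 0 and 1)
group_a = [extract_features(make_sample(110, 0.3)) for _ in range(10)]
group_b = [extract_features(make_sample(220, 0.8)) for _ in range(10)]
X = np.array(group_a + group_b)
y = np.array([0] * 10 + [1] * 10)

# Nearest-centroid classifier: assign each sample to the closest class mean
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(np.linalg.norm(X[:, None] - centroids[None], axis=2), axis=1)
accuracy = np.mean(pred == y)
```

Real systems would use richer features (e.g., spectral and prosodic descriptors) and evaluate on held-out data rather than the training set.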

Screening gaps in mental health care

Anxiety and major depression are among the most common mental health disorders in the United States, affecting 19.1% and 8.3% of adults respectively. Despite their high prevalence, many people remain undiagnosed and untreated. Barriers such as social stigma, limited access to healthcare, financial constraints and low recognition of need contribute to low screening and treatment rates.


The study’s authors suggest that voice-based screening, which can be implemented via web platforms, mobile applications or in clinics, could help reduce these barriers. As a non-invasive, easily deployable tool, the technology could enable more people to be screened in a timely, scalable way.

Explainable AI enhances clinical value

One of the key features of the models developed in the study is their explainability. This means that the algorithms not only identify likely cases of comorbid depression and anxiety but also provide interpretable outputs about the speech features that led to their conclusions. This could offer clinicians insights into how these disorders manifest in language and speech patterns.


Explainable AI

Artificial intelligence systems designed to provide human-understandable reasons for their outputs. This feature is critical for clinical settings where interpretability supports decision-making.


The researchers highlighted the clinical implications of this technology for routine mental health screening and ongoing monitoring. By incorporating voice-based assessment into existing healthcare systems, providers may gain a low-cost and reliable option to expand screening coverage and tailor interventions to individuals’ needs.
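The article does not specify how the models' explanations are produced. One common pattern for interpretable outputs is a linear scoring model, where each feature's contribution is its learned weight times its standardized value, and the largest contribution names the feature driving the decision. A sketch with hypothetical feature names and weights:

```python
import numpy as np

# Hypothetical learned weights and training statistics (illustrative only)
feature_names = ["mean_pitch", "speech_rate", "pause_fraction"]
weights = np.array([-0.8, -1.2, 1.5])   # sign gives direction of association
mean = np.array([180.0, 4.0, 0.15])     # training-set feature means
std = np.array([40.0, 1.0, 0.05])       # training-set feature std devs

def explain(sample):
    """Return the screening score and each feature's signed contribution."""
    z = (sample - mean) / std           # standardize the incoming sample
    contributions = weights * z         # per-feature share of the score
    return contributions.sum(), dict(zip(feature_names, contributions))

# A sample with low pitch, slow speech and many pauses
score, contrib = explain(np.array([150.0, 3.0, 0.30]))
# The largest |contribution| identifies the feature driving the decision
top = max(contrib, key=lambda k: abs(contrib[k]))
```

Reporting `top` alongside the score gives a clinician a human-readable reason for the flag, which is the kind of interpretability the study's authors emphasize.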


Reference: Pietrowicz M, Cunningham K, Thompson DJ, et al. Automated acoustic voice screening techniques for comorbid depression and anxiety disorders. JASA Express Letters. 2025;5(2):024401. doi: 10.1121/10.0034851


This article has been republished from the following materials. Note: material may have been edited for length and content. For further information, please contact the cited source. Our press release publishing policy can be accessed here.


This content includes text that has been generated with the assistance of AI. Technology Networks' AI policy can be found here.