Brain Region Linked to Hearing Also Deciphers Speech Intent
A study finds that Heschl’s gyrus helps interpret speech pitch, reshaping how scientists understand language processing.

A new study from Northwestern University’s School of Communication, the University of Pittsburgh and the University of Wisconsin-Madison has identified an unexpected role for a brain region long associated with early auditory processing. The research, published in Nature Communications, shows that Heschl’s gyrus, a part of the auditory cortex, does more than just process sounds – it translates pitch variations, known as prosody, into meaningful linguistic cues that help people interpret emphasis, intent and focus in speech.
Prosody
The rhythm, stress and intonation patterns in spoken language that help convey meaning beyond the literal words.
Heschl’s gyrus
A region in the auditory cortex involved in processing sound, now shown to play a role in speech melody interpretation.
Rethinking how the brain processes prosody
Scientists have traditionally believed that prosody – the pitch changes that convey meaning in speech – was handled primarily by the superior temporal gyrus, a brain region involved in speech perception. The new findings suggest that prosody is processed much earlier than previously thought, challenging long-standing models of speech comprehension.
“We’ve spent a few decades researching the nuances of how speech is abstracted in the brain, but this is the first study to investigate how subtle variations in pitch that also communicate meaning are processed in the brain.”
Dr. Bharath Chandrasekaran.
Superior temporal gyrus
A brain area known for processing spoken language and speech perception.
Researchers used high-resolution brain recordings from adolescents undergoing epilepsy treatment to measure brain activity while they listened to an audiobook recording of Alice in Wonderland. These recordings provided an unprecedented look at how pitch information is processed in real time.
Unique research approach in epilepsy patients
To gather this data, the team worked with 11 adolescent patients who had electrodes implanted deep in their brains as part of neurosurgical treatment for severe epilepsy. These electrodes allowed scientists to directly monitor neural activity in key speech-processing regions, offering a level of precision that non-invasive methods cannot achieve.
By tracking brain activity as the participants listened to natural speech, researchers found that Heschl’s gyrus encoded changes in pitch not just as raw sound but as structured linguistic information. The brain treated pitch accents – the slight variations in tone that change meaning in spoken language – separately from the sounds that make up words.
Pitch accents
Variations in pitch that signal emphasis or meaning in spoken language.
Stable representations of speech melody
The study revealed that, despite the natural variation in pitch every time a person speaks, the brain forms stable representations of these patterns to aid understanding. This suggests that prosodic information is extracted and processed much earlier in the auditory system than previously thought.
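As a loose computational illustration of what such a stable, speaker-independent representation might look like, the sketch below z-scores log-pitch within a speaker so that the same rise-fall accent shape lines up across a low voice and a high voice. This is not the study's method; the helper normalize_contour and the contour values are invented for the example, and only NumPy is assumed.

```python
# Illustrative sketch only: z-scoring a pitch contour within a speaker is
# one simple way to make the *shape* of a pitch accent comparable across
# speakers, loosely analogous to the abstraction the study describes.
import numpy as np

def normalize_contour(f0_hz: np.ndarray) -> np.ndarray:
    """Convert an F0 contour (Hz) to speaker-relative z-scores.

    NaN frames (unvoiced speech) are ignored in the statistics and
    preserved in the output.
    """
    log_f0 = np.log(f0_hz)          # pitch is perceived roughly logarithmically
    mean = np.nanmean(log_f0)
    std = np.nanstd(log_f0)
    return (log_f0 - mean) / std

# Two hypothetical speakers producing the "same" rise-fall accent at
# different absolute pitches: the normalized shapes coincide.
low_voice = np.array([100.0, 110.0, 130.0, 115.0, 105.0])    # Hz
high_voice = np.array([200.0, 220.0, 260.0, 230.0, 210.0])   # Hz
print(normalize_contour(low_voice))
print(normalize_contour(high_voice))
```

Because the second contour differs from the first only by a multiplicative factor, their normalized shapes are identical – a crude analogue of the stable pitch-accent representations the study reports, which survive the natural pitch variation between speakers and utterances.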
Additional experiments with non-human primates showed that, while their brains processed the same acoustic cues, they lacked the ability to abstract pitch accents in the same way humans do. This underscores the unique role of linguistic experience in shaping speech perception.
Implications for speech disorders and artificial intelligence
Understanding how the brain deciphers prosody could have significant implications for speech and language disorders. These findings may contribute to the development of new interventions for individuals with conditions such as autism, stroke-related dysprosody and language-based learning differences.
Dysprosody
A speech disorder that affects the normal rhythm, pitch and intonation of speech, often occurring after brain injury or stroke.
Additionally, the study highlights the potential to improve AI-driven speech recognition systems. By incorporating early prosodic processing into their models, developers could create voice assistants that more accurately interpret a speaker's intent and the natural patterns of human communication.
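To make that concrete, the minimal sketch below shows one way a speech pipeline might expose pitch (F0) as an explicit prosodic feature rather than discarding it. It assumes the open-source librosa library for pitch tracking; the filename alice_excerpt.wav and the 20% threshold for flagging "accent-like" frames are arbitrary placeholders for illustration, not anything taken from the study or from a production system.

```python
# Illustrative sketch only: extract a frame-by-frame pitch (F0) contour
# from a speech recording and flag unusually high-pitch frames, as a
# stand-in for treating prosody as information in its own right.
import librosa
import numpy as np

# Load a short speech excerpt at its native sampling rate
# ("alice_excerpt.wav" is a hypothetical file, not study material)
y, sr = librosa.load("alice_excerpt.wav", sr=None)

# Estimate the fundamental frequency frame by frame with the
# probabilistic YIN algorithm; unvoiced frames come back as NaN
f0, voiced_flag, voiced_prob = librosa.pyin(
    y,
    fmin=librosa.note_to_hz("C2"),   # ~65 Hz, low end of speech F0
    fmax=librosa.note_to_hz("C6"),   # ~1047 Hz, well above typical speech
    sr=sr,
)

# A crude stand-in for pitch-accent detection: flag voiced frames whose
# F0 rises well above the speaker's median pitch (threshold is arbitrary)
median_f0 = np.nanmedian(f0)
accent_frames = np.where(f0 > 1.2 * median_f0)[0]
print(f"Median F0: {median_f0:.1f} Hz; "
      f"{len(accent_frames)} frames flagged as high-pitch")
```

A real recognizer would more likely feed the whole F0 contour into the model as an additional input stream rather than thresholding it, but the sketch captures the basic idea: pitch is extracted and used as a linguistic cue, not treated as a byproduct of the spectrogram.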
Reference: Gnanateja GN, Rupp K, Llanos F, et al. Cortical processing of discrete prosodic patterns in continuous speech. Nat Commun. 2025;16(1):1947. doi: 10.1038/s41467-025-56779-w