“Time-Stamps” Help the Brain Decode Speech


We take our ability to understand human speech for granted. But our innate skill at decoding the complex and varied sounds we use to communicate remains something of a mystery to scientists studying the brain.


A new research paper has provided some answers, showing that the brain “time-stamps” sounds arriving at the ear. This ability, in combination with discrete neuronal populations capable of identifying different types of sounds, explains how we perceive the “what” and “when” of speech. The new findings were published in the journal Nature Communications.

How the brain processes speech

The study was led by Dr. Laura Gwilliams, a postdoctoral fellow at the University of California, San Francisco. “To understand speech, your brain needs to accurately interpret both the speech sounds’ identity and the order that they were uttered to correctly recognize the words being said,” explained Gwilliams in a press release.


“We show how the brain achieves this feat: different sounds are responded to with different neural populations. And each sound is time-stamped with how much time has gone by since it entered the ear. This allows the listener to know both the order and the identity of the sounds that someone is saying to correctly figure out what words the person is saying.”


The brain’s ability to process single sounds has been somewhat demystified by years of careful study. But human speech bursts forth in a wild torrent of varying speed, noise level and accent. If we can better understand how the brain tackles these speedy sequences, we might also better understand neurological diseases, like aphasia, that affect the brain’s ability to process speech.

Telling words apart

Gwilliams and colleagues recorded the brain activity of 21 participants while they listened to audiobook recordings in their native English for two hours. The researchers focused on activity that appeared in response to individual speech sounds, called phonemes, which help us tell words apart, such as the “p” and “t” sounds that distinguish “pip” from “pit”.


The brain needs to be able to process phonemes both at high speed and in the order they were heard (otherwise “pit” could be mistaken for “tip”). The researchers found that the brain continuously processes the three most recently heard speech sounds together, passing activity among neurons in the auditory cortex to avoid a pile-up of information. The brain activity representing each phoneme changes over time, and these changes are time-stamped by the brain, allowing the temporal sequence of the sounds to be identified.
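
To make the idea concrete, here is a toy sketch in Python. This is our illustration of the concept, not the authors’ model or analysis code: the rolling window is represented as a buffer holding the three most recent phonemes, each tagged with how long ago it arrived, so that sound identity and relative order can be read out together. The function names and the 100-millisecond step are assumptions made purely for illustration.

from collections import deque

WINDOW = 3  # the three most recently heard speech sounds

def process_stream(phonemes, step_ms=100):
    """Yield, at each step, the active sounds with their elapsed-time tags."""
    buffer = deque(maxlen=WINDOW)  # older sounds drop out, avoiding a pile-up
    for t, phoneme in enumerate(phonemes):
        buffer.append((phoneme, t * step_ms))  # record each sound's arrival time
        now = t * step_ms
        # Each active sound carries the time elapsed since it arrived, so
        # "pit" and "tip" produce different (identity, time-stamp) patterns.
        yield [(p, now - arrived) for p, arrived in buffer]

for snapshot in process_stream(["p", "i", "t"]):
    print(snapshot)
# [('p', 0)]
# [('p', 100), ('i', 0)]
# [('p', 200), ('i', 100), ('t', 0)]

Running the same sketch on “t-i-p” would give the same three identities but a different pattern of time-stamps, which is how, on this toy account, order can be preserved without dedicated “first/second/third” slots.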


“We found that each speech sound initiates a cascade of neurons firing in different places in the auditory cortex. This means that the information about each individual sound in the phonetic word ‘k-a-t’ gets passed between different neural populations in a predictable way, which serves to time-stamp each sound with its relative order,” concluded Gwilliams.


Reference
Gwilliams L, King JR, Marantz A, Poeppel D. Neural dynamics of phoneme sequences reveal position-invariant code for content and order. Nat Commun. 2022;13(1):6606. doi:10.1038/s41467-022-34326-1


This article is a rework of a press release issued by NYU. Material has been edited for length and content.