How the Brain Can Identify One Voice in a Noisy Crowd
Credit: RODRIGO GONZALEZ/Unsplash

In a crowded room where many people are talking, such as at a family birthday party or in a busy restaurant, our brains have the ability to focus attention on a single speaker. Understanding this scenario, and how the brain processes stimuli like speech, language, and music, has been the research focus of Edmund Lalor, Ph.D., associate professor of Neuroscience and Biomedical Engineering at the University of Rochester Medical Center.


Recently, his lab found a new clue to how the brain is able to unpack this information and intentionally hear one speaker while filtering out or ignoring another. The brain takes an extra step to understand the words coming from the speaker being listened to, and does not take that step with the other words swirling around the conversation. “Our findings suggest that the acoustics of both the attended story and the unattended or ignored story are processed similarly,” said Lalor. “But we found there was a clear distinction between what happened next in the brain.”


For this study, recently published in The Journal of Neuroscience, participants simultaneously listened to two stories but were asked to focus their attention on only one. Using EEG brainwave recordings, the researchers found that the story participants were instructed to pay attention to was converted into linguistic units known as phonemes – the units of sound that can distinguish one word from another – while the other story was not. “That conversion is the first step towards understanding the attended story,” Lalor said. “Sounds need to be recognized as corresponding to specific linguistic categories like phonemes and syllables, so that we can ultimately determine what words are being spoken, even if they sound different – for example, spoken by people with different accents or different voice pitches.”

Co-authors on the paper include Rochester graduate student Farhin Ahmed and Emily Teoh of Trinity College, University of Dublin. The research was funded by Science Foundation Ireland, the Irish Research Council's Government of Ireland programme, the Del Monte Institute for Neuroscience Pilot Program, and the National Institute on Deafness and Other Communication Disorders.
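For readers curious how an effect like this is measured, the sketch below illustrates the general encoding-model logic often used in this literature: regress the EEG onto time-lagged copies of a stimulus feature and compare how well attended versus ignored features predict the recording. This is a minimal toy version with synthetic signals, not the study's actual analysis; the random feature time series and response kernel stand in for real speech features such as envelopes or phoneme trains.

```python
import numpy as np

rng = np.random.default_rng(0)

def lagged_design(stim, n_lags):
    """Design matrix of time-lagged copies of a stimulus feature, so a
    linear model can capture the brain's delayed response."""
    n = len(stim)
    X = np.zeros((n, n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = stim[:n - lag]
    return X

def fit_trf(stim, eeg, n_lags=32, ridge=1.0):
    """Ridge-regress the EEG onto lagged stimulus features."""
    X = lagged_design(stim, n_lags)
    return np.linalg.solve(X.T @ X + ridge * np.eye(n_lags), X.T @ eeg)

def prediction_corr(stim, eeg, w, n_lags=32):
    """How well the fitted model predicts the EEG from this feature."""
    pred = lagged_design(stim, n_lags) @ w
    return np.corrcoef(pred, eeg)[0, 1]

# Toy data: two competing speech streams, each reduced to one feature
# time series (random stand-ins for an envelope or phoneme train).
fs, dur = 64, 120                      # 64 Hz feature rate, 2 minutes
n = fs * dur
attended = rng.standard_normal(n)
ignored = rng.standard_normal(n)

# Simulated EEG that encodes only the attended stream (via a lagged
# response kernel) plus noise; the ignored stream is absent by design.
kernel = np.exp(-np.arange(32) / 6.0)
eeg = lagged_design(attended, 32) @ kernel + 2.0 * rng.standard_normal(n)

for name, stim in [("attended", attended), ("ignored", ignored)]:
    w = fit_trf(stim, eeg)
    print(name, round(prediction_corr(stim, eeg, w), 3))
# Expected: the attended feature predicts the EEG far better than the
# ignored one, mirroring the attention effect measured in the study.
```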


This work is a continuation of a 2015 study led by Lalor that was published in the journal Cerebral Cortex. That research was recently awarded the 2021 Misha Mahowald Prize for Neuromorphic Engineering for its impact on technology aimed at helping people with disabilities improve their sensory and motor interaction with the world, such as by developing better wearable devices like hearing aids. The research originated at the 2012 Telluride Neuromorphic Engineering Cognition Workshop and led to the multi-institution Cognitively Controlled Hearing Aid project funded by the European Union, which successfully demonstrated a real-time Auditory Attention Decoding system.

“Receiving this prize is a great honor in two ways. First, it is generally very nice to be recognized by one's peers for having produced valuable and impactful work. This community is made up of neuroscientists and engineers, so to be recognized by them is very gratifying,” Lalor said. “And, second, it is a great honor to be connected to Misha Mahowald, who was a pioneer in the field of neuromorphic engineering and who passed away far too young.”


John Foxe, Ph.D., director of the Del Monte Institute for Neuroscience, was a co-author on that 2015 study, which showed it was possible to use EEG brainwave signals to determine who a person was paying attention to in a multi-speaker environment. The work was novel in that it went beyond the standard approach of looking at effects on average brain signals. “Our research showed that – almost in real time – we could decode signals to accurately figure out who you were paying attention to,” said Lalor.
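As a rough illustration of what such attention decoding involves, the self-contained Python sketch below trains a backward (stimulus-reconstruction) model on synthetic data, then, on held-out windows, labels as "attended" whichever candidate speech envelope better matches the reconstruction. Everything here is an assumption for illustration – the single toy "EEG" channel, the fixed 8-sample delay, the window length – whereas a real system works with multichannel recordings and actual speech.

```python
import numpy as np

rng = np.random.default_rng(1)

def future_lagged(x, n_lags):
    """Columns are x[t], x[t+1], ..., x[t+n_lags-1]: a backward model
    reconstructs the stimulus from EEG samples that follow it in time."""
    X = np.zeros((len(x), n_lags))
    for k in range(n_lags):
        X[:len(x) - k, k] = x[k:]
    return X

fs, n_lags = 64, 16
n = fs * 300                               # 5 minutes of toy data
env_a = np.abs(rng.standard_normal(n))     # stand-in attended envelope
env_b = np.abs(rng.standard_normal(n))     # stand-in ignored envelope

# A single toy "EEG" channel that tracks the attended envelope with a
# fixed neural delay of 8 samples, buried in noise.
eeg = np.roll(env_a, 8) + 1.5 * rng.standard_normal(n)

# Train a backward (decoding) model on the first half of the data:
# reconstruct the attended envelope from future-lagged EEG via ridge.
half = n // 2
X = future_lagged(eeg, n_lags)
w = np.linalg.solve(X[:half].T @ X[:half] + np.eye(n_lags),
                    X[:half].T @ env_a[:half])

# Decode attention on held-out 10-second windows: whichever candidate
# envelope correlates better with the reconstruction wins.
recon = X[half:] @ w
win = fs * 10
n_win = (n - half) // win
correct = 0
for i in range(n_win):
    s = slice(i * win, (i + 1) * win)
    r_a = np.corrcoef(recon[s], env_a[half:][s])[0, 1]
    r_b = np.corrcoef(recon[s], env_b[half:][s])[0, 1]
    correct += int(r_a > r_b)
print(f"decoded the attended speaker in {correct}/{n_win} windows")
```

The windowed-correlation decision rule is one simple choice among several used in auditory attention decoding; shorter windows make the decision faster but noisier, which is the core trade-off for real-time applications such as cognitively controlled hearing aids.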


Reference: Teoh ES, Ahmed F, Lalor EC. Attention differentially affects acoustic and phonetic feature encoding in a multispeaker environment. J Neurosci. 2021. doi: 10.1523/JNEUROSCI.1455-20.2021


This article has been republished from the following materials. Note: material may have been edited for length and content. For further information, please contact the cited source.

