The Human Brain Separates Sounds Without Even Listening

Neurobiologists from HSE University and the RAS Institute of Higher Nervous Activity and Neurophysiology have shown that the human brain unconsciously distinguishes between even very similar sound signals during passive listening. The study was published in Neuropsychologia.


Our auditory system can detect differences between sounds at an implicit level: the brain distinguishes even very similar sounds, but we do not always consciously recognise the differences. The researchers demonstrated this in a study of sound perception during passive listening, when the subject is not deliberately trying to hear the differences.


To investigate this, the researchers carried out an experiment with 20 healthy volunteers. The participants listened to sounds while the researchers used electroencephalography (EEG) to measure their brain responses to the stimuli. The sounds were so similar that the participants could only explicitly distinguish them with 40% accuracy.


First, the volunteers listened to sequences of three sounds in which one sound was repeated often, while the other two appeared rarely. The participants were asked to press a key if they heard a difference in the sounds. Then, in passive listening mode, the same sounds appeared in more elaborate sequences: groups of five identical sounds and groups in which the fifth sound was different.


Two types of sound sequences were used in the experiment: those with local irregularities and those with global irregularities. In the first type, groups of similar sounds were often repeated, while a group with a different sound at the end appeared randomly and rarely. In the second type, groups with a different sound at the end appeared often and groups of similar sounds appeared rarely.
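
To make the design concrete, here is a minimal sketch (an illustration, not the authors' stimulus code) of how such blocks could be generated. The group size of five follows the description above; the proportion of rare groups and the sound labels are assumptions:

```python
import random

def make_block(rare_group_is_identical: bool, n_groups: int = 100,
               p_rare: float = 0.2) -> list[list[str]]:
    """Generate one block of five-sound groups ('x' = base sound, 'Y' = different).

    rare_group_is_identical=False: all-identical groups are frequent, and
    groups ending in a different sound are the rare deviants (local
    irregularity that is also globally rare).
    rare_group_is_identical=True: groups ending in a different sound are
    frequent, and all-identical groups are the rare, globally deviant ones.
    """
    identical = ["x"] * 5
    local_deviant = ["x"] * 4 + ["Y"]
    frequent, rare = ((local_deviant, identical) if rare_group_is_identical
                      else (identical, local_deviant))
    return [rare if random.random() < p_rare else frequent
            for _ in range(n_groups)]

# First sequence type: the rare group carries a deviant fifth sound
block = make_block(rare_group_is_identical=False)
print(block[:3])
```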


Detecting these two types of sound sequences requires attention at different levels: the brain reacts differently to each, and EEG registers different event-related potentials. A local irregularity can be detected without explicit attention and elicits the mismatch negativity (MMN) and P3a potentials. A global irregularity demands concentration and elicits the P3b potential, which reflects a higher level of conscious processing. The same potentials were registered in earlier experiments using this methodology. The current study by researchers from the HSE Institute for Cognitive Neuroscience and the RAS Institute of Higher Nervous Activity and Neurophysiology differs in that it used sounds that are barely distinguishable, whereas in earlier studies the stimuli (sounds or images) could be recognised with 100% accuracy.
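
As an illustration of how such potentials are measured (not the authors' analysis pipeline), a difference wave such as the MMN can be computed from epoched EEG with the open-source MNE-Python library. The file name and event codes below are hypothetical:

```python
import mne

# Load a (hypothetical) raw EEG recording and its event markers
raw = mne.io.read_raw_fif("passive_listening_raw.fif", preload=True)
events = mne.find_events(raw)

# Assumed event codes: 1 = frequent (standard) sound, 2 = rare local deviant
event_id = {"standard": 1, "deviant": 2}
epochs = mne.Epochs(raw, events, event_id, tmin=-0.1, tmax=0.6,
                    baseline=(None, 0), preload=True)

# The MMN is the deviant-minus-standard difference wave, typically peaking
# around 100-250 ms after stimulus onset at frontocentral electrodes
mmn = mne.combine_evoked(
    [epochs["deviant"].average(), epochs["standard"].average()],
    weights=[1, -1],
)
mmn.plot()
```

The same epoch-and-subtract logic, with later time windows and different conditions, applies to the P3a, P3b, and N400 components.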


In this experiment, the global irregularity during passive listening elicited an N400 potential rather than the P3b. The appearance of the N400 potential supports an existing theory of how consciousness works. According to the theory of predictive coding, the brain builds a model of the environment from experience and uses predictions to optimise its operation. When it encounters experiences that contradict these predictions, its model of the world is updated. This process forms the basis of implicit (unconscious) learning and serves to minimise prediction errors, enabling better adaptation and faster reaction to changes in the environment.
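
As a toy sketch of the predictive coding idea (an illustration, not a model from the study), a running prediction can be nudged toward each observation in proportion to the prediction error, so a surprising input produces a large error and a large update:

```python
mu = 0.0                                # current prediction of the input
for x in [1.0, 1.0, 1.0, 1.0, 5.0]:    # a stream with a surprising final input
    error = x - mu                      # prediction error: observation minus prediction
    mu += 0.5 * error                   # update the model to reduce future errors
    print(f"observation={x:.1f}  error={error:+.2f}  prediction={mu:.2f}")
```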


The results of the study are important for fundamental science, as they support the predictive coding model, and they have possible applications in clinical research. For example, the P3b and N400 potentials could be used to evaluate the level of consciousness in patients who are unable to react explicitly to stimuli (as in Alzheimer's disease, Parkinson's disease, coma, etc.).


Reference: Liaukovich K, Ukraintseva Y, Martynova O. Implicit auditory perception of local and global irregularities in passive listening condition. Neuropsychologia. 2022;165:108129. doi:10.1016/j.neuropsychologia.2021.108129


This article has been republished from the following materials. Note: material may have been edited for length and content. For further information, please contact the cited source.
