

What Music Are You Listening To? Your Brainwaves Encode the Answer

Waves in red, blue and yellow run horizontally over an illustration of the human head and brain. Credit: Gerd Altmann/Pixabay

Researchers at the University of Essex hope the project could help people with severe communication disabilities, such as those with locked-in syndrome or stroke survivors, by decoding the language signals in their brains through non-invasive techniques.


Dr Ian Daly, from Essex’s School of Computer Science and Electronic Engineering, who led the research, said: “This method has many potential applications. We have shown we can decode music, which suggests that we may, one day, be able to decode language from the brain.”


Essex scientists wanted to find a less invasive way of decoding acoustic information from signals in the brain to identify and reconstruct a piece of music someone was listening to.


Whilst there have been successful previous studies monitoring and reconstructing acoustic information from brain waves, many have used more invasive methods such as electrocorticography (ECoG), which involves placing electrodes inside the skull to monitor the actual surface of the brain.


The research, published in the journal Scientific Reports, used a combination of two non-invasive methods: functional magnetic resonance imaging (fMRI), which measures blood flow through the entire brain, and electroencephalography (EEG), which measures what is happening in the brain in real time. Together, these monitored a person’s brain activity whilst they listened to a piece of music. Using a deep learning neural network model, the data were translated to reconstruct and identify the piece of music.
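The paper does not publish its architecture in this summary, so as a rough illustration only, here is a minimal PyTorch sketch of how a bi-modal EEG + fMRI classifier over a fixed set of candidate pieces might be structured. All layer sizes, names and the late-fusion design are assumptions, not the study’s actual model:

```python
import torch
import torch.nn as nn

class BimodalMusicDecoder(nn.Module):
    """Hypothetical sketch: fuse EEG time series and fMRI features to
    classify which of 36 candidate pieces a listener heard. Layer sizes
    and the late-fusion strategy are illustrative assumptions."""

    def __init__(self, n_eeg_channels=64, n_fmri_features=1024, n_pieces=36):
        super().__init__()
        # Temporal convolutions summarise the fast EEG signal over time.
        self.eeg_encoder = nn.Sequential(
            nn.Conv1d(n_eeg_channels, 128, kernel_size=7, stride=2),
            nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=7, stride=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
            nn.Flatten(),
        )
        # A small MLP embeds the (much slower) fMRI response.
        self.fmri_encoder = nn.Sequential(
            nn.Linear(n_fmri_features, 128),
            nn.ReLU(),
        )
        # Late fusion: concatenate both embeddings, then classify.
        self.classifier = nn.Linear(128 + 128, n_pieces)

    def forward(self, eeg, fmri):
        # eeg: (batch, channels, time); fmri: (batch, features)
        fused = torch.cat([self.eeg_encoder(eeg), self.fmri_encoder(fmri)], dim=1)
        return self.classifier(fused)  # logits over candidate pieces

# Example forward pass on random data (shapes are placeholders).
model = BimodalMusicDecoder()
logits = model(torch.randn(2, 64, 1000), torch.randn(2, 1024))
print(logits.shape)  # torch.Size([2, 36])
```

The intuition behind fusing the two modalities is complementary coverage: EEG offers millisecond temporal resolution while fMRI offers whole-brain spatial coverage.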


Music is a complex acoustic signal, sharing many similarities with natural language, so the model could potentially be adapted to translate speech. The eventual goal of this strand of research would be to translate thought, which could offer an important aid in the future for people who struggle to communicate, such as those with locked-in syndrome.


Dr Daly said: “One application is brain-computer interfacing (BCI), which provides a communication channel directly between the brain and a computer. Obviously, this is a long way off but eventually we hope that if we can successfully decode language, we can use this to build communication aids, which is another important step towards the ultimate aim of BCI research and could, one day, provide a lifeline for people with severe communication disabilities.”


The research re-used fMRI and EEG data originally collected as part of a previous project at the University of Reading, in which participants listened to a series of 40-second pieces of simple piano music drawn from a set of 36 pieces that differed in tempo, pitch, harmony and rhythm. Using these combined data sets, the model was able to identify the piece of music with a success rate of 71.8%.
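To put that figure in perspective, guessing at random among 36 candidate pieces would succeed only about 2.8% of the time. A tiny snippet making the comparison explicit (the variable names are illustrative, not from the study):

```python
# Hypothetical illustration: compare the reported accuracy against
# chance on a 36-way piece-identification task.
n_pieces = 36
chance = 1 / n_pieces   # ~0.028, i.e. about 2.8%
reported = 0.718        # success rate reported in the paper
print(f"chance: {chance:.1%}, reported: {reported:.1%}, "
      f"lift: {reported / chance:.0f}x above chance")
```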


Reference: Daly I. Neural decoding of music from the EEG. Sci Rep. 2023;13(1):624. doi: 10.1038/s41598-022-27361-x


This article has been republished from the following materials. Note: material may have been edited for length and content. For further information, please contact the cited source.