I See What You're Thinking: Successful Image Reconstruction Using EEG
Dan Nemrodov (left) and Professor Adrian Nestor (center) have developed a technique that can harness brain-wave data gathered by EEG to show how our brains perceive images of faces. Credit: Ken Jones
A new technique developed by neuroscientists at the University of Toronto Scarborough can, for the first time, reconstruct images of what people perceive based on their brain activity recorded by electroencephalography (EEG).
The technique, developed by Dan Nemrodov, a postdoctoral fellow in Assistant Professor Adrian Nestor’s lab at U of T Scarborough, can digitally reconstruct images seen by test subjects from their EEG data.
“When we see something, our brain creates a mental percept, which is essentially a mental impression of that thing. We were able to capture this percept using EEG to get a direct illustration of what’s happening in the brain during this process,” says Nemrodov.
For the study, test subjects hooked up to EEG equipment were shown images of faces. Their brain activity was recorded and then used to digitally recreate the image in the subject’s mind using a technique based on machine learning algorithms.
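The article does not spell out the reconstruction pipeline, but decoding of this kind is often framed as learning a mapping from EEG features to a compact image representation that can then be inverted back into a picture. The sketch below is a minimal, hypothetical illustration using ridge regression and PCA on simulated data; the array shapes, variable names, and choice of model are assumptions for illustration, not the authors’ actual method.

```python
# Hypothetical sketch: decoding face images from EEG epochs.
# NOT the authors' pipeline; a generic linear-decoding illustration on simulated data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins for real data (actual inputs would be the recorded EEG epochs
# and the face images shown to participants).
n_trials, n_channels, n_times = 500, 64, 120   # EEG: trials x channels x time points
img_h, img_w = 64, 64                          # grayscale face images

eeg = rng.standard_normal((n_trials, n_channels, n_times))
faces = rng.random((n_trials, img_h * img_w))

# Flatten each EEG epoch into one feature vector per trial.
X = eeg.reshape(n_trials, -1)

# Compress the images into a low-dimensional "face space" with PCA.
pca = PCA(n_components=50)
Y = pca.fit_transform(faces)

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

# Learn a regularized linear map from EEG features to face-space coordinates.
model = Ridge(alpha=1.0)
model.fit(X_tr, Y_tr)

# Reconstruct held-out images by predicting face-space coordinates
# and inverting the PCA projection back to pixels.
Y_pred = model.predict(X_te)
reconstructed = pca.inverse_transform(Y_pred).reshape(-1, img_h, img_w)
print(reconstructed.shape)  # (number of test trials, 64, 64)
```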
It’s not the first time researchers have been able to reconstruct images from visual stimuli using neuroimaging techniques. The current method was pioneered by Nestor, who previously reconstructed facial images from functional magnetic resonance imaging (fMRI) data, but this is the first time EEG has been used.
And while techniques like fMRI – which measures brain activity by detecting changes in blood flow – can capture finer details of what’s going on in specific areas of the brain, EEG has greater practical potential given that it’s more common, portable, and inexpensive by comparison. EEG also has greater temporal resolution, meaning it can track in detail how a percept develops over time, right down to the millisecond, explains Nemrodov.
“fMRI captures activity at the time scale of seconds, but EEG captures activity at the millisecond scale. So we can see with very fine detail how the percept of a face develops in our brain using EEG,” he says. In fact, the researchers were able to estimate that it takes our brain about 170 milliseconds (0.17 seconds) to form a good representation of a face we see.
This study provides validation that EEG has potential for this type of image reconstruction, notes Nemrodov, something many researchers doubted was possible given its apparent limitations. Using EEG data for image reconstruction has great theoretical and practical potential from a neurotechnological standpoint, especially since it’s relatively inexpensive and portable.
In terms of next steps, work is currently underway in Nestor’s lab to test how image reconstruction based on EEG data could be extended to images retrieved from memory and applied to a wider range of objects beyond faces. The technique could eventually have wide-ranging clinical applications as well.
“It could provide a means of communication for people who are unable to verbally communicate. Not only could it produce a neural-based reconstruction of what a person is perceiving, but also of what they remember and imagine, of what they want to express,” says Nestor.
“It could also have forensic uses for law enforcement in gathering eyewitness information on potential suspects rather than relying on verbal descriptions provided to a sketch artist.”
The research, which is published in the journal eNeuro, was funded by the Natural Sciences and Engineering Research Council of Canada (NSERC) and by a Connaught New Researcher Award.
“What’s really exciting is that we’re not reconstructing squares and triangles but actual images of a person’s face, and that involves a lot of fine-grained visual detail,” adds Nestor.
“The fact we can reconstruct what someone experiences visually based on their brain activity opens up a lot of possibilities. It unveils the subjective content of our mind and it provides a way to access, explore and share the content of our perception, memory and imagination.”