

Merging With Machines: A Look at Emerging Neuroscience Technologies


Read time: 7 minutes

“We are already cyborgs.” – Elon Musk

Billionaire entrepreneur and technophile Elon Musk is just one of the prominent futurists echoing the sentiment that we are on the verge of a full merger with machines. “It’s increasingly hard to tell where I end and where the computer begins,” states historian, professor, and New York Times bestselling author Yuval Noah Harari in his keynote address at the Fast Company European Innovation Festival. “In the future, it is likely that the smartphone will not be separated from you at all. It may be embedded in our body or brain, constantly scanning your biometric data and your emotions.”

For now, your smartphone might sit in your pocket, but with an internet connection it puts the world’s wisdom and knowledge within arm’s reach. You can communicate with millions of people around the world in over a dozen different languages without ever learning those languages. You can track your biometric data and automatically receive performance feedback. You even unknowingly outsource your memories to the cloud, remembering less by knowing information than by knowing where to find it: you forget the directions but remember how to use Google Maps to reach your destination. In becoming symbiotic with machines, we are changing the very way we perceive and remember the world around us. The Internet is changing our brains.

For the most part, we still have to use our movements to control the machines that give us these superhuman abilities. But emerging neurotechnologies are changing that. They are taking us from movement-controlled to mind-controlled machines, from machine extensions of ourselves to machines integrated into ourselves.


Machines that know what you want

You can now control a computer or robotic limb with your thoughts. This is possible thanks to advancements in brain-machine interfaces (BMIs), which translate neuronal information into computer commands.

Although interest in BMIs has exploded in large part because of Musk’s secretive Neuralink project, research in this area started decades ago when UCLA professor Jacques Vidal posed the “BMI challenge” in 1973 – to control a graphical object using EEG signals. In 1977, he met his own challenge by using noninvasive EEG to move a cursor-like object through a maze on a computer screen.


Elon Musk's Neuralink project has brought BMIs into the public consciousness, despite having no peer-reviewed publications behind it. Credit: Neuralink


Fast forward to 2019, when BMIs can sense what we want using information associated with reward expectation. In the lab of Joe Francis, Professor of Biomedical Engineering at the University of Houston, researchers are hard at work improving this technology.

“We have moved from simple reward/no-reward cases to much more complex environments with multiple levels of reward and even punishment,” says Francis.

Referring to one of their papers published last year, Francis reports that they “have found that reward expectation changes the motor representation of movement and of BMI control.” To illustrate, he offered the example of reaching for an ice cream cone versus a piece of garbage while connected to a BMI: the device would record significantly different neural activity in the two situations, even though the movement itself is the same.

“The findings will be used for developing an autonomously updating and rapidly adaptive learning machine for BMIs to improve performance via reinforcement learning algorithms,” adds Junmo An, Research Assistant Professor at the University of Houston and a former postdoctoral researcher collaborating with Francis.

By applying reinforcement learning principles, the BMI can predict with high precision whether the user wants to reach for an object that delivers a rewarding outcome. This is possible because reward expectation changes the electrical representation of movement in the primary motor cortex in several measurable ways. “We developed near-perfect classifiers (up to 97% classification accuracy) to predict reward outcome using the integrated features of power spectral density of the local field potentials and spike-field coherence,” says An.
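As a rough illustration of the kind of decoding An describes, the sketch below trains a classifier to separate rewarding from non-rewarding trials using power-spectral-density features of local field potentials. It is a minimal, hypothetical example: the data are random placeholders, and the feature choices and model are assumptions rather than the Francis lab's actual pipeline.

```python
# Minimal, hypothetical sketch of reward-outcome decoding from LFP spectra.
# All data here are random placeholders; the features and classifier are
# illustrative assumptions, not the published pipeline.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

fs = 1000                                            # assumed sampling rate (Hz)
rng = np.random.default_rng(0)

# lfp_trials: (n_trials, n_channels, n_samples); reward_labels: 1 = reward, 0 = no reward
lfp_trials = rng.standard_normal((200, 16, 2000))    # placeholder recordings
reward_labels = rng.integers(0, 2, size=200)         # placeholder outcomes

def psd_features(trials, fs):
    """Concatenate per-channel Welch PSD estimates (1-100 Hz) into one vector per trial."""
    feats = []
    for trial in trials:
        f, pxx = welch(trial, fs=fs, nperseg=512, axis=-1)
        band = (f >= 1) & (f <= 100)
        feats.append(np.log(pxx[:, band]).ravel())
    return np.array(feats)

X = psd_features(lfp_trials, fs)
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, reward_labels, cv=5)
print(f"Cross-validated reward-decoding accuracy: {scores.mean():.2f}")
```

With real recordings in place of the placeholders, the same pattern of "spectral features in, reward prediction out" is what would feed an adaptive, reinforcement-learning-driven BMI of the sort described above.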

How might this technology be used? In addition to helping paralyzed patients interact with the environment, Francis is optimistic about developing BMIs that “track the user’s psychological state towards therapy and behavioral modification.” In psychology, learning through reward and punishment, known as operant conditioning, is one of the three major kinds of learning. It’s possible that the new BMIs will take advantage of this learning modality to help us develop healthier habits and make better decisions.

An takes this concept to the next level by portraying a future in which BMIs endow us with superhuman abilities and intelligence. “In the short run, BMIs could control home appliances (e.g., computers, cellphones, TVs, indoor lighting, etc.) and any kinds of machines and robots as well as playing video games. In the long run, people could learn rapidly any kinds of skills and abilities as [easily] as just downloading skill knowledge from the Internet as shown in science fiction movies, for example, The Matrix.”


Is The Matrix a look into the (distant) future of BMIs? Source: http://discovermagazine.com/~/media/Images/Issues/2013/March/matrix-2.jpg

Machines that record your brain without your awareness

Every moment of our lives, our brains produce biosignals that correspond to what we are thinking and how we are feeling. Until recently, neuroscientists have only been able to track these signals in artificial laboratory settings. But recent advances in unobtrusive wearable neurotechnologies are changing all that.

By tracking real-time recordings of the brain’s biosignals and combining them with synchronous recordings of what you are seeing and hearing, the new wearables will tell us more than we have ever known about how the human brain works in the real world. The potential applications are as wide as the imagination. For example, such devices could let us actively monitor our performance in work settings and feed that information to BMIs that adaptively tailor solutions to improve performance.

A team at the Neurophotonics Center at Boston University is one of several driving this movement, and they are solving some of the major challenges of taking neuroscience from the controlled laboratory to the noisy real world. Lying at the interface between optics and neuroscience, neurophotonics involves using light to peer inside the living brain.

Alexander von Lühmann, a postdoctoral researcher collaborating with David Boas, founding director of the Neurophotonics Center and a world leader in the field, explains that the last five years have seen a great surge of interest in developing wearable neurotechnologies and getting them outside the lab. One major challenge, however, is detecting clear signals amid the noise of everyday life. “There’s an increase in studies that are trying to get these technologies outside of the lab. There’s a lot of challenges to address, not only on the instrumentation side but also on the signal processing, artifact rejection, and signal analysis part,” says von Lühmann. He is tackling this challenge by integrating functional near-infrared spectroscopy (fNIRS) and electroencephalography (EEG) into wearable equipment.

EEG has the advantage of relatively precise temporal measurement because it picks up the brain’s immediate electrical activity, whereas fNIRS measures blood oxygenation in the brain, a hemodynamic response that lags neural activity by a few seconds. But fNIRS has the upper hand in terms of spatial accuracy, and it is more robust to several measurement artifacts. “EEG picks up a lot of electrophysiological noise, eye movements, neck muscles, etc.,” von Lühmann explains. “With fNIRS you don’t have that kind of interference … there’s still some interference, we can tackle that.” By combining the two technologies in one device, they can take advantage of each technology’s strengths.
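To make the pairing concrete, here is a minimal sketch of how features from the two modalities might be fused into a single vector, with the fNIRS window shifted to account for the hemodynamic delay just described. The sampling rates, channel counts, band choices, and lag value are all assumptions for illustration, not the Boston University group's actual processing chain.

```python
# Hypothetical EEG + fNIRS feature fusion. Arrays are random placeholders;
# sampling rates, band limits, and the hemodynamic lag are assumed values.
import numpy as np
from scipy.signal import welch

fs_eeg, fs_nirs = 250, 10            # assumed sampling rates (Hz)
hemodynamic_lag_s = 5                # assumed delay of the fNIRS response (s)
event_onset_s = 20                   # assumed start of the 10 s window of interest

eeg = np.random.randn(32, fs_eeg * 60)     # 32 EEG channels, 60 s placeholder
hbo = np.random.randn(16, fs_nirs * 60)    # 16 fNIRS channels (HbO), 60 s placeholder

# EEG: fast electrical activity -> alpha-band (8-13 Hz) log power per channel
seg = eeg[:, event_onset_s * fs_eeg:(event_onset_s + 10) * fs_eeg]
f, pxx = welch(seg, fs=fs_eeg, nperseg=fs_eeg, axis=-1)
eeg_feat = np.log(pxx[:, (f >= 8) & (f <= 13)].mean(axis=1))

# fNIRS: slow hemodynamics -> mean HbO over the lag-shifted window
start = (event_onset_s + hemodynamic_lag_s) * fs_nirs
nirs_feat = hbo[:, start:start + 10 * fs_nirs].mean(axis=1)

hybrid_feat = np.concatenate([eeg_feat, nirs_feat])   # one fused feature vector
print(hybrid_feat.shape)                               # (48,)
```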

Boas and von Lühmann are hopeful that the integration of fNIRS will help neurotechnology make the move from portable to wearable, and possibly even unnoticeable. On the difference between portable and wearable, von Lühmann explains, “On the portable side you still have something chunky, but it’s not really tethered so you can wear it in a backpack; wearable goes more towards what you know from Fitbits or those more unobtrusive small devices that aim, at least at some point in the future, to provide measurements without really impacting natural behavior.”

Boas and von Lühmann have developed these technologies as part of a wider plan called “Neuroimaging in the Everyday World (NEW)”. NEW aims to create an unobtrusive wearable device that would provide a multi-modality overview of what is happening in your brain at any moment in your everyday life. You could be at work, talking with a friend, or having lunch, and this device would be recording a video and audio stream of what you’re seeing and hearing while monitoring your brain activity and biosignals including EEG, fNIRS, your head movements, and your eye movements. What you see would be automatically labelled with the aid of computer vision and a cloud-based image annotation service. “It will incorporate technologies that are already out there, but so far not yet combined very well,” says von Lühmann.
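One way to picture the NEW concept in software terms is as a single time-aligned record per subject that holds all of these streams alongside the computer-vision annotations. The sketch below is purely illustrative: the field names, sampling rates, and annotation format are assumptions, not the project's actual data specification.

```python
# Hypothetical container for time-aligned everyday-world recordings.
# Field names, sampling rates, and the annotation format are assumptions,
# not the NEW project's actual specification.
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class AnnotatedSegment:
    t_start: float            # seconds from recording onset
    t_stop: float
    labels: List[str]         # e.g. ["food", "social_interaction"] from computer vision

@dataclass
class EverydayRecording:
    subject_id: str
    eeg: np.ndarray           # (n_eeg_channels, n_samples) sampled at fs_eeg
    fnirs: np.ndarray         # (n_nirs_channels, n_samples) sampled at fs_nirs
    head_motion: np.ndarray   # (3, n_samples) accelerometer traces
    gaze: np.ndarray          # (2, n_samples) eye-tracking coordinates
    fs_eeg: float = 250.0
    fs_nirs: float = 10.0
    annotations: List[AnnotatedSegment] = field(default_factory=list)

    def segments_with(self, label: str) -> List[AnnotatedSegment]:
        """Return every annotated segment carrying the given label."""
        return [seg for seg in self.annotations if label in seg.labels]
```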



A demonstration of integrated neurotechnologies. Credit: Alexander von Lühmann, Boston University


The combination of these technologies into a single device would open doors to neuroscience research and applications that are currently inaccessible. “You use computer vision to automatically annotate the visual stream … and you have maybe 6 hours or maybe 12 hours of brain data … maybe you had 20 people who ran around all day with this system, but now you’re interested in how the brain responds to specific activities – people go grocery shopping, buy a certain product or whatever, or see food, or interact with friends,” von Lühmann explains. “Wouldn’t it be great if you could do a context search in that continuous data and say, ‘I want all the segments in which there was personal interaction,’ and do that across a whole day and across several subjects?”
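In code, the context search von Lühmann describes could look something like the sketch below: scan each subject's annotation list for a target label and pull out the matching slices of brain data. The data layout, label names, and sampling rate are hypothetical placeholders.

```python
# Hypothetical "context search" across subjects' day-long recordings.
# Data layout, label names, and sampling rate are illustrative assumptions.
import numpy as np

FS_EEG = 250                    # assumed EEG sampling rate (Hz)

# Each entry: continuous EEG plus (t_start, t_stop, label) annotations
# produced by an automatic annotation pipeline.
recordings = {
    "subject_01": {
        "eeg": np.random.randn(8, FS_EEG * 1200),   # 20 min placeholder
        "annotations": [(120.0, 180.0, "personal_interaction"),
                        (600.0, 660.0, "grocery_shopping")],
    },
    "subject_02": {
        "eeg": np.random.randn(8, FS_EEG * 1200),
        "annotations": [(300.0, 420.0, "personal_interaction")],
    },
}

def context_search(recordings, label):
    """Yield (subject, EEG slice) for every annotated segment matching the label."""
    for subject, rec in recordings.items():
        for t_start, t_stop, seg_label in rec["annotations"]:
            if seg_label == label:
                i0, i1 = int(t_start * FS_EEG), int(t_stop * FS_EEG)
                yield subject, rec["eeg"][:, i0:i1]

for subject, segment in context_search(recordings, "personal_interaction"):
    print(subject, segment.shape)   # e.g. subject_01 (8, 15000)
```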

With a background in BMIs himself, von Lühmann also offers a perspective on how the interfaces could advance by using the new wearables, since BMIs face the same challenge of working properly once taken out of the laboratory and into the real world. Recent research in neuroergonomics suggests a passive application of BMIs, which could use the new wearables “to assess mental or cognitive states and use that as support for systems that give feedback to the human, like for operator work load monitoring,” explains von Lühmann. Research in this area has already begun, with studies “where basically EEG and now more and more also fNIRS have been used to assess cognitive workload in air traffic controllers, in surgeons in training, in many different areas,” says von Lühmann. “That is in a way an extension of the original understanding [of BMIs] because it is not using the signals for an active control input [e.g., to control a robotic arm or wheelchair] but rather for monitoring and adapting the system to what the operator or person within that system needs.”
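A toy sketch of such a passive BMI loop might look like the following: estimate workload from ongoing EEG and adapt the interface rather than issue a control command. The frontal-theta-to-parietal-alpha ratio, the channel groupings, and the threshold used here are illustrative assumptions, not a validated workload measure from the studies mentioned above.

```python
# Hypothetical passive-BMI loop: estimate workload from EEG and adapt the task.
# The workload index, channel groupings, and threshold are illustrative
# assumptions, not a validated measure from the studies cited in the article.
import numpy as np
from scipy.signal import welch

fs = 250
eeg = np.random.randn(4, fs * 30)          # 4 channels, 30 s placeholder window
frontal, parietal = [0, 1], [2, 3]         # assumed channel groupings

f, pxx = welch(eeg, fs=fs, nperseg=fs, axis=-1)
theta = pxx[:, (f >= 4) & (f < 8)].mean(axis=1)    # 4-8 Hz power per channel
alpha = pxx[:, (f >= 8) & (f < 13)].mean(axis=1)   # 8-13 Hz power per channel

workload_index = theta[frontal].mean() / alpha[parietal].mean()

# Passive use: no robotic-arm command, just a gentle adaptation of the system.
if workload_index > 1.5:                   # threshold chosen arbitrarily
    print("High estimated workload: reduce information density, defer non-critical alerts.")
else:
    print("Workload within normal range: no adaptation needed.")
```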