Privacy in the Brain: The Ethics of Neurotechnology

A monkey playing ping pong is not your classic ethical "dilemma". But watching Pager, the nine-year-old macaque, bat a ball from side to side is a view into a moral minefield.

Pager isn’t physically moving the ball. Instead, as part of a trial by Neuralink, the neurotechnology company run by Elon Musk, he is playing the arcade game Pong on a screen, rewarded with sips of banana smoothie. Nor is he using a joystick. Rather, a pair of Neuralink implants is translating signals from Pager’s motor cortex into onscreen movements of a Pong paddle.
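
For the technically minded, the heart of such a system is a decoder that turns patterns of neural firing into movement commands. The sketch below is purely illustrative – a linear decoder with placeholder weights, not Neuralink's actual pipeline – but it captures the shape of the computation: binned spike counts in, paddle velocity out.

```python
import numpy as np

# Illustrative sketch of motor-BCI decoding; the weights and numbers are
# placeholders, not Neuralink's actual pipeline.

N_CHANNELS = 1024   # the N1 Link records from 1,024 electrode channels
BIN_S = 0.025       # spike counts are binned into short (25 ms) windows

# In a real system the decoder weights are fit to data recorded while the
# subject moves (or imagines moving) a joystick; here they are random.
rng = np.random.default_rng(0)
weights = rng.normal(scale=0.01, size=N_CHANNELS)

def decode_velocity(spike_counts: np.ndarray) -> float:
    """Map one bin of per-channel spike counts to a paddle velocity."""
    firing_rates = spike_counts / BIN_S      # convert counts to spikes/s
    return float(weights @ firing_rates)     # linear read-out

# Simulated stream: ten bins of Poisson spike counts, integrated into a
# paddle position.
paddle_y = 0.0
for _ in range(10):
    counts = rng.poisson(lam=0.5, size=N_CHANNELS)
    paddle_y += decode_velocity(counts) * BIN_S
print(f"paddle position after 10 bins: {paddle_y:.3f}")
```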

If you are a tech nerd, the kind that Musk is unashamedly targeting for recruitment into his growing company, you might marvel at the features of the device, a 1,024-channel electrode array dubbed the "N1 Link" that even boasts Bluetooth control via a mobile phone.

If you are an animal lover, you might feel unsettled, noting that Pager performs his tests perched on a fake tree in front of a projection of his natural forest home.

Most people probably feel a combination of these emotions, like how you might be both horrified and awestruck by a particularly engrossing episode of Black Mirror. This technology is not only surreal but also likely to be coming to a hospital (or tech store) near you very soon.

What is neurotechnology?


The Link is an example of neurotechnology, a broad term for the field of science that marries electronic components to the nervous system. Musk’s Neuralink represents one of the best-funded and, perhaps unsurprisingly for a man who sent a car into space, most eye-catching examples of the field.

But these devices have been quietly worked on for decades. Giles Brindley, a University of Cambridge physiologist, produced a brain implant that could wirelessly stimulate the visual cortex way back in 1965. Developed as a visual prosthesis, the implant generated phosphenes (irregular flashes of light that appear in the visual field) that enabled the user to identify a few letters of the alphabet. Though of little practical use at the time, Brindley’s experiments established the moral basis for neurotechnology – these devices’ worth would surely be undeniable if they could bring sight to the blind and voice to the voiceless. Fifty-six years after Brindley’s first publication, a new study by researchers at the University of California San Francisco (UCSF) took a leap towards that latter goal.

The BRAVO (Brain-Computer Interface Restoration of Arm and Voice) study saw UCSF neurosurgeon Edward Chang and colleagues develop a device that picked up neural signals intended for the vocal tract. Chang surgically implanted this device into a patient, dubbed BRAVO1, who had suffered a stroke more than a decade prior that had robbed him of most of his movement and his voice. The implant was fitted above BRAVO1’s motor cortex and, after years in which his only means of communication had been a pointer device and a touchscreen, BRAVO1 was able to construct sentences using only the power of his mind.
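
The published BRAVO work decoded attempted words from a restricted vocabulary and used a language model to clean up the output. The toy sketch below follows that general recipe – classify each word attempt, then rescore with a prior over word sequences – but the vocabulary, features and weights are all invented stand-ins, not UCSF's code.

```python
import numpy as np

# Toy sketch of vocabulary-constrained speech decoding: classify each
# attempted word, then rescore with a simple bigram prior. Everything
# here (vocabulary, weights, features) is an invented stand-in.

VOCAB = ["hello", "I", "am", "thirsty", "water"]
N_FEATURES = 128
rng = np.random.default_rng(1)

W = rng.normal(size=(len(VOCAB), N_FEATURES))               # placeholder classifier
bigram = np.full((len(VOCAB), len(VOCAB)), 1 / len(VOCAB))  # flat prior stand-in

def classify_word(features: np.ndarray) -> np.ndarray:
    """Softmax distribution over VOCAB for one attempted word."""
    logits = W @ features
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

sentence, prev = [], None
for _ in range(3):                          # decode three attempted words
    features = rng.normal(size=N_FEATURES)  # stand-in for cortical features
    probs = classify_word(features)
    if prev is not None:                    # combine with the language prior
        probs = probs * bigram[prev]
        probs /= probs.sum()
    prev = int(np.argmax(probs))
    sentence.append(VOCAB[prev])
print(" ".join(sentence))
```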

Musk’s Neuralink likewise wants to help restore the senses of disabled people. But Musk is open about his vision for the technology’s end goal – to see these brain implants mass-marketed to the general public, used not only to cure diseases but to enhance the healthy human brain. Who’d want the job of weighing up the costs and benefits?

Enter the ethicists


Marcello Ienca is a professor of bioethics at the Swiss university ETH Zurich. His current grant, held in partnership with German and Canadian researchers, funds an investigation into the interaction between human and artificial cognition via neural interfaces. Ienca recently spoke at a two-day neurotechnology workshop arranged by the Organisation for Economic Co-operation and Development (OECD) that focused on how this nascent technology will interact with society.

“When I started working in this field at the beginning of the previous decade, there were only a handful of tech companies involved in this domain,” says Ienca. Now, things have gotten a bit more complicated. Neurotechnology devices are broadly split into invasive and non-invasive categories.

Invasive technologies, like Neuralink’s, implant electrodes and other hardware directly onto, or into, the brain. From there they can record electrical signals from specific brain regions, or deliver signals to them. The region targeted varies with the aim of the device. Want to feed signals from a camera to bypass a non-functional retina? Aim for the visual cortex. Want to relieve the gait-disrupting tremors experienced by people with Parkinson’s disease? A deep brain stimulation electrode delivering pulses to the subthalamic nucleus (STN) or globus pallidus internus (GPi) could do the trick.

Non-invasive technologies bypass surgery altogether and make their recordings from the surface of the scalp. Aside from the application of a gel to improve the signal connection, these techniques require very little of the user – a huge benefit compared to invasive technologies. But they struggle to achieve the resolution or effectiveness of invasive tech, as the scalp and skull can muddy the picture.
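
To make that trade-off concrete, here is a toy model (illustrative numbers only, not any vendor's real pipeline) of why scalp recordings lean so heavily on signal processing: a clean cortical rhythm arrives at the electrode attenuated and buried in noise, and a band-pass filter is used to dig it back out.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Toy model of scalp attenuation: a clean 10 Hz cortical rhythm is
# weakened and buried in noise, then partially recovered by filtering.

fs = 256                                   # sampling rate, Hz
t = np.arange(0, 2, 1 / fs)
cortical = np.sin(2 * np.pi * 10 * t)      # clean 10 Hz "alpha" rhythm

# Crude stand-in for the skull and scalp: heavy attenuation plus noise.
rng = np.random.default_rng(0)
scalp = 0.1 * cortical + rng.normal(scale=0.5, size=t.size)

# Band-pass around the rhythm of interest (8-12 Hz).
b, a = butter(4, [8, 12], btype="bandpass", fs=fs)
recovered = filtfilt(b, a, scalp)

def power_db(x: np.ndarray) -> float:
    """Mean signal power in decibels (relative units)."""
    return 10 * np.log10(np.mean(x ** 2))

print(f"raw scalp: {power_db(scalp):.1f} dB, filtered: {power_db(recovered):.1f} dB")
```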

By sidestepping the not-so-slight hurdle of brain surgery, non-invasive devices have opened neurotechnology up to a consumer market. Products such as the Flow Neuroscience tDCS brain stimulator and the Muse headband, which aim to reduce stress or improve meditation, have tapped into a “wellness” market that is in overdrive.

Neurotechnology’s glow-up


Anna Wexler, an assistant professor of medical ethics and health policy at the University of Pennsylvania, has focused on these direct-to-consumer devices, which, not so long ago, were mainly the preserve of DIY brain hackers. “What has changed in the last few years is that we’ve been seeing more investment, from venture capitalists and others, in these devices. So, these devices have really gone from things that people would create in their basements or home garages to sleeker products with well-thought-out engineering and design components,” Wexler tells Technology Networks.

The sudden glow-up that non-invasive neurotechnology has gone through has caught regulators by surprise. Wexler, who wrote in 2015 about the regulatory challenges these devices would face, still believes that marketing brain stimulation devices as “wellness” products has allowed some companies to skip strict regulations. Is a device that makes only vague claims about mood-boosting subject to the strict rules of the Food and Drug Administration (FDA), or should it be addressed by consumer product agencies? Even where the answer seems clear, enforcement has lagged: the FDA, Wexler notes, has not taken action against smaller neurotechnology companies that have made explicit medical claims about their devices.

The key ethical issues facing currently available at-home devices are questions of harm, which can be viewed in two different ways. “The first way is as an immediate adverse reaction that is measurable, like a burn on the skin beneath where an electrode was placed,” says Wexler. While users on online neurotechnology forums, like Reddit’s r/tDCS, a community for users of transcranial direct current stimulation devices, have reported burns from self-designed devices, some leading neurotechnology devices hard-code strict limits on how much current can be put through their headbands (sketched below).

The second issue is more fundamental, and harder to assess, Wexler says: “The second part of the harm issues is unintended negative consequences outside of safety, so things like potential effects on cognition if someone uses the device very frequently.” While the presence or absence of a charred chunk of hair after an excessive round of unregulated stimulation is easy to quantify, assessing how someone’s subjective experience of reality might be altered by these devices – especially those that explicitly aim to help people by tweaking their mood – is far harder, and there have not yet been any conclusive studies on this, Wexler explains.
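
A hypothetical illustration of such a firmware interlock is below – the 2.0 mA ceiling and 30-minute session cap are figures commonly seen in tDCS research protocols, used here as stand-ins rather than any particular vendor's specification.

```python
# Hypothetical firmware-style current interlock for a consumer stimulator.
# The limits are illustrative stand-ins, not any vendor's specification.

MAX_CURRENT_MA = 2.0        # ceiling commonly used in tDCS research
MAX_SESSION_S = 30 * 60     # hard stop after a 30-minute session

def clamp_stimulation(requested_ma: float, elapsed_s: float) -> float:
    """Return the current the device will actually deliver."""
    if elapsed_s > MAX_SESSION_S:
        return 0.0                              # session expired: shut off
    return min(max(requested_ma, 0.0), MAX_CURRENT_MA)

print(clamp_stimulation(4.5, elapsed_s=60))     # 2.0 -- clamped to ceiling
print(clamp_stimulation(1.5, elapsed_s=60))     # 1.5 -- within limits
print(clamp_stimulation(1.5, elapsed_s=3600))   # 0.0 -- session cap reached
```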

The human right to brain privacy


In the face of a rapidly changing field, this lag is an oversight that Ienca and the OECD, through its recent Recommendation on Responsible Innovation in Neurotechnology, have aimed to rectify. In Ienca’s view, the goal needs urgent attention. “It's possible to predict that in the decade that just began, we will see neurotechnology become a mainstream technology, something that is not used only by a handful of innovators, but by a large chunk of the world’s population,” he says.

The ethical issues facing Wexler’s at-home devices today will likely be a walk in the park compared to those that will emerge in the near future. If Musk can meet his promise that the Neuralink device could one day help users replay memories or even achieve “superhuman cognition”, ethical issues could arise that are unique in kind and greater in magnitude than those raised by any other form of technology. “This is because the human brain is not just another organ, it is the fundamental biological substrate of mental faculties such as consciousness, memory, language, perception, emotions and so on,” says Ienca.

“All those are things that make us human and therefore addressing the ethics of incorporating brains and machines requires fundamental ethical analysis of notions such as personal identity.” To Ienca, this is a fundamental matter of human rights, about what it means to think for oneself.

Who are we protecting our rights from? The growing neurotechnology market has attracted a series of big tech companies – not only Neuralink, but also companies that made their fortunes in social media, such as Facebook.

To Ienca, social media represents the “what not to do” of tech regulation: “I really think that ethics was not a priority among social media actors,” he says. Social media companies, he says, focused on innovation and return on investment and only considered ethics after numerous privacy scandals forced them to act. These companies, which make almost all of their revenue from selling access to their users’ data to advertisers, exist in an uneasy moral gray area, Ienca explains. “What we see with social media is not that data are stolen from people explicitly, but that people are sharing their data under a weak consent regime and have a very limited awareness about the kind of inferences that can be made based on their data.”

Ienca calls this tactic implicit coercion. “We have consent, but we do not have informed consent, and for some reason, we have accepted that consent without informed consent is OK, socially acceptable and legally justifiable,” he says.

Would users of a brain-computer interface be willing to pay less for their device in exchange for their neural and cognitive data being made available to advertisers? Ienca is skeptical of the idea that a similar approach should be allowed to take hold in neurotechnology. Instead, he believes new privacy paradigms are required.

“We need to improve our models of consent. We need to make opt-in, affirmative consent mandatory for all consumer technology applications. You cannot just presume consent from your end users,” says Ienca.
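
What might that look like in software? The sketch below is a hypothetical default-deny consent ledger – every name in it is invented – where no category of data can be shared unless the user has explicitly opted in, and silence is never read as consent.

```python
from dataclasses import dataclass, field

# Hypothetical default-deny consent ledger: absence of an explicit,
# affirmative grant is never treated as consent. All names are invented.

@dataclass
class ConsentLedger:
    granted: set = field(default_factory=set)

    def opt_in(self, category: str) -> None:
        """Record an explicit, affirmative grant for one data category."""
        self.granted.add(category)

    def revoke(self, category: str) -> None:
        """Withdraw a previously granted category at any time."""
        self.granted.discard(category)

    def may_share(self, category: str) -> bool:
        # Default-deny: only an explicit opt-in permits sharing.
        return category in self.granted

ledger = ConsentLedger()
print(ledger.may_share("neural_data"))    # False: consent is never presumed
ledger.opt_in("usage_metrics")
print(ledger.may_share("neural_data"))    # still False
print(ledger.may_share("usage_metrics"))  # True: explicitly granted
```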

Who regulates neurotechnology?


What’s encouraging to hear from the ethicist is that his worries are not confined to the ivory tower of academia. International organizations are getting involved – Ienca recently completed a report for the Council of Europe, which has launched a five-year strategic action plan on the ethics of biomedicine that includes a chapter on neurotechnology. Governments in Chile and Spain, meanwhile, have moved towards codifying the right to brain data privacy in their laws.

The OECD, rather than providing a set of hard, top-down rules, is attempting to help neurotechnology companies self-regulate. Can we really trust these companies to police themselves? Ienca is hopeful. “A lot of companies in these fields are quite committed to innovating responsibly and to incorporating ethics into their business model, and I think this is a big achievement because it's rather unusual in the history of technology that ethical considerations are incorporated early on in the design and development phase.” He highlights what he sees as a commitment to neuroethics from industry players, even Facebook Reality Labs, which collaborated with Chang on his speech-restoring implant. This is a strong statement of support coming from Ienca, an extremely vocal critic of Facebook’s data ethics on its social media platforms.

What does a private future for neurotechnology look like?


This self-regulation might involve beefed-up consent regimes and terms of use that your average user can actually understand – another innovation that would greatly improve on the standard set by social media companies. Education and awareness among users are also a priority. “For obvious reasons, people have very little knowledge about how valuable their data are, let alone about how valuable their brain data are. It's very important that we disseminate information to empower everyone to make free and competent decisions about their mental space,” says Ienca.

This may all seem like an uncertain and tense future for neurotechnologies that could come to dominate our digital lives, but Ienca is keen to emphasize that these negative ethical concerns are only part of a wider picture – one that includes technologies such as Chang’s, and even non-invasive devices targeting mood disorders: innovations that aim to help people living with chronic conditions that have crushing impacts on their quality of life.

“We have to keep in mind that neuropsychiatric disorders are a major component of the global burden of disease, so we have a moral obligation to accelerate innovation in this domain, especially considering that our current therapeutic solutions through pharmacological therapy are relatively limited,” says Ienca.

This is a dimension of Ienca’s and Wexler’s field that sometimes gets overlooked, but that is relevant to all ethics, not just those of buzzing devices for our brains. “I keep telling my students [that] ethics is not all about things that go wrong. It's also about how to maximize human wellbeing, and by connecting artificial intelligence to the human brain, we can develop better therapeutic, preventative and diagnostic solutions for people in need,” concludes Ienca.