A Second Chance at Sight? Developments in Vision-Inducing Brain Prosthetics


Picture what you did yesterday. Chances are you got up, got ready for the day, and spent most of your time working. Maybe you wrapped things up with a good meal, a workout or a socially distant happy hour with friends. Can you see it? Now imagine that same day, but without the ability to see.

Vision is an integral part of daily human life. While other mammals (think dogs or rodents) rely on whiskers or a keen sense of smell, humans depend primarily on sight to navigate the world.

Most blind individuals who were not born blind have damage to their eyes or optic nerves, meaning the portions of the brain that compute vision are left completely intact, but with limited or no input.

Since the 1960s, scientists have dreamed of a prosthetic that would deliver video input directly to the brain, bypassing the defective eyes or optic nerves. If scientists could format the video input in a way the brain understands, the rest of the neural circuitry could take that input and carry out the essential computations for vision.

A new study from multiple labs, including senior author Daniel Yoshor’s group at Baylor College of Medicine in Houston, takes a significant first step towards this dream prosthetic by using a novel brain stimulation technique to elicit visual images in both sighted and blind participants.

How’d they do it?


The theory behind early visual prosthetics equated the visual system to a computer screen, where each point of a person's visual space corresponds to one pixel on the display. So, to display the letter "z," the neurons (or pixels) representing the points along the letter "z" would need to be stimulated. Previous studies attempted this by stimulating the brain with chronically implanted electrodes, but with limited success.
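To make the pixel analogy concrete, here is a toy sketch in Python (purely illustrative, not code from the study): mark the grid points that make up a letter and treat each one as an electrode to be stimulated simultaneously.

```python
# Toy illustration of the "pixel" view of visual prosthetics: visual
# space as a grid, where displaying a letter means stimulating every
# electrode whose point lies on the letter. The grid is hypothetical.
Z_BITMAP = [
    "#####",
    "   # ",
    "  #  ",
    " #   ",
    "#####",
]

# Collect the (row, col) points that would each map to one electrode.
targets = [
    (row, col)
    for row, line in enumerate(Z_BITMAP)
    for col, ch in enumerate(line)
    if ch == "#"
]
print(len(targets), "points stimulated at once:", targets)
```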

As technology has advanced, however, scientists have found that this pixel-vision analogy doesn't translate to real visual prosthetics. This lost-in-translation reality is likely due to our limited understanding of how the world around us is mapped in the brain. Yoshor and colleagues instead used a novel stimulation technique, a crucial methodological choice for the study's success.

The authors liken this novel method, termed "dynamic current steering," to tracing letters on a person's palm. Rather than stimulating all necessary points at once, they surmised that sequentially stimulating points, as if drawing the letter, would make the visual input more recognizable. This sequential stimulation of occipital lobe electrodes constitutes the "dynamic" portion of "dynamic current steering."
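As a rough sketch of the "dynamic" idea, the snippet below (with hypothetical electrode IDs and timing, not values from the study) steps through stimulation sites in stroke order rather than firing them all at once:

```python
import time

# Hypothetical electrode IDs ordered along the stroke path of a letter;
# neither the IDs nor the dwell time are taken from the study.
Z_TRACE = ["e01", "e07", "e13", "e19", "e25"]

def stimulate(electrode_id):
    """Stand-in for the hardware call that delivers a current pulse."""
    print(f"pulse -> {electrode_id}")

def trace_letter(path, dwell_s=0.05):
    """Stimulate sites one after another, like tracing on a palm."""
    for electrode_id in path:
        stimulate(electrode_id)
        time.sleep(dwell_s)  # brief pause before moving to the next site

trace_letter(Z_TRACE)
```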

So what does "current steering" mean? The research group was able to convey a variety of distinct letters of the English alphabet to participants, even though some participants had as few as eight electrodes implanted. How can so many distinct letters be created from so few stimulation points?

“Current steering” refers to the method Yoshor and his collaborators used to create pseudo stimulation sites. If two nearby electrodes are stimulated with equal intensity, participants do not perceive two separate points. Instead, participants report seeing one visual image midway between the two points of space represented by the stimulated electrodes. These pseudo stimulation sites gave the group of scientists more flexibility to design projected letters, increasing the quality and quantity of letters displayed.

The location of the image can be further manipulated by stimulating the two electrodes disproportionately. If the stimulation intensity is not equal between the two sites, participants perceive a visual image closer to the electrode receiving the stronger stimulation.
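A minimal sketch of this interpolation idea, assuming the percept lands at the current-weighted average of the two electrodes' represented positions (the function name and coordinates are illustrative, not taken from the paper):

```python
def steer_phosphene(pos_a, pos_b, current_a, current_b):
    """Estimate where a single phosphene is perceived when two nearby
    electrodes, representing visual-field positions pos_a and pos_b
    (x, y), are stimulated with the given current intensities."""
    w = current_a / (current_a + current_b)  # relative weight of site A
    return (w * pos_a[0] + (1 - w) * pos_b[0],
            w * pos_a[1] + (1 - w) * pos_b[1])

# Equal currents: the percept appears midway between the two sites.
print(steer_phosphene((0.0, 0.0), (2.0, 0.0), 1.0, 1.0))  # (1.0, 0.0)

# Stronger current at electrode A: the percept shifts toward A.
print(steer_phosphene((0.0, 0.0), (2.0, 0.0), 3.0, 1.0))  # (0.5, 0.0)
```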

The authors combined current steering with the dynamic stimulation sequence to stimulate the occipital lobe of both sighted and blind participants. In both cases, participants were able to accurately perceive distinct letters of the alphabet.

Giving sight to the blind (and the sighted)


Researchers began by testing the novel stimulation method in sighted patients who already had electrodes implanted as part of their treatment for epilepsy. On each trial, electrodes were stimulated in a predetermined sequence to elicit visual images of distinct alphabetic letters. Participants were then asked to trace what they saw. The authors noted a high level of similarity between the intended shape and what subjects traced. Participants were then given a multiple-choice question: identify the letter that had just been presented. Multiple-choice accuracy was quite high, at 66% (compared to a 25% chance rate), suggesting that study participants were largely seeing what the scientists had intended them to perceive.

With such encouraging results, Yoshor and his colleagues tested the same paradigm in blind participants. Again, specific electrodes were stimulated to simulate distinct letters of the alphabet. Researchers quantified the quality of the participants’ traces and found that a whopping 82% of letters were correctly traced. Participants were again asked to identify the presented letter using a multiple-choice format. Subjects correctly identified the letter with 93% accuracy (compared to a 20% chance rate).

The authors tested the same alphabetic letters in both sighted and blind patients using a static, rather than a dynamic, stimulation pattern (i.e. ignoring the sequential aspect of tracing a letter). Participants reported seeing only one large light, instead of a distinct letter, dramatically highlighting the importance of dynamic stimulation for visual prosthetics.

Reality check


The authors report robust accuracy scores using the dynamic current steering method, meaning participants were able to identify the visual images presented the majority of the time. Furthermore, participants were able to recognize and correctly report the letters without any training to acclimate to the stimulation.

Yoshor and his collaborators suggest that the same principle could be used to convey other objects common to daily life, like faces, houses or cars. The long-term goal is to use machine vision algorithms to translate objects in the real world into dynamic stimulation patterns.

Technological advances have allowed scientists to make these significant strides, but the authors say there is still more to be done. Alternative stimulation methods, such as optogenetics or non-invasive focused ultrasound, may improve these approaches further.

Before this sci-fi-esque technology can be used in daily life, scientists need to devise a way to stimulate much more of the visual field. The current study stimulates only a handful of the ~500,000 neurons located in the visual cortex, and the authors stress that scaling up stimulation coverage is the critical next step.

Nevertheless, the field as a whole is actively revisiting these types of visual prosthetics. Yoshor and his collaborators note that four or more groups are currently working to move these prosthetic designs towards the clinic.