

Algorithm Reads Your Mind to Interpret Brain Representation of Faces



Friends, family, colleagues, acquaintances: how does the brain process and recognize the myriad faces we see each day? New research from Caltech shows that the brain uses a simple and elegant mechanism to represent facial identity. The findings suggest a not-too-distant future in which monitoring brain activity could allow researchers to reconstruct what a person is seeing.

The work was done in the laboratory of Doris Tsao (BS '96), professor of biology, leadership chair and director of the Tianqiao and Chrissy Chen Center for Systems Neuroscience, and Howard Hughes Medical Institute (HHMI) Investigator. A paper describing the work appears in the June 1 issue of the journal Cell.

The central insight of the new work is that even though there are infinitely many possible faces, the brain needs only about 200 neurons to uniquely encode any face, with each neuron encoding a specific dimension, or axis, of facial variability. In the same way that red, green, and blue light combine in different proportions to create every color on the spectrum, the responses of these 200 neurons combine in different ways to encode every possible face; the resulting spectrum of faces is called the face space.

Some of these neurons encode aspects of the skeletal shape of the face, such as the distance between the eyes, the shape of the hairline, or the width of the face. Others encode features of the face that are independent of its shape, such as complexion, musculature, or the color of the eyes and hair. Furthermore, each neuron's response is proportional to the strength of the feature it encodes: a neuron might respond most strongly to a large inter-eye distance, at an intermediate level to an average inter-eye distance, and minimally to a small inter-eye distance. However, single neurons are not mapped onto specific nameable features. Instead, each neuron codes a more abstract "direction in face space" that combines different elementary features. By measuring where a face lies along each of these directions, the brain can perceive the identity of the face.
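To make the axis idea concrete, here is a minimal sketch in Python with NumPy. The 50-dimensional face space and the roughly 200 cells come from the study as described here, but the random unit vectors standing in for each cell's preferred axis are purely illustrative assumptions, not measured tuning:

```python
import numpy as np

rng = np.random.default_rng(0)

DIM = 50       # dimensionality of face space (as described in the study)
N_CELLS = 200  # roughly the number of neurons needed to encode a face

# Illustrative assumption: each cell's preferred axis is a random unit
# vector in face space; the real axes are measured from neural recordings.
axes = rng.standard_normal((N_CELLS, DIM))
axes /= np.linalg.norm(axes, axis=1, keepdims=True)

def cell_responses(face):
    """Each cell fires in proportion to the face's projection onto its axis."""
    return axes @ face

face = rng.standard_normal(DIM)   # a hypothetical face, as a point in face space
responses = cell_responses(face)  # 200 numbers jointly encode this identity
```

Any face then corresponds to a unique pattern of 200 responses, just as any color corresponds to a unique mix of red, green, and blue.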

"This new study represents the culmination of almost two decades of research trying to crack the code of facial identity," says Tsao. "It's very exciting because our results show that this code is actually very simple."

In 2003, Tsao and her collaborators discovered that certain regions in the primate brain are most active when a monkey is viewing a face. The researchers dubbed these regions face patches; the neurons inside, they called face cells. Research over the past decade had revealed that different cells within these patches respond to different facial characteristics. For example, some cells respond only to faces with eyes while others respond only to faces with hair.

"But these results were unsatisfying, as we were observing only a shadow of what each cell was truly encoding about faces," says Tsao. "For example, we would change the shape of the eyes in a cartoon face and find that some cells would be sensitive to this change. But cells could be sensitive to many other changes that we hadn't tested. Now, by characterizing the full selectivity of cells to faces drawn from a realistic face space, we have discovered the full code for realistic facial identity."

Two clinching pieces of evidence demonstrate that the researchers have cracked the full code for facial identity. First, once they knew which axis each cell encoded, the researchers were able to develop an algorithm that could decode novel faces from neural responses. In other words, they could show a monkey a new face, measure the electrical activity of face cells in the brain, and recreate the face the monkey was seeing with high accuracy.
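In broad strokes, that decoding step is linear algebra: given the measured responses and each cell's known axis, the face's coordinates can be recovered by solving a linear system. The sketch below, a simplification under the same illustrative assumptions as above rather than the authors' exact algorithm, recovers a face by ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, N_CELLS = 50, 200
axes = rng.standard_normal((N_CELLS, DIM))  # each cell's (assumed) axis

face = rng.standard_normal(DIM)  # the face shown to the monkey
responses = axes @ face          # idealized, noise-free firing rates

# 200 equations, 50 unknowns: the system is overdetermined, so a
# least-squares fit recovers the face's coordinates exactly here.
face_hat, *_ = np.linalg.lstsq(axes, responses, rcond=None)
assert np.allclose(face_hat, face)
```

With real recordings the responses are noisy, so the reconstruction is approximate rather than exact, but the principle is the same.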

Second, the researchers theorized that if each cell were indeed responsible for coding only a single axis in face space, then it should respond in exactly the same way to an infinite number of faces that look extremely different but share the same projection onto that cell's preferred axis. Indeed, Tsao and Le Chang, postdoctoral scholar and first author on the Cell paper, found this to be true.

"In linear algebra, you learn that if you project a 50-dimensional vector space onto a one-dimensional subspace, this mapping has a 49-dimensional null space," Tsao says. "We were stunned that, deep in the brain's visual system, the neurons are actually doing simple linear algebra. Each cell is literally taking a 50-dimensional vector space—face space—and projecting it onto a one-dimensional subspace. It was a revelation to see that each cell indeed has a 49-dimensional null space; this completely overturns the long-standing idea that single face cells are coding specific facial identities. Instead, what we've found is that these cells are beautifully simple linear projection machines."

"Our results could suggest new machine-learning algorithms for recognizing faces and providing new tasks to train networks with," adds Chang. "It gives us a model for understanding how objects in general are coded within a large brain region. One can also imagine applications in forensics where one could reconstruct the face of a criminal by analyzing a witness's brain activity."

Reference:
Chang, L., & Tsao, D. Y. (2017). The Code for Facial Identity in the Primate Brain. Cell, 169(6), 1013-1028.e14.

This article has been republished from materials provided by Caltech. Note: material may have been edited for length and content. For further information, please contact the cited source.