Does the Brain Learn Like a Computer Learns?

Pinpointing how neural activity changes with learning is anything but black and white. Recently, some have posited that learning in the brain, or biological learning, can be thought of in terms of optimization, which is how learning occurs in artificial networks like computers or robots. A new perspectives piece co-authored by Carnegie Mellon University and University of Pittsburgh researchers relates machine learning to biological learning, showing that the two approaches aren’t interchangeable, yet can be harnessed to offer valuable insights into how the brain works.

“How we quantify the changes we see in the brain and in a subject’s behavior during learning is ever-evolving,” says Byron Yu, professor of biomedical engineering and electrical and computer engineering. “It turns out that in machine learning and artificial intelligence, there is a well-developed framework in which something learns, known as optimization. We and others in the field have been thinking about how the brain learns in comparison to this framework, which was developed to train artificial agents to learn.”

The optimization viewpoint suggests that activity in the brain should change during learning in a mathematically prescribed way, akin to how the activity of artificial neurons changes in a specific way when they are trained to drive a robot or play chess.
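
To make the "mathematically prescribed way" concrete, here is a minimal sketch, not taken from the paper, of gradient-descent optimization, the standard rule used to train artificial networks: each weight is nudged in the direction that most reduces a loss function. The toy linear model, the variable names, and the learning rate below are all illustrative assumptions.

```python
import numpy as np

# Minimal sketch (illustrative, not the authors' method): gradient descent as
# the "optimization" rule for an artificial network. The weights change in a
# mathematically prescribed direction: the negative gradient of the loss.

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))          # hypothetical inputs
y = X @ np.array([1.0, -2.0, 0.5])     # hypothetical target outputs
w = np.zeros(3)                        # "synaptic weights" of a linear unit
lr = 0.05                              # learning rate (assumed value)

for step in range(200):
    pred = X @ w                       # network output
    error = pred - y
    grad = 2 * X.T @ error / len(y)    # gradient of the mean squared error
    w -= lr * grad                     # prescribed update: step down the gradient

print("learned weights:", w)           # approaches [1.0, -2.0, 0.5]
```

In this framing, every change to the weights is dictated by the loss being optimized; the perspective piece asks how far changes in biological neural activity during learning follow an analogous prescription.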

“One thing we are interested in understanding is how the learning process unfolds over time, not just looking at a snapshot of before and after learning occurs,” explains Jay Hennig, a recent Ph.D. graduate in neural computation and machine learning at Carnegie Mellon. “In this perspectives piece, we offer three main takeaways that are important to consider when thinking about why neural activity might change throughout learning in ways that cannot be readily explained in terms of optimization.”

The takeaways include: (i) the inflexibility of neural variability throughout learning, (ii) the use of multiple learning processes even during simple tasks, and (iii) the presence of large task-nonspecific activity changes.

“It’s tempting to draw from successful examples of artificial learning agents and assume the brain must do whatever they do,” suggests Aaron Batista, professor of bioengineering at the University of Pittsburgh. “However, one specific difference between artificial and biological learning systems is that the artificial system usually does just one thing and does it really well. Activity in the brain is quite different, with many processes happening at the same time. We and others have observed that there are things happening in the brain that machine learning models cannot yet account for.”

Steve Chase, professor of biomedical engineering at Carnegie Mellon and the Neuroscience Institute, adds, “We see a theme building and a direction for the future. By drawing attention to these areas where neuroscience can inform machine learning and vice versa, we aim to connect them to the optimization view to ultimately understand, on a deeper level, how learning unfolds in the brain.”

Reference:

Hennig JA, Oby ER, Losey DM, Batista AP, Yu BM, Chase SM. How learning unfolds in the brain: toward an optimization view. Neuron. Published online October 13, 2021. doi:10.1016/j.neuron.2021.09.005

This article has been republished from the following materials. Note: material may have been edited for length and content. For further information, please contact the cited source.