Computational Model Helps Understand How the Brain Processes Language

Image credit: jesse orrico on Unsplash (https://unsplash.com/@jessedo81)

Accelerating progress in neuroscience is helping us understand the big picture (how animals behave and which brain areas are involved in bringing about these behaviors) as well as the small picture (how molecules, neurons, and synapses interact). But a huge gap in knowledge remains between these two scales, from the whole brain down to the single neuron.

A team led by Christos Papadimitriou, the Donovan Family Professor of Computer Science at Columbia Engineering, proposes a new computational system to expand the understanding of the brain at an intermediate level, between neurons and cognitive phenomena such as language. The group, which includes computer scientists from the Georgia Institute of Technology and a neuroscientist from the Graz University of Technology, has developed a brain architecture based on neuronal assemblies and demonstrates its use in syntactic processing during language production; their model, published online June 9 in PNAS, is consistent with recent experimental results.

"For me, understanding the brain has always been a computational problem," says Papadimitriou, who became fascinated by the brain five years ago. "Because if it isn't, I don't know where to start."

He was spurred on by Columbia researcher and Nobel laureate Richard Axel, who recently noted, "We do not have a logic for the transformation of neural activity into thought and action." Papadimitriou wondered what would happen if he interpreted this "logic" as a programming language like Python: just as Python manipulates numbers, the brain's logic manipulates populations of neurons.

He and his team developed a computational system, the Assembly Calculus, that encompasses operations on assemblies (large populations of neurons) that appear to be involved in cognitive processes such as imprinting memories, concepts, and words. Just as Python programs can be compiled to machine code and executed, the Assembly Calculus can in principle be translated down to the language of neurons and synapses. The researchers were able to show, both analytically and through simulations, that the system is plausibly realizable at the level of neurons and synapses.
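
To give a concrete sense of what an assembly operation can look like in simulation, the sketch below illustrates "projection", one of the primitives of the Assembly Calculus: an assembly firing in a source area repeatedly stimulates a target area, and winner-take-all selection plus Hebbian plasticity causes a stable assembly to emerge there. This is a minimal Python illustration, not the authors' code; the parameters n (neurons per area), k (assembly size), p (random connection probability), and beta (plasticity increment) follow the paper's model, but the specific values and the simulation loop are illustrative assumptions.

# Minimal illustrative sketch (not the authors' implementation) of the
# Assembly Calculus "projection" primitive, under the paper's model of
# random connectivity, a winner-take-all cap, and Hebbian plasticity.
# All numeric values below are assumptions chosen for illustration.
import numpy as np

rng = np.random.default_rng(0)

n = 1000     # neurons per area
k = 50       # assembly size (winner-take-all cap)
p = 0.05     # probability of a random synapse between two neurons
beta = 0.10  # Hebbian plasticity increment

# Random connectivity: source area A -> target area B, and recurrent B -> B.
W_ab = (rng.random((n, n)) < p).astype(float)
W_bb = (rng.random((n, n)) < p).astype(float)

# An assembly of k neurons firing in area A.
assembly_A = rng.choice(n, size=k, replace=False)
x = np.zeros(n)
x[assembly_A] = 1.0

winners_prev = np.array([], dtype=int)
for step in range(20):
    # Total synaptic input to each neuron in B from A, plus recurrent input
    # from the neurons in B that fired on the previous step.
    input_B = x @ W_ab
    if winners_prev.size:
        y_prev = np.zeros(n)
        y_prev[winners_prev] = 1.0
        input_B += y_prev @ W_bb

    # Cap: only the k most strongly driven neurons in B fire.
    winners = np.argsort(input_B)[-k:]

    # Hebbian update: synapses from neurons that just fired onto the winners
    # are strengthened by a factor of (1 + beta).
    W_ab[np.ix_(assembly_A, winners)] *= 1.0 + beta
    if winners_prev.size:
        W_bb[np.ix_(winners_prev, winners)] *= 1.0 + beta

    overlap = np.intersect1d(winners, winners_prev).size
    print(f"step {step:2d}: overlap with previous winners = {overlap}/{k}")
    if overlap == k:
        break  # the projected assembly in B has stabilized
    winners_prev = winners

Over successive steps the set of winners in the target area overlaps more and more with the previous step's winners; once the overlap is total, a new assembly has, in this toy setting, been "projected" into the target area.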

"So, we have finally articulated our theory about the nature of the 'logic' sought by Axel, and its supporting evidence," says Papadimitriou, who is also a member of the Data Science Institute. "Now comes the hard part, will neuroscientists take our theory seriously and try to find evidence that something like it takes place in the brain, or that it does not?"

With a new three-year grant from the National Science Foundation, the team is now working with experimental neuropsychologists at CUNY to carry out fMRI experiments in humans to check the predictions of their theory regarding language.

Reference: Papadimitriou, C. H., Vempala, S. S., Mitropolsky, D., Collins, M., & Maass, W. (2020). Brain computation by assemblies of neurons. Proceedings of the National Academy of Sciences. https://doi.org/10.1073/pnas.2001893117

This article has been republished from the following materials. Note: material may have been edited for length and content. For further information, please contact the cited source.