Infomorphic Neurons Bring Brain-Like Learning to AI Networks
Infomorphic neurons enable self-organized, independent learning, enhancing AI flexibility and efficiency.

Both the human brain and modern artificial neural networks are extremely powerful. At the lowest level, their neurons work together as rather simple computing units. An artificial neural network typically consists of several layers composed of individual neurons. An input signal passes through these layers and is processed by artificial neurons in order to extract relevant information. However, conventional artificial neurons differ significantly from their biological counterparts in the way they learn. While most artificial neural networks depend on overarching coordination from outside the network in order to learn, biological neurons only receive and process signals from other neurons in their immediate vicinity in the network. Biological neural networks remain far superior to artificial ones in terms of both flexibility and energy efficiency.
The new artificial neurons, known as infomorphic neurons, are capable of learning independently and in a self-organized manner, using only signals from their neighboring neurons. This means that the smallest unit in the network no longer has to be controlled from the outside; instead, it decides for itself which input is relevant and which is not. In developing the infomorphic neurons, the team was inspired by the way the brain works, especially by the pyramidal cells in the cerebral cortex. These also process stimuli from different sources in their immediate environment and use them to adapt and learn. The new artificial neurons pursue very general, easy-to-understand learning goals: “We now directly understand what is happening inside the network and how the individual artificial neurons learn independently”, emphasizes Marcel Graetz from CIDBN.
By defining the learning objectives, the researchers enabled the neurons to find their specific learning rules themselves. The team focused on the learning process of each individual neuron. They applied a novel information-theoretic measure to precisely adjust whether a neuron should seek more redundancy with its neighbors, collaborate synergistically, or try to specialize in its own part of the network's information. “By specializing in certain aspects of the input and coordinating with their neighbors, our infomorphic neurons learn how to contribute to the overall task of the network”, explains Valentin Neuhaus from MPI-DS. With the infomorphic neurons, the team is not only developing a novel method for machine learning, but is also contributing to a better understanding of learning in the brain.
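To give a rough sense of what "redundancy" and "synergy" mean information-theoretically, the sketch below estimates mutual information from discrete samples and uses co-information as a crude indicator of whether a neuron's output relates to its two input streams redundantly or synergistically. This is only an illustration of the underlying quantities, not the authors' actual goal-function framework, and all function names are hypothetical.

```python
import math
from collections import Counter

def entropy(samples, idx):
    """Shannon entropy (bits) of the variables at positions idx."""
    n = len(samples)
    counts = Counter(tuple(s[i] for i in idx) for s in samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def mutual_info(samples, a, b):
    """I(A;B) in bits, estimated from the empirical joint distribution."""
    return entropy(samples, a) + entropy(samples, b) - entropy(samples, a + b)

def co_information(samples, f, c, y):
    """I(Y;F) + I(Y;C) - I(Y;F,C): positive ~ redundant inputs, negative ~ synergy."""
    return (mutual_info(samples, f, y) + mutual_info(samples, c, y)
            - mutual_info(samples, f + c, y))

# Synergy: the output Y is the XOR of its two inputs -- neither input alone
# carries information about Y, but together they determine it completely.
xor = [(f, c, f ^ c) for f in (0, 1) for c in (0, 1)]
print(co_information(xor, (0,), (1,), (2,)))   # -1.0 (purely synergistic)

# Redundancy: both inputs are identical copies of the output.
copy = [(f, f, f) for f in (0, 1)]
print(co_information(copy, (0,), (1,), (2,)))  # 1.0 (purely redundant)
```

A local goal function could then reward a neuron for moving this balance in a desired direction; the paper's actual formulation uses a more refined partial information decomposition rather than plain co-information.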
Reference: Makkeh A, Graetz M, Schneider AC, Ehrlich DA, Priesemann V, Wibral M. A general framework for interpretable neural learning based on local information-theoretic goal functions. Proc Natl Acad Sci USA. 2025;122(10):e2408125122. doi: 10.1073/pnas.2408125122
This article has been republished from the following materials. Note: material may have been edited for length and content. For further information, please contact the cited source.