

The Perfect Recipe for Neuromorphic Systems

Credit: Elisabetta Chicca


In the late 1980s and early 1990s, Carver Mead and colleagues combined basic research in neuroscience with elegant analog circuit design in electronic engineering. This pioneering work on neuromorphic electronic circuits inspired researchers in Germany and Switzerland to explore the possibility of reproducing the physics of real neural circuits using the physics of silicon.

The field of “brain-mimicking” neuromorphic electronics shows great potential not only for basic research but also for commercial applications such as always-on edge computing and the Internet of Things.

In Applied Physics Letters, Elisabetta Chicca of Bielefeld University and Giacomo Indiveri of the University of Zurich and ETH Zurich present their work on understanding how biological neural processing systems carry out computation, along with a recipe for reproducing these computing principles in mixed-signal analog/digital electronics and novel materials.

One of the most distinctive computational features of neural networks is learning, so Chicca and Indiveri are particularly interested in reproducing the adaptive and plastic properties of real synapses. They used both standard complementary metal-oxide-semiconductor (CMOS) electronic circuits and advanced nanoscale memory technologies, such as memristive devices, to build intelligent systems that can learn.
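For readers curious how such synaptic plasticity is commonly modelled in software, below is a minimal, illustrative sketch of a pair-based spike-timing-dependent plasticity (STDP) rule in Python. The function name, learning rates and time constants are arbitrary choices for demonstration only; they are not taken from the authors' circuits or the paper.

    import numpy as np

    # Illustrative sketch only: a pair-based STDP rule, one simple software
    # model of adaptive synapses. All constants are arbitrary demo values.
    A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes
    TAU_PLUS, TAU_MINUS = 20.0, 20.0   # decay time constants (ms)

    def stdp_update(weight, delta_t, w_min=0.0, w_max=1.0):
        """Update a synaptic weight given delta_t = t_post - t_pre (ms)."""
        if delta_t > 0:   # pre spike before post spike -> strengthen synapse
            weight += A_PLUS * np.exp(-delta_t / TAU_PLUS)
        else:             # post spike before (or with) pre spike -> weaken it
            weight -= A_MINUS * np.exp(delta_t / TAU_MINUS)
        return float(np.clip(weight, w_min, w_max))  # keep weight bounded

    # Example: a causal pre-then-post pairing slightly increases the weight.
    w = stdp_update(0.5, delta_t=+10.0)
    print(f"weight after causal pairing: {w:.4f}")

In hardware, the same weight update and bounding would be realized by the physics of the CMOS circuits and memristive devices themselves rather than by explicit arithmetic.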

This work is significant because it can lead to a better understanding of how to implement sophisticated signal processing using extremely low-power and compact devices.

Their key finding is that the apparent disadvantages of these low-power computing technologies, mainly their low precision, high sensitivity to noise and high variability, can actually be exploited to perform robust and efficient computation, much as the brain uses highly variable and noisy neurons to implement robust behavior.

The researchers said it is surprising to see the field of memory technologies, typically concerned with bit-precise high-density device technologies, now looking at animal brains as a source of inspiration for understanding how to build adaptive and robust neural processing systems. It is very much in line with the basic research agenda that Mead and colleagues were following more than 30 years ago.

“The electronic neural processing systems that we build are not intended to compete with the powerful and accurate artificial intelligence systems that run on power-hungry large computer clusters for natural language processing or high-resolution image recognition and classification,” said Chicca.

In contrast, their systems “offer promising solutions for those applications that require compact and very low-power (submilliwatt) real-time processing with short latencies,” Indiveri said.

He said examples of such applications fall within “the ‘extreme-edge computing’ domain, which require a small amount of artificial intelligence to extract information from live or streaming sensory signals, such as for bio-signal processing in wearable devices, brain-machine interfaces and always-on environmental monitoring.”

Reference

Chicca E, Indiveri G (2020). A recipe for creating ideal hybrid memristive-CMOS neuromorphic processing systems. Applied Physics Letters. DOI: https://doi.org/10.1063/1.5142089

This article has been republished from the following materials. Note: material may have been edited for length and content. For further information, please contact the cited source.