
Magnetic Circuits Make Big Data Processing More Energy-efficient


The rapid progression of technology has led to a huge increase in energy usage to process the massive troves of data generated by devices. But researchers in the Cockrell School of Engineering at The University of Texas at Austin have found a way to make the new generation of smart computers more energy efficient.

Traditionally, silicon chips have formed the building blocks of the infrastructure that powers computers. But this research uses magnetic components instead of silicon, revealing how the physics of those magnetic components can cut the energy costs and requirements of training neural networks, algorithms inspired by the human brain that can do things like recognize images and patterns.

"Right now, the methods for training your neural networks are very energy-intensive," said Jean Anne Incorvia, an assistant professor in the Cockrell School's Department of Electrical and Computer Engineering. "What our work can do is help reduce the training effort and energy costs."

Incorvia led the study with first author and second-year graduate student Can Cui. Incorvia and Cui discovered that spacing magnetic nanowires, acting as artificial neurons, in certain ways naturally increases the ability for the artificial neurons to compete against each other, with the most activated ones winning out. Achieving this effect, known as “lateral inhibition,” traditionally requires extra circuitry within computers, which increases costs and takes more energy and space.

Incorvia said their method uses 20 to 30 times less energy than a standard backpropagation algorithm performing the same learning tasks.

Just as human brains contain neurons, new-era computers have artificial versions of these integral nerve cells. Lateral inhibition occurs when the neurons firing the fastest are able to prevent slower neurons from firing. In computing, this cuts down on the energy used to process data.
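The winner-take-all behavior described above can be illustrated with a short software sketch. This is only a conceptual, hypothetical example of lateral inhibition in code; the study itself achieves the effect physically, through the spacing of magnetic nanowires, without any such extra computation.

```python
# Conceptual sketch of lateral inhibition (winner-take-all): the most
# strongly activated neuron fires and suppresses all the others.
# This illustrates the behavior only; it is not the magnetic-nanowire
# mechanism used in the study.

def lateral_inhibition(activations):
    """Return firing outputs: the neuron with the highest activation
    fires (1) while every other neuron is inhibited (0)."""
    winner = max(range(len(activations)), key=lambda i: activations[i])
    return [1 if i == winner else 0 for i in range(len(activations))]

# Three neurons with different activation strengths; only the
# strongest one (index 1) ends up firing.
print(lateral_inhibition([0.2, 0.9, 0.5]))  # → [0, 1, 0]
```

In conventional hardware, implementing this suppression requires dedicated inhibitory circuitry; the study's key result is that appropriately spaced magnetic neurons produce it for free.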

Incorvia explains that the way computers operate is fundamentally changing. A major trend is the concept of neuromorphic computing, which is essentially designing computers to think like human brains. Instead of processing tasks one at a time, these smarter devices are meant to analyze huge amounts of data simultaneously. These innovations have powered the revolution in machine learning and artificial intelligence that has dominated the technology landscape in recent years.

This research focused on interactions between two magnetic neurons and initial results on interactions of multiple neurons. The next step involves applying the findings to larger sets of multiple neurons as well as experimental verification of their findings.

Reference

Cui C, et al. (2020) Maximized Lateral Inhibition in Paired Magnetic Domain Wall Racetracks for Neuromorphic Computing. Nanotechnology. doi: 10.1088/1361-6528/ab86e8

This article has been republished from the following materials. Note: material may have been edited for length and content. For further information, please contact the cited source.