
Spiral Circuits Could Make for More Efficient AI Devices


Read time: 1 minute

Researchers from the Institute of Industrial Science at The University of Tokyo have designed and built specialized computer hardware for artificial intelligence (AI) applications, consisting of stacks of memory modules arranged in a 3D spiral. This research may pave the way for the next generation of energy-efficient AI devices.

Machine learning is a type of AI in which computers are trained on example data to make predictions for new instances. For example, a smart-speaker assistant like Alexa can learn to understand your voice commands, so it can respond even when you ask for something for the first time. However, training AI systems tends to require a great deal of electrical energy, raising concerns about their contribution to climate change.

Now, scientists from the Institute of Industrial Science at The University of Tokyo have developed a novel design that stacks resistive random-access memory modules with oxide semiconductor (IGZO) access transistors in a three-dimensional spiral. Placing on-chip nonvolatile memory close to the processors makes the machine learning training process much faster and more energy efficient, because electrical signals have a much shorter distance to travel than in conventional computer hardware. Stacking multiple layers of circuits is a natural step, since training the algorithm often requires many operations to be run in parallel.

"For these applications, each layer's output is typically connected to the next layer's input. Our architecture greatly reduces the need for interconnecting wiring," says first author Jixuan Wu.

The team made the device even more energy efficient by implementing a system of binarized neural networks. Instead of allowing the parameters to be any number, they are restricted to be either +1 or -1. This both greatly simplifies the hardware and compresses the amount of data that must be stored. The researchers tested the device on a common AI task, interpreting a database of handwritten digits, and showed that increasing the size of each circuit layer could enhance the accuracy of the algorithm, up to a maximum of around 90%.
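To illustrate the idea behind binarization, here is a minimal sketch in Python with NumPy. It is not the authors' implementation; the function names and array shapes are illustrative. Real-valued weights are mapped to +1 or -1, so every multiply in a matrix-vector product reduces to a sign flip and an addition, and storage drops from 32 bits to 1 bit per weight.

```python
import numpy as np

def binarize(weights):
    """Map real-valued weights to +1 / -1 (zeros map to +1)."""
    return np.where(weights >= 0, 1, -1)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 3))   # full-precision weights (illustrative shape)
x = rng.normal(size=3)        # input activations

wb = binarize(w)              # every entry is now +1 or -1
y = wb @ x                    # multiplies reduce to additions/subtractions

# Storage compression: 32-bit floats vs. 1 bit per binarized weight
bits_full = w.size * 32
bits_binary = w.size * 1
```

In hardware, this restriction means a memory cell only needs to hold one of two resistance states, which is a natural fit for the resistive memory modules described above.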

"In order to keep energy consumption low as AI becomes increasingly integrated into daily life, we need more specialized hardware to handle these tasks efficiently," explains Senior author Masaharu Kobayashi.

This work is an important step toward the "internet of things," in which many small AI-enabled appliances communicate as part of an integrated "smart home."

The paper "A Monolithic 3D Integration of RRAM Array with Oxide Semiconductor FET for In-memory Computing in Quantized Neural Network AI Applications" was presented at the VLSI Technology Symposium 2020.

This article has been republished from the following materials. Note: material may have been edited for length and content. For further information, please contact the cited source.