Exascale Computing to Unlock the Mysteries of the Human Brain

Neurons rendered from the analysis of electron microscopy data. The inset shows a slice of data with colored regions indicating identified cells. Tracing these regions through multiple slices extracts the sub-volumes corresponding to anatomical structures of interest. Credit: Nicola Ferrier, Narayanan (Bobby) Kasthuri, and Rafael Vescovi, Argonne National Laboratory

The human brain holds many mysteries. How does cognitive development take place? Why does the brain age? How does it help us learn? What causes brain diseases? Dr. Nicola Ferrier, a senior computer scientist at Argonne National Laboratory, is partnering with researchers from the University of Chicago, Harvard University, Princeton University, and Google in pursuit of these answers. Supported by the Argonne Leadership Computing Facility’s Aurora Early Science Program, the collaborative effort is preparing to use Argonne’s future exascale system to understand the larger structure of the brain, and the ways each minute brain cell, or neuron, connects with others to form the brain’s cognitive pathways. Once that information is unlocked, the team hopes their arduous endeavor will yield insights that benefit humanity, such as potential cures for neural diseases. Of course, many hurdles lie ahead. The team must first understand a “normal” brain state down to the cellular level. As the saying goes, the devil is in the details.

Challenges ahead

Finding answers means that Ferrier and her colleagues around the nation must gather detailed information about the brain’s structure, feed that experimental data into a computational pipeline, and map the connections among neurons. Doing so involves tapping enormous volumes of high-resolution imagery obtained from brain tissue samples viewed under a microscope. “We’re working together now to build specialized algorithms designed to analyze image data from brain tissue. The ‘output’ of the algorithms is what we call a connectome,” noted Ferrier.
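
The article does not spell out what a connectome looks like as data, but one common way to picture it is as a graph whose nodes are neurons and whose edges are synaptic connections. The sketch below is purely illustrative and uses Python’s networkx library; the neuron IDs and attributes are hypothetical placeholders, not the project’s actual schema.

```python
# Minimal sketch: a connectome represented as a directed graph, where nodes are
# identified neurons and edges are synaptic connections. The IDs and attributes
# below are illustrative placeholders, not the project's data model.
import networkx as nx

connectome = nx.DiGraph()

# Each neuron recovered from the segmented imagery becomes a node.
connectome.add_node("neuron_001", cell_type="pyramidal")
connectome.add_node("neuron_002", cell_type="interneuron")

# Each detected synaptic connection between traced neurons becomes an edge.
connectome.add_edge("neuron_001", "neuron_002", synapse_count=3)

# The graph can then be queried, e.g. how many neurons a given cell contacts.
print(connectome.out_degree("neuron_001"))
```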

Ferrier acknowledges the team is still working out the ideal ways to handle those massive data sets. “We have access to troves of data. The big challenge we face is not just obtaining data but managing the sheer volume of it. For example, one cubic centimeter of brain tissue may sound tiny, but analysis of the imagery from that small sample can generate petabytes of data. A teeny sample like that, though, does not give us the big picture understanding we want. If we try to compare two entire brains or multiple brains, that’s a monumental challenge involving exabytes of data.”
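
To make those figures concrete, a back-of-the-envelope estimate helps. The voxel dimensions and one byte per voxel used below are illustrative assumptions typical of electron-microscopy imaging, not numbers from the project itself.

```python
# Rough estimate of raw image data for a cube of brain tissue imaged at
# electron-microscope resolution. The 4 x 4 x 40 nm voxel size and 1 byte per
# voxel are illustrative assumptions, not the project's figures.
voxel_nm = (4, 4, 40)
bytes_per_voxel = 1

def raw_bytes(side_mm: float) -> float:
    """Raw data volume for a cube of tissue with the given side length in mm."""
    side_nm = side_mm * 1e6  # 1 mm = 1,000,000 nm
    voxels = (side_nm / voxel_nm[0]) * (side_nm / voxel_nm[1]) * (side_nm / voxel_nm[2])
    return voxels * bytes_per_voxel

print(f"1 mm^3 of tissue: ~{raw_bytes(1) / 1e15:.1f} PB")   # roughly a petabyte
print(f"1 cm^3 of tissue: ~{raw_bytes(10) / 1e18:.1f} EB")  # thousands of petabytes
```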

Images of brain sections obtained from electron microscopes give scientists access to details at a very granular level. For instance, the images reveal the nuances of cell membranes and even capture details about the mitochondria, which serve as tiny power sources within the neurons.

Today, aspects of the data collection and analysis process require human intervention. However, as the scope of data increases, the approach to tackling it must evolve too. “Ultimately, we want an end-to-end pipeline to evaluate multiple samples at an extreme scale,” said Ferrier. “Deep neural networks can be successful at identifying specific characteristics from images. Artificial intelligence capabilities like deep learning offer us new tools to identify cells and other structures within the images. As a result, we have very accurate and efficient computational tools to assist us in our endeavor. However, at that level, we need to have compute systems and artificial intelligence work independently, without human intervention. Today that is a roadblock we are working hard to circumvent, but we will get there.”
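
As a rough illustration of what identifying cells and other structures with deep learning can involve, the sketch below runs a tiny convolutional network that assigns a class to every pixel of an image tile. The architecture, layer sizes, and class count are arbitrary placeholders, not the team’s production model.

```python
# Minimal sketch of per-pixel (semantic) segmentation of an EM image tile with a
# small convolutional network. Architecture and class count are illustrative only.
import torch
import torch.nn as nn

class TinySegmenter(nn.Module):
    def __init__(self, n_classes: int = 3):  # e.g. background, membrane, mitochondrion
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, n_classes, kernel_size=1),  # per-pixel class scores
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = TinySegmenter()
tile = torch.randn(1, 1, 256, 256)        # one grayscale EM tile
prediction = model(tile).argmax(dim=1)    # predicted class for every pixel
print(prediction.shape)                   # torch.Size([1, 256, 256])
```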

In neuroscience, precision matters. The analysis algorithms must be tested to ensure they align closely with observed data. Ferrier describes the challenge: “Neuroscientists are interested in helping us analyze our level of accuracy and sharing their perspective on what needs to improve. We must adjust our algorithms to avoid incorrect mapping or misinterpretation of neural structures. Accuracy is critical, so our algorithms must have underlying intelligence to help identify errors automatically. With labeled data sets and deep neural networks, our team can evaluate how well the algorithms’ output compares with the information already obtained by neuroscientists. It is their many-year imaging effort which makes possible the data set we use for developing algorithms.”
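
One simple way such a comparison against labeled data can be expressed is an overlap score between the algorithm’s segmentation and the neuroscientists’ ground-truth labels. The intersection-over-union metric below is a generic example of that idea, not necessarily the measure the team uses.

```python
# Sketch of scoring a predicted segmentation against expert-labeled ground truth
# using intersection-over-union (IoU), a common overlap metric. Toy arrays only.
import numpy as np

def iou(predicted: np.ndarray, ground_truth: np.ndarray) -> float:
    """Overlap between two binary masks (1 = cell pixel, 0 = background)."""
    intersection = np.logical_and(predicted, ground_truth).sum()
    union = np.logical_or(predicted, ground_truth).sum()
    return float(intersection) / float(union) if union else 1.0

pred = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]])
truth = np.array([[1, 1, 0], [0, 1, 1], [0, 0, 0]])
print(f"IoU = {iou(pred, truth):.2f}")  # 0.75; low scores flag regions to re-examine
```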

Aurora, the nation’s first exascale HPC system

Given the sheer scope of data sets involved in neural research, work like Ferrier’s requires enormous computing power. “If we want to understand the scope of a human brain, exascale-capable computing is mandatory,” she said.

Fortunately, Argonne National Laboratory is hard at work to make exascale computing a reality. Its forthcoming exascale computer, dubbed Aurora, is expected to exceed the world’s fastest computing systems, reaching performance on the order of a billion-billion calculations per second. While the system will empower the large-scale neuroscience workloads spearheaded by Ferrier, it will also support many other research projects, such as global climate analysis, the study of subatomic particles, and astrophysics.

Aurora’s multi-node prowess will be made possible by some of the most advanced hardware on the planet. Future-generation Intel Xeon Scalable processors and a new Xe architecture will provide the performance and features needed to speed the most demanding workloads, such as advanced visualization, analysis, artificial intelligence, and, of course, neuroscience.

“Building an exascale system of this magnitude involves an enormous team effort. Without the support of companies like Intel and Cray, Aurora could not exist,” noted Ferrier. “Having access to the first exascale system at Argonne means that we can be the first people to address these extremely complicated brain mapping problems. We finally have the resources to tackle challenges which were impossible previously. In our work with neurologists, we need to examine brains at multiple stages of development and take multiple samples at each stage. If each of those samples generates an exabyte of data needing analysis, our work is not simply a data volume challenge. It involves developing the tools to be able to compare them. We need the ability to process vast volumes of data, and that’s what exascale computing will enable. Thanks to Aurora, we can make progress against problems that nobody else can address today.”

The team’s algorithms will examine brain tissue one minuscule “slice” at a time. From there, the algorithm forms a three-dimensional image by stacking together multiple slices. With that information, the system can trace individual neurons in a sample. Per Ferrier, evaluating just one neuron’s branches in this way is a huge job. However, their goal involves mapping millions of neurons and the synapses connecting them. Tracing all those slices and all those neural branches is an arduous task. With a smile, she added, “Ironically, understanding something so tiny is a monumental undertaking.”
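
A minimal sketch of that stack-and-trace idea, substituting off-the-shelf connected-component labeling for the team’s specialized algorithms, might look like the following (toy data, purely illustrative).

```python
# Minimal sketch of the stack-and-trace idea: segmented 2D slices are stacked
# into a 3D volume, and voxels that touch within and across slices are grouped
# into candidate sub-volumes. Real pipelines are far more sophisticated; scipy's
# connected-component labeling is used here only to illustrate the concept.
import numpy as np
from scipy import ndimage

# Toy stack of three 4x4 binary slices (1 = "cell" pixel, 0 = background).
slices = [
    np.array([[1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 1, 0], [0, 0, 0, 0]]),
    np.array([[1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 1, 0], [0, 0, 0, 0]]),
    np.array([[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 1, 1], [0, 0, 0, 0]]),
]
volume = np.stack(slices)  # shape (depth, height, width)

# Label connected 3D regions: voxels linked within and across slices share an ID.
labeled, n_objects = ndimage.label(volume)
print(f"{n_objects} candidate structures traced through the stack")  # 2 here
```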

Beyond neuroscience

“As a computer scientist, I am driven by my passion for developing algorithms. In the past, some algorithms proved very challenging for HPC systems. With Aurora, compute power will not constrain us. From my perspective, the sky’s the limit in terms of development. Fortunately, with all that capability at our disposal, I won’t have to worry anymore whether I’ll be able to run an algorithm in my lifetime,” Ferrier said.

Once perfected, the innovative data capture methods and analysis tools created for her current mapping project will have an impact beyond neuroscience too. According to Ferrier, “Once we have a handle on managing extreme-sized data sets, our approaches will provide important tools for domains of science beyond neuroscience. The problems we solve today will help us in other scientific disciplines too.”

“My favorite thing about the work is that I enjoy collaborating with others with a common passion for science. I have the unique opportunity to apply my skills to help neuroscientists, materials experts, manufacturers, and other researchers trying to unlock the mysteries of science.” Ferrier added, “I find it extremely rewarding that the work we do helps other forms of science, plus I get a unique opportunity to learn about those other fields. That may be the best part of my job.” 

Rob Johnson spent much of his professional career consulting for a Fortune 25 technology company. Currently, Rob owns Fine Tuning, LLC, a strategic marketing and communications consulting company based in Portland, Oregon. As a technology, audio, and gadget enthusiast his entire life, Rob also writes for TONEAudio Magazine, reviewing high-end home audio equipment.

This article was produced as part of Intel’s HPC editorial program, with the goal of highlighting cutting-edge science, research and innovation driven by the HPC community through advanced technology. The publisher of the content has final editing rights and determines what articles are published.