Durham University Seeks to Unlock Mysteries of the Cosmos

A snapshot of the evolution of an individual galaxy. Each point is an individual star, and a super-massive black hole lurks in the central bright spot. Studying the dynamics of stars in galaxies precisely is a crucial step towards understanding the nature of dark matter via its influence on the motion of stars. (Credit: Durham University)


Centuries of science have slowly unraveled the enigmas of astrophysics. Despite those advances, many details remain elusive. What mechanisms drive the formation of galaxies? What physics governs the complex gravitational activity surrounding black holes? What is the nature of dark energy, which scientists believe accounts for roughly 70% of the universe's energy content?

The cosmos does not reveal these secrets easily. A team at Durham University aims to change that with an ambitious open-source project, dubbed EAGLE-XL (Evolution and Assembly of GaLaxies and their Environments), to simulate our universe at a level of detail never attempted before. Modern high-performance computing (HPC) infrastructure, paired with the latest visualization technologies and customized software, gives researchers the opportunity to unlock some of the most perplexing scientific mysteries.

To visualize the dynamics of the universe, the team behind EAGLE-XL, led by Richard Bower, Professor of Cosmology at Durham University, plans to start at the beginning, literally. Their effort aims to model not only the diversity of mass and energy observed across the universe today, but also the nearly uniform state just after the Big Bang billions of years ago. By modeling the universe's expansion from its inception, Bower's team can tap details from the simulations to better interpret the interplay of the profoundly complex physics of hydrodynamics and gravitation. In turn, those revelations provide insights into how massive objects like stars form, and how billions of them converge to form galaxies. The simulations also trace the life cycles of stars, which end in various ways, including spectacular explosions called supernovae or consumption by black holes. Additionally, the resulting information will help reveal how our observations of the universe connect to the theories posed and pursued by researchers today.

Under the Hood

Seeking answers about the density and cooling processes that encourage star formation, Bower's team developed a new simulation code, dubbed SWIFT (SPH With Interdependent Fine-grained Tasking). SWIFT supports Software Defined Visualization (SDVis), an open-source initiative from Intel and industry collaborators that improves the performance, resolution, and efficiency of commonly used, data-centric visualization solutions. As a result, SWIFT can capture the minute details that translate into more significant scientific insights.
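To give a flavor of what fine-grained tasking means in practice, the short C sketch below splits an SPH-style density calculation into one task per cell and lets the runtime balance those tasks across threads. It uses OpenMP tasks rather than SWIFT's own task engine, and the cell counts and kernel are purely illustrative.

```c
/* A minimal sketch of fine-grained, task-based parallelism in the spirit of
 * SWIFT's approach. This is NOT SWIFT's actual task engine: it uses OpenMP
 * tasks, and the cell sizes and density kernel are illustrative only. */
#include <stdio.h>

#define N_CELLS 64
#define N_PART  128   /* particles per cell (illustrative) */

struct cell {
    double x[N_PART];    /* particle positions (1-D for brevity) */
    double rho[N_PART];  /* densities to be accumulated */
};

/* "Self" task: accumulate density contributions between particles in one cell. */
static void density_self(struct cell *c)
{
    for (int i = 0; i < N_PART; i++)
        for (int j = 0; j < N_PART; j++)
            c->rho[i] += 1.0 / (1.0 + (c->x[i] - c->x[j]) * (c->x[i] - c->x[j]));
}

int main(void)
{
    static struct cell cells[N_CELLS];   /* zero-initialized */

    /* Fill cells with some particle positions. */
    for (int c = 0; c < N_CELLS; c++)
        for (int p = 0; p < N_PART; p++)
            cells[c].x[p] = (double)(c * N_PART + p) / (N_CELLS * N_PART);

    /* Each cell becomes an independent task; the runtime load-balances the
     * tasks across threads, much as SWIFT balances its interaction tasks. */
    #pragma omp parallel
    #pragma omp single
    for (int c = 0; c < N_CELLS; c++) {
        #pragma omp task firstprivate(c)
        density_self(&cells[c]);
    }

    printf("rho[0][0] = %g\n", cells[0].rho[0]);
    return 0;
}
```

The appeal of this decomposition is that small, independent tasks keep all cores busy even when some regions of the simulated universe contain far more particles than others.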


A snapshot from a SWIFT simulation showing the complex structures formed by dark matter in the universe. The bright region in the center is a cradle, formed by gravity, where gas can cool and form stars, ultimately building a galaxy containing billions of stars like our Sun. (Credit: Durham University)


Matthieu Schaller, a research associate at Durham’s Institute for Computational Cosmology and Bower’s colleague, is relishing the project’s achievements. “We are finally able to tackle projects which were not possible a couple of years ago. Simply put, the code was not there yet. With much collaborative work, we are now ready to explore new science.”

Durham’s earlier modeling experiments incorporated seven billion particles, occupying the DiRAC supercomputer’s 4,000 cores continuously for more than six weeks, followed by many months of post-processing and visualization on smaller machines. While the SWIFT code already outperforms its predecessor, Gadget, by a factor of 30, every aspect of the simulation software and hardware is continuously scrutinized for further performance gains. With that speed-up, the team at Durham can either run the same simulation 30 times faster while exploring alternative physical models, or run simulations 30 times larger to study rare objects.


Strong-scaling test of SWIFT against the reference code Gadget-2 on a representative galaxy formation problem. SWIFT fully exploits the vectorization capability of the Haswell nodes, and an even larger performance gap is expected on Intel Xeon Skylake processors with their wider vectors and higher memory bandwidth. (Credit: Durham University)


The Intel Parallel Computing Center (Intel PCC) program contributes to Durham University’s work by helping to optimize and parallelize the simulation code so that it can best exploit the capabilities of new Intel Xeon processors.

Combined with ongoing code optimizations, modern processors have produced substantial performance increases for the simulations. According to Schaller, the latest Intel Xeon Scalable processors, with their wider vector registers and higher overall memory bandwidth, exceed past performance numbers by more than 40 times.
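The kind of loop that benefits from those wider vector registers looks roughly like the C sketch below, where a contiguous structure-of-arrays layout lets the compiler map one neighbour to each SIMD lane. The kernel shape and array names are illustrative, not SWIFT's actual data structures.

```c
/* Density of one particle from its list of neighbours, laid out as a
 * structure of arrays so the compiler can auto-vectorize the loop
 * (AVX2 on Haswell, AVX-512 on Skylake). The kernel weight is a
 * simplified, illustrative stand-in for a real SPH kernel. */
#include <stdio.h>

#define N_NEIGH 1024

static float dx[N_NEIGH], dy[N_NEIGH], dz[N_NEIGH], mass[N_NEIGH];

float density(float h2)
{
    float rho = 0.0f;

    /* One neighbour per SIMD lane; the reduction keeps the sum correct. */
    #pragma omp simd reduction(+:rho)
    for (int j = 0; j < N_NEIGH; j++) {
        float r2 = dx[j]*dx[j] + dy[j]*dy[j] + dz[j]*dz[j];
        float w  = (r2 < h2) ? (h2 - r2) * (h2 - r2) * (h2 - r2) : 0.0f;
        rho += mass[j] * w;                 /* simplified kernel weight */
    }
    return rho;
}

int main(void)
{
    for (int j = 0; j < N_NEIGH; j++) {
        dx[j] = dy[j] = dz[j] = 0.001f * j;
        mass[j] = 1.0f;
    }
    printf("rho = %g\n", density(1.0f));
    return 0;
}
```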

Overcoming I/O Bottlenecks

Many additional innovations must converge to meet the herculean challenges of creating a virtual universe. As SWIFT simulations increase in complexity, the data sets the software must handle grow increasingly cumbersome, and the sheer volume of information used and collected in the virtual universe creates bottlenecks for conventional storage processes: the system must pause periodically to write to disk. Even with the HPC-ready capabilities of the Lustre parallel distributed file system for handling enormous data sets, the advanced simulation parameters demand a faster input/output (I/O) process so that meaningful simulation data can be written to disk fast enough for scientists to analyze.

To maximize I/O performance, the Durham team also collaborated with The HDF Group, a non-profit organization focused on advancing open-source data management technologies for researchers. The HDF Group’s development team has exceptional expertise in manipulating massive datasets effectively. Consistent with that mission, the group’s Hierarchical Data Format (HDF5) provides not only a file format but also a library and data model for managing data. By working with the HDF developers, Bower’s team seeks the most advanced ways to accommodate the terabytes of data produced in their simulations.
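As a rough illustration of what writing a snapshot through the HDF5 C library involves, the sketch below stores a block of particle coordinates in a single dataset. The file, group, and dataset names are hypothetical and do not reflect SWIFT's actual snapshot layout or its parallel, tuned write path.

```c
/* A rough sketch of writing particle coordinates with the HDF5 C library.
 * The file name, group name, and dataset name are hypothetical examples. */
#include <stdio.h>
#include <hdf5.h>

#define N_PARTICLES 1000

int main(void)
{
    static double pos[N_PARTICLES][3];        /* particle coordinates */
    for (int i = 0; i < N_PARTICLES; i++)
        for (int k = 0; k < 3; k++)
            pos[i][k] = 0.001 * i + 0.1 * k;

    /* Create the file, a group for one particle type, and a 2-D dataspace. */
    hid_t file = H5Fcreate("snapshot_example.hdf5", H5F_ACC_TRUNC,
                           H5P_DEFAULT, H5P_DEFAULT);
    hid_t grp  = H5Gcreate2(file, "PartType0", H5P_DEFAULT, H5P_DEFAULT,
                            H5P_DEFAULT);
    hsize_t dims[2] = { N_PARTICLES, 3 };
    hid_t space = H5Screate_simple(2, dims, NULL);

    /* Create the dataset and write the whole array in one operation. */
    hid_t dset = H5Dcreate2(grp, "Coordinates", H5T_NATIVE_DOUBLE, space,
                            H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
    H5Dwrite(dset, H5T_NATIVE_DOUBLE, H5S_ALL, H5S_ALL, H5P_DEFAULT, pos);

    H5Dclose(dset);
    H5Sclose(space);
    H5Gclose(grp);
    H5Fclose(file);
    printf("Wrote %d particle positions to snapshot_example.hdf5\n",
           N_PARTICLES);
    return 0;
}
```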

Today, Durham’s DiRAC-2 HPC system is no slouch: thanks to software optimization, it can write 1.5 terabytes of data in a mere 80 seconds (roughly 19 gigabytes per second) when leveraging 256 nodes and 128 object storage targets. At that pace, the team believes it has reached about 90% of the performance limit of its Lustre-based configuration. Still, they keep reaching for a higher bar and routinely run simulations on top HPC systems in Europe as well.

Team Effort

Durham’s next creative approach involves improving data transfer speeds using streaming I/O. Because the EAGLE-XL datasets are so large, storing all of that information is challenging, and analyzing the data on the fly can benefit the overall process considerably. The idea is to monitor each simulation particle and, whenever any of its values changes by more than a pre-defined amount, log that change to a file. Rather than processing entire data sets, the team gains substantial performance by recording the differences immediately and worrying about bulk storage only after a completed run. The approach reduces the time required for post-processing the simulations, ultimately shrinking the time-to-science of the whole simulation process. According to Schaller, “One file per MPI rank [one compute process in a cluster] avoids network pressure and remote storage. Ultimately this makes the entire process faster without losing accuracy. The next step for us is scaling up to use non-volatile memory.”
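A minimal sketch of that per-rank delta logging, assuming a simple binary record format and an illustrative change threshold, might look like the following in C. This is not SWIFT's actual particle logger; the thresholds, particle counts, and file names are made up for the example.

```c
/* Per-rank delta logging: each MPI rank writes only the particles whose
 * values have drifted past a threshold to its own local file. Illustrative
 * sketch only; not SWIFT's actual logger. */
#include <stdio.h>
#include <math.h>
#include <mpi.h>

#define N_LOCAL   1000   /* particles owned by this rank (illustrative) */
#define THRESHOLD 0.05   /* log a particle when its value drifts this much */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* One log file per rank: no shared-file contention, no network pressure. */
    char fname[64];
    snprintf(fname, sizeof fname, "delta_log.rank%04d.bin", rank);
    FILE *log = fopen(fname, "wb");
    if (!log)
        MPI_Abort(MPI_COMM_WORLD, 1);

    double value[N_LOCAL], last_logged[N_LOCAL];
    for (int i = 0; i < N_LOCAL; i++)
        value[i] = last_logged[i] = (double)i / N_LOCAL;

    /* Fake a few "time steps" that drift the particle values. */
    for (int step = 0; step < 10; step++) {
        for (int i = 0; i < N_LOCAL; i++) {
            value[i] += 0.01 * sin(0.1 * step + i);

            /* Only particles that moved past the threshold are written out. */
            if (fabs(value[i] - last_logged[i]) > THRESHOLD) {
                fwrite(&step, sizeof step, 1, log);
                fwrite(&i, sizeof i, 1, log);
                fwrite(&value[i], sizeof value[i], 1, log);
                last_logged[i] = value[i];
            }
        }
    }

    fclose(log);
    MPI_Finalize();
    return 0;
}
```

Because every rank appends only to its own local file, no rank waits on shared storage during the run; the per-rank logs can then be merged or replayed during post-processing.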

In addition to enhanced streaming I/O mechanisms, another crucial step involves Durham’s collaboration with the International Centre for Radio Astronomy Research (ICRAR) at the University of Western Australia (UWA) in Perth. With such massive data sets, on-the-fly analysis proves highly beneficial. Together, the group plans to adapt and optimize UWA’s VELOCIraptor analysis tool, which rapidly identifies galaxies and other interesting structures in astrophysics simulations, for use alongside the other elements of the EAGLE-XL project.

Universal Breakthroughs

Even with these substantial gains, the Durham team doggedly pursues its continual need for speed, including the possibility of additional compute hours on other powerful HPC systems around the globe, such as the Stampede2 system at the Texas Advanced Computing Center, to demonstrate scaling on 40,000 Intel Xeon Scalable processor cores. As long as funding remains available to further the project, EAGLE-XL will continue pushing the boundaries of science.

Alongside many others around the world supporting their efforts, the team at Durham University, including Pedro Gonnet, Aidan Chalk, Peter Draper, James Willis, Josh Borrow, Loic Hausammann and Bert Vandenbroucke, remains dedicated to overcoming the hurdles that still stand in the way of cosmological revelations.

Already, current efforts yield some fascinating details. For example, Bower notes, “We know that black holes will determine the future fate of the Universe.” Schaller also reflects on revelations from the research to date: “We know there is more to the cosmos than gravity and hydrodynamics. The complex physics to accurately describe the universe involves gas cooling, star formation, stellar evolution, and many other phenomena. With astrophysicists’ help developing ‘new physics,’ we can then utilize auto-vectorization to integrate it into our simulation system and extend our scientific efforts even further.”

By building flexibility into SWIFT, astrophysicists can test many simulations of the cosmos based on different theories and different ratios of matter and energy. The process will help scientists explore models that might lead to a universe different from the one in which we live. Might there be parallel universes? If so, what would they look like? Could they, in theory, support life as ours did?

Based on these discoveries and more, the team’s future work may well re-define our understanding of interactions among the mass and energy components comprising our universe, giving us insight into, well, everything. 

Rob Johnson spent much of his professional career consulting for a Fortune 25 technology company. Currently, Rob owns Fine Tuning, LLC, a strategic marketing and communications consulting company based in Portland, Oregon. As a technology, audio, and gadget enthusiast his entire life, Rob also writes for TONEAudio Magazine, reviewing high-end home audio equipment.

This article was produced as part of Intel’s HPC editorial program, with the goal of highlighting cutting-edge science, research and innovation driven by the HPC community through advanced technology. The publisher of the content has final editing rights and determines what articles are published.