The Quest for Exascale: Have the Goal Posts Changed in HPC?

Have Artificial Intelligence and Machine Learning distracted the HPC community from exascale?

In recent years, the High-Performance Computing (HPC) community has been dominated by the quest for exascale systems – that is, systems capable of at least one exaflop, or a billion billion (10^18) calculations per second. Over the last couple of years, however, interest in this quest has been somewhat drowned out by the noise in the computing industry around Artificial Intelligence (AI) and Machine Learning (ML). Is the quest for exascale over?

The term AI, like the word ‘smart’, has embedded itself into our culture, and internet giants such as Amazon and Google have extended AI’s reach into every nook and cranny of our lives. Behind all of this are huge computing resources, which a few years ago would have been the domain of central processing units (CPUs). Today, however, NVIDIA and its Graphics Processing Units (GPUs), and to some extent Field Programmable Gate Arrays (FPGAs), rule the roost in this new world of neural-network-driven services such as Apple’s Siri.

GPUs are the processors that typically handle a system’s graphics. The CPU provides all of the main functionality of the system, such as running Word or Excel, but doesn’t draw the pictures; that task is given to the GPU. For demanding graphics workloads, for example Computer-Aided Design (CAD) or video games, a dedicated GPU is required. Just over a decade ago, it was discovered that GPUs could be put to tasks beyond creating and rendering images: they could be used to accelerate complex calculations. A GPU is a fixed circuit with a “set in stone” design. FPGAs are similar to GPUs but are not fixed; instead they are programmable and can be reconfigured depending on the task and the programming that is loaded.
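As a rough illustration of this kind of general-purpose GPU computing, here is a minimal CUDA sketch (a generic example, not tied to any system discussed here) in which the GPU performs one simple calculation across a million array elements in parallel:

#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread handles one array element; many thousands of threads
// run at once, which is what makes GPUs good at large, regular calculations.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;                      // about a million elements
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));   // memory visible to both CPU and GPU
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);  // offload the work to the GPU
    cudaDeviceSynchronize();                         // wait for the GPU to finish

    printf("y[0] = %f\n", y[0]);                // expect 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}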

How do GPUs and FPGAs fit in with HPC? Computer chip manufacturers have not been standing idly by. IBM’s POWER9 processor, together with tightly integrated NVIDIA GPUs, has produced a computational engine that will power two of the fastest systems on the planet – the Summit and Sierra supercomputers, both of which will be operational this year.

ARM is making its way into the traditional datacenter and HPC market as the Cavium ThunderX2 gains traction. Fujitsu is already planning to use ARM to power the Post-K machine expected in 2020, promised to be a 1,000-petaflop beast.

Intel continues to progress in the market with its x86 processors and appears to have discarded the Xeon Phi, rolling features from the Phi into its Xeon processors – for example, AVX-512. AVX-512, like the earlier AVX instruction sets, is a set of processor instructions that take advantage of the vector units within the processor; a single AVX-512 instruction can, for instance, operate on sixteen single-precision floating-point values at once. And what of Intel’s purchase of FPGA manufacturer Altera, its most expensive acquisition to date? Surely this implies that FPGAs will factor heavily into Intel products going forward.

Each of these ventures will benefit from keeping exascale goals in mind, since systems at this scale all consume a huge amount of power. The good news is that each project is making inroads into producing better performance in a smaller package. As Jensen Huang, President and CEO of NVIDIA, says, “The more GPUs you buy, the more you save” – referring to his company’s new DGX-2, which replaces 300 dual-CPU servers with a single system. Clearly, the potential power saving is on a huge scale.

Still, not all applications lend themselves to being ported to GPUs. Exascale will be achieved by a mixture of processor technologies: a CPU aided by an accelerator (GPUs or something else), since data must be fed to the accelerator by the host CPU, which also communicates with the rest of the system, manages storage and so on.
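To make that host/accelerator relationship concrete, here is another minimal, generic CUDA sketch (illustrative only) in which the CPU stages data to the GPU, launches the work and then collects the results:

#include <cstdio>
#include <cuda_runtime.h>

// Trivial kernel: double every element of the array on the GPU.
__global__ void scale(float *d, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] *= 2.0f;
}

int main() {
    const int n = 1024;
    float h[1024];                     // buffer in host (CPU) memory
    for (int i = 0; i < n; ++i) h[i] = 1.0f;

    float *d;
    cudaMalloc(&d, sizeof(h));         // buffer in device (GPU) memory
    cudaMemcpy(d, h, sizeof(h), cudaMemcpyHostToDevice);   // the CPU feeds the GPU
    scale<<<(n + 255) / 256, 256>>>(d, n);
    cudaMemcpy(h, d, sizeof(h), cudaMemcpyDeviceToHost);   // results come back to the CPU
    cudaFree(d);

    printf("h[0] = %f\n", h[0]);       // expect 2.0
    return 0;
}

The accelerator never acts alone: every byte it processes is staged and retrieved by the host, which is exactly why exascale designs pair CPUs with accelerators rather than replacing them.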

Many of the technologies used within AI can also be used within HPC, as the GPUs, CPUs and interconnect technologies naturally lend themselves to either domain. I believe that even fp16/fp32 tensor cores – specialized units optimized for the mixed-precision matrix arithmetic at the heart of deep learning – will have a place in future HPC applications.
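For the curious, here is a sketch of what programming those tensor cores looks like today, using CUDA’s warp matrix (wmma) API: fp16 inputs are multiplied while accumulating in fp32, one 16x16 tile per warp. This is a generic illustration requiring a Volta-class or newer GPU, not code from any system mentioned above.

#include <cuda_fp16.h>
#include <mma.h>
using namespace nvcuda;

// One warp computes a 16x16 tile of C = A x B on the tensor cores:
// half-precision (fp16) inputs, single-precision (fp32) accumulation.
__global__ void tensor_core_tile(const half *a, const half *b, float *c) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

    wmma::fill_fragment(c_frag, 0.0f);               // start the accumulator at zero
    wmma::load_matrix_sync(a_frag, a, 16);           // load the fp16 input tiles
    wmma::load_matrix_sync(b_frag, b, 16);
    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);  // multiply-accumulate on tensor cores
    wmma::store_matrix_sync(c, c_frag, 16, wmma::mem_row_major);
}

Launched with a single warp – say, tensor_core_tile<<<1, 32>>>(dA, dB, dC), where dA, dB and dC are hypothetical device pointers – this computes one tile; libraries such as cuBLAS tile much larger matrices the same way.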

AI has brought a new class of user to HPC providers: the user who is not computing-proficient. Consider, for example, the English-literature researcher who wants to scan a body of documents to determine whether a particular author wrote the entire work, or whether a university student has plagiarized text from the internet. A safari park recently borrowed ML techniques developed in astrophysics to identify the thermal and chemical compositions of distant stars, repurposing them to locate and count endangered animals hidden in the bush on behalf of game reserves. AI and ML are opening up new avenues of computing for users who haven’t traditionally used HPC.

So, back to my original question: is the quest for exascale over? The exascale goal hasn’t gone away; it just seems to be hidden by the uptake of AI.