How Cloud Computing Has Forever Changed HPC
Article Feb 25, 2019 | by Rob Farber
Supercomputing 2018 provided clear demonstrations that cloud-based High Performance Computing (HPC) has forever changed HPC and is having profound societal impacts, both in the medical community and on HPC supercomputer centers themselves. One example is Frontera, the latest National Science Foundation-funded system at the Texas Advanced Computing Center (TACC). TACC’s Frontera, which will be the fastest supercomputer at any U.S. university and among the most powerful in the world, has a cloud component1. Another example is UberCloud, which received three prestigious HPC community awards at SC18.
UberCloud, in particular, demonstrated that cloud-based HPC is a platform with significant societal impact. In recognition of that impact, a community award was given for an effort, in partnership with India’s National Institute of Mental Health and Neuro Sciences (NIMHANS), that replaced a highly risky brain-invasive procedure for schizophrenia (and potentially Parkinson’s disease, depression, and other brain disorders) with a non-invasive, low-risk, low-cost treatment. Wolfgang Gentzsch, President and Co-Founder of UberCloud, observes, “The use of cloud-based HPC in personalized medicine demonstrates the adoption and acceptance of HPC throughout all aspects of our global society.”
Cloud as a component of modern HPC centers
Meanwhile, Dan Stanzione (Executive Director of the TACC) discussed how cloud-based computing has been incorporated into the Frontera petascale computing system during a pre-release meeting. Stanzione noted that “Giving users access to the cloud means they can experiment with the latest architectures as cloud providers are deploying those all the time.” Feedback from running in the cloud gives TACC valuable information on what they need to consider for future deployments. Frontera will be based on the latest Intel Xeon Scalable Cascade Lake processors to chew through a diverse array of scientific workloads.
The UberCloud path from idea to projects with societal impact
In 2012, UberCloud started The UberCloud Experiment. Each experiment is a free, community-driven effort that gives a team of participants 1,000 hours to explore the end-to-end process of utilizing on-demand HPC resources in the cloud. The experiment gives team members hands-on experience using remote cloud-based computing resources for their own projects.
At the conclusion of the project, team members are required to write a case study describing their experience, the lessons learned, and best practices. Compendiums of these team efforts are available on the UberCloud website. UberCloud experiments span a wide range of HPC efforts including Aerodynamics, Fluid Flow, Multi-physics, Finite Element Analysis, Computational Chemistry, Life Sciences, and Data Analysis2.
In 2012, the idea of running HPC in the cloud was a remarkably visionary concept. At that time, many in the HPC community considered HPC in the cloud as something new, definitely not mainstream, and generally something not worth wasting valuable work time exploring. On the plus side, cloud computing was beginning to get serious attention and demonstrate real potential. For example, Amazon Web Services benchmarked a cloud instantiation that was ranked as the 72nd fastest supercomputer in the world according to the June 2012 TOP500 list3.
It was in this environment that UberCloud began evangelizing The UberCloud Experiment. The response was strong, with over 80 teams participating in the 2012 effort. The interest was global, as these teams were composed of individuals from over 48 countries. Forward-thinking companies also decided to join as sponsors. Intel Corporation, for example, was the first company to sign up and has continued as a sponsor of The UberCloud Experiment to this day.
As of 2018, The UberCloud Experiment has grown to include over 3500 organizations and individuals from more than 70 countries. More than 190 cloud experiment teams have been formed.
The UberCloud Experiment has also attracted companies as participants, which makes the compendiums interesting reading for both academic and enterprise users.
On the enterprise side, The UberCloud Experiment compendiums provide information about the performance of cloud software from major providers such as ANSYS, Siemens, Dassault Systèmes, NUMECA, and more, as well as popular academic packages. Not limited to a single cloud provider, the compendiums include experience running on various UberCloud public cloud, private cloud, and HPC hosting providers including Microsoft, HPE, and Advania. These providers make the UberCloud more predictable, as they provide tried-and-true platforms validated by Intel, Microsoft, ANSYS, and more4.
Enabling non-invasive personalized medicine
A case study in partnership with the National Institute of Mental Health and Neuro Sciences (NIMHANS) in India represents an initial effort to use HPC to create personalized, non-invasive electro-stimulation of the human brain to treat schizophrenia and other neuropsychiatric disorders. Based on results from a Dassault SIMULIA brain simulation, physicians were able to precisely and non-invasively target regions of the brain while leaving the healthy regions of the patient’s brain largely unaffected.
This computation-based procedure replaces a highly risky invasive procedure that provides artificial stimulus via surgical implants or chemical stimulation deep within the patient’s brain and/or spinal cord.
The idea behind the treatment is to increase cortical brain activity in specific brain areas that are under-aroused, or alternatively decrease activity in areas that are overexcited. More information can be found in the paper, “Personalized Neuromodulation: A Computational Workflow to Guide Noninvasive Clinical Treatment of Neurological and Psychiatric Disorders”5.
Bill Mannel (VP and GM, HPC Segment Solutions and Apollo Servers, Data Center Infrastructure Group, and AI at Hewlett Packard Enterprise) wrote, “Operations like the Living Heart Project are uniting industry-leading researchers, doctors, educators, and technology manufacturers to reach a higher standard for personalized medicine. The results are changing lives7.”
Over the past seven years, cloud-based HPC has matured to the point that, in collaboration with established HPC projects, it is changing lives and significantly impacting society. Not just a means to provide additional cycles during periods of excess workload, the cloud provides an infrastructure-free pathway for many HPC users who need to run small problems or even problems that need to scale to the size of a TOP500 supercomputer.
The global participation of over 3,500 enterprise and academic organizations and users in The UberCloud Experiment shows its appeal across HPC. In short, it provides a tool that, in the hands of skilled individuals, can have significant societal impact.
Rob Farber is a global technology consultant and author with an extensive background in HPC and in developing machine learning technology that he applies at national labs and commercial organizations. Rob can be reached at firstname.lastname@example.org.
This article was produced as part of Intel’s HPC editorial program, with the goal of highlighting cutting-edge science, research and innovation driven by the HPC community through advanced technology. The publisher of the content has final editing rights and determines what articles are published.
5. Venkatasubramanian, et al., “Personalized Neuromodulation: A Computational Workflow to Guide Noninvasive Clinical Treatment of Neurological and Psychiatric Disorders,” https://www.3ds.com/fileadmin/PRODUCTS-SERVICES/SIMULIA/Resources-center/PDF/2018-SAoE-Personalized-Neuromodulation.pdf