Ceph Open Source File System Empowers Flatiron Institute’s Extreme Scale HPC


Supporting more than 500 scientists spread across multiple locations, along with the HPC power those researchers need for breakthroughs, is a lot of pressure, but the Flatiron Institute handles it every day.


Flatiron is an internal division of the Simons Foundation dedicated to advancing scientific research. The Institute uses modern computing tools to further knowledge in five fields: astrophysics, biology, mathematics, neuroscience and quantum physics.


Years ago, Flatiron’s need for compute power was minimal. The organization supported only a small number of scientists with a combination of servers and desktop computers. Today, the hundreds of researchers the Institute supports, located in dispersed offices, run highly complex computation, modeling and analytics workloads.



Using data from powerful telescopes, Chris Hayward of the Flatiron Institute’s Center for Computational Astrophysics and collaborators developed a simulation to physically model and visualize galaxy cluster SPT2349-56 and predict how it will change in the future. Simulations such as this one require millions of CPU hours and produce tens of terabytes of data. Image courtesy of Simons Foundation.


The ever-growing needs of these researchers meant Flatiron had to provide far more powerful computing resources, with extreme scalability, storage capacity and speed to handle ever-increasing storage requirements. Given the diversity of Flatiron researchers’ disciplines, the Institute’s HPC architecture also needed to handle very different projects without compromising performance. Astronomy research, for example, generates most of its data during simulations of galaxies or black holes; at the other end of the spectrum, genomic research begins with an enormous volume of input data. Because requirements vary so widely among the scientific disciplines, the Flatiron Institute adopted a novel approach to HPC storage deployment, choosing open-source Ceph software-defined storage as the best fit for its needs.


Tapping the power of Ceph


Ceph is an open source distributed storage system that provides scalable, dependable block, object and file storage in one unified platform. As a Linux Foundation project, Ceph draws contributors from businesses, governments and academic organizations that collaborate to advance and promote the technology.


Given Ceph’s flexibility, many organizations have used it to underpin their HPC storage systems. For example, Hewlett Packard Enterprise (HPE) and SUSE developed reference architectures for the ProLiant and Apollo storage server lines to create a software-defined, certified, enterprise-ready Ceph storage solution. In another scenario, running Red Hat Ceph Storage on QCT servers lets organizations iterate on configurations for varying workloads and scale to thousands of nodes. While these deployments demonstrate the flexibility and scalability Ceph offers, the Flatiron Institute’s HPC environment pushes the CephFS file system to its limits. Ceph gave the Institute unified file (CephFS), block (RBD) and object (RGW) interfaces to its data, along with the freedom to run on a wide variety of hardware.
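To make those interfaces concrete, here is a minimal sketch using Ceph’s official Python bindings (the python3-rados and python3-rbd packages). The pool and image names are hypothetical placeholders, not Flatiron’s configuration, and CephFS, as a POSIX file system, is normally mounted through the kernel or FUSE client rather than driven from bindings like these.

```python
# Minimal sketch of Ceph's object (RADOS) and block (RBD) interfaces.
# Assumes a reachable cluster, a standard /etc/ceph/ceph.conf, and an
# existing pool named "example-pool" (a placeholder, not a real setup).
import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("example-pool")
    try:
        # Object interface: write an object and read it back.
        ioctx.write_full("greeting", b"hello from ceph")
        print(ioctx.read("greeting"))  # b'hello from ceph'

        # Block interface: create a 1 GiB virtual block-device image.
        rbd.RBD().create(ioctx, "example-image", 1024 ** 3)
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```

Whatever the interface, the data lands in the same underlying object store, which is what lets a single cluster serve such different workloads on whatever hardware an organization chooses.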


Ceph offered the Institute benefits that exceeded those of the other HPC storage solutions its team evaluated. As Dr. Ian Fisk, co-director of the Scientific Computing Core at the Flatiron Institute, put it, “When past systems could not meet our scientists’ growing demands, we lost valuable research time. With Ceph behind our HPC system, we now have the scale, performance, and reliability to enable breakthrough science.”


Benefits abound


The Flatiron HPC architecture needed to be future-proof. As the number of scientists grows and their working data sets become larger, the Institute must be able to scale the system rapidly. The team also needed HPC components that meet researchers’ performance and data-integrity requirements. And because Ceph is not tied to a specific set of servers or drives, Flatiron has the freedom to select whatever hardware works best. Ceph clusters can even be upgraded without downtime, so scientists benefit from exceptional availability. Added Dr. Fisk, “For us, it’s all about scale. Data sets can grow exponentially as researchers take on increasingly complex projects. With Ceph and carefully-selected hardware in place, we can grow our storage capacity easily without compromising performance or uptime.”


Ceph Foundation members include premier sponsors and founding companies such as Intel, Red Hat, Samsung, SUSE and Western Digital, and many other companies have joined as general members and supporters. The group remains dedicated to making Ceph faster to deploy, simpler to manage and easier to use for the open source community. As a founding member, for example, Intel has focused its efforts on three key areas over the years. First, the integration of erasure coding brought several advancements for greater storage efficiency. Intel also made important contributions to BlueStore, which offered essential capabilities like consistency groups and techniques that use CPU offloading to accelerate compression and encryption.
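To see why erasure coding improves storage efficiency: rather than keeping three full replicas of every object (3x raw capacity), an erasure-coded pool splits each object into k data chunks plus m coding chunks and can rebuild any lost chunk from the survivors; with k=2 and m=1 the overhead drops to 1.5x. The toy sketch below shows that simplest case with plain XOR parity; Ceph’s actual erasure-code plugins, such as jerasure, implement more general Reed-Solomon codes.

```python
# Toy illustration of erasure coding: k=2 data chunks plus m=1 XOR
# parity chunk tolerate the loss of any single chunk. This is a
# teaching sketch, not Ceph's implementation.

def encode(data: bytes) -> tuple[bytes, bytes, bytes]:
    """Split data into two equal chunks and compute their XOR parity."""
    half = (len(data) + 1) // 2
    a, b = data[:half], data[half:].ljust(half, b"\0")
    parity = bytes(x ^ y for x, y in zip(a, b))
    return a, b, parity

def recover(survivor: bytes, parity: bytes) -> bytes:
    """Rebuild the missing data chunk from the survivor and the parity."""
    return bytes(x ^ y for x, y in zip(survivor, parity))

a, b, parity = encode(b"erasure coding demo!")
assert recover(a, parity) == b  # chunk b lost; rebuilt from a and parity
assert recover(b, parity) == a  # chunk a lost; rebuilt from b and parity
```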


A third contribution makes Ceph easier to manage. Intel developed Virtual Storage Manager (VSM), commonly known in the open source community as Ceph Dashboard. VSM helps OEMs ensure consistency by using pre-defined, standard cluster configurations, and it aids installation and operational reliability while reducing support costs. VSM supports HPC clusters that employ a mix of solid-state drives, SSD-cached HDDs and hard disk drives, ultimately helping HPC administrators organize servers and storage devices by intended use case and performance characteristics.


Other Intel contributions support Rook, a cloud-native storage orchestrator for Kubernetes. Because Rook automates many tasks for storage administrators, it simplifies activities including system monitoring, provisioning, resource management and disaster recovery. These capabilities make Rook extremely valuable, helping distributed storage systems perform tasks like self-healing and automatic scaling of storage services.
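As a rough sketch of how Rook is used in practice: an administrator declares a CephCluster custom resource, and Rook’s operator reconciles it, provisioning monitors and OSDs and then keeping them healthy. The resource is usually applied as YAML with kubectl; it is shown here through the kubernetes Python client for consistency with the earlier examples, and the pared-down spec is an illustrative assumption, not a production configuration.

```python
# Sketch: declare a Rook CephCluster and let the operator reconcile it.
# Assumes Rook's CRDs and operator are already installed in the cluster
# and that a local kubeconfig grants access; the spec is illustrative.
from kubernetes import client, config

config.load_kube_config()

ceph_cluster = {
    "apiVersion": "ceph.rook.io/v1",
    "kind": "CephCluster",
    "metadata": {"name": "rook-ceph", "namespace": "rook-ceph"},
    "spec": {
        "cephVersion": {"image": "quay.io/ceph/ceph:v18"},
        "dataDirHostPath": "/var/lib/rook",
        "mon": {"count": 3},  # three monitors for quorum
        "storage": {"useAllNodes": True, "useAllDevices": True},
    },
}

# Hand the declaration to the API server; from here the operator handles
# provisioning, monitoring, self-healing and scaling of the Ceph daemons.
client.CustomObjectsApi().create_namespaced_custom_object(
    group="ceph.rook.io",
    version="v1",
    namespace="rook-ceph",
    plural="cephclusters",
    body=ceph_cluster,
)
```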


Other advancements provided by Ceph’s founding members include client- and server-side block and object caching for Ceph, which improves average and tail latency by embracing fast storage and memory technologies. Forward-looking Ceph contributions will also support future generations of NVMe, CXL, accelerators and high-performance, low-latency storage use cases.


Over time, the Crimson OSD project aims to enhance Ceph CPU performance and efficiency for scenarios with fast networking devices and new storage and memory technologies like persistent memory and ZNS SSDs.


Expert tip: Start small and grow


Dr. Fisk described a few of Flatiron’s keys to success when working with Ceph. By first testing on a smaller-scale HPC platform, the team found the best ways to manage Ceph, identified bottlenecks and optimized the system for scientific workloads. This "start small" approach helped them scale the system up while avoiding many technical problems.


While open-source systems can run on a wide range of hardware, those components are not necessarily optimized out of the box. To ensure data redundancy and improve the system's uptime, speed and reliability, the Flatiron team adopted best practices and testing methods covering hardware selection and possible failure scenarios.


With the aid of Ceph and 3rd Gen Intel Xeon Scalable processors, Flatiron's HPC system can now read from and write to its more than 4,000 storage drives at speed, meeting the intense demands of researchers’ highly complicated simulations.


Dr. Andras Pataki, senior data scientist, Scientific Computing Core, Flatiron Institute, noted, “Ceph and the Intel Xeon processors provide us an unbeatable combination for HPC. With the latest product iterations in place, we’ve seen two-to-three times faster networking performance, delivering the results our researchers need faster than ever before. Plus, only one disk drive needed replacement in the last five years.”


Time will tell how many breakthroughs the researchers supported by the Flatiron Institute will achieve. But they’ll certainly have all the HPC power they need for many years into the future.


About the author:

Rob Johnson spent much of his professional career consulting for a Fortune 25 technology company. Currently, Rob owns Fine Tuning, LLC, a strategic marketing and communications consulting company based in Portland, Oregon. As a technology, audio, and gadget enthusiast his entire life, Rob also writes for TONEAudio Magazine, reviewing high-end home audio equipment.


This article was produced as part of Intel’s editorial program, with the goal of highlighting cutting-edge science, research and innovation driven by the HPC and AI communities through advanced technology. The publisher of the content has final editing rights and determines what articles are published.