Researchers are tapping Argonne and NCSA supercomputers to tackle the unprecedented amounts of data involved with simulating the Big Bang.
“We talk about building the ‘universe in the lab,’ and simulations are a huge component of that.” – Katrin Heitmann, Argonne cosmologist
What distinguishes the new work from typical workflows is the scale of the computation, the associated data generation and transfer, and the scale and complexity of the final analysis. Researchers also tapped the unique capabilities of each supercomputer: They performed cosmological simulations on the ALCF’s Mira supercomputer and then sent huge quantities of data to Blue Waters at the University of Illinois’ NCSA, which is better suited to the required data analysis tasks because of its balance of processing power and memory.
For cosmology, observations of the sky and computational simulations go hand in hand, as each informs the other. Cosmological surveys are becoming ever more complex as telescopes reach deeper into space and time, mapping out the distributions of galaxies at farther and farther distances, at earlier epochs of the evolution of the universe.
The very nature of cosmology precludes carrying out controlled lab experiments, so scientists rely instead on simulations to provide a unique way to create a virtual cosmological laboratory. “The simulations that we run are a backbone for the different kinds of science that can be done experimentally, such as the large-scale experiments at different telescope facilities around the world,” said Argonne cosmologist Katrin Heitmann. “We talk about building the ‘universe in the lab,’ and simulations are a huge component of that.”
Not just any computer is up to the immense challenge of generating and dealing with datasets that can amount to many petabytes a day, according to Heitmann. “You really need high-performance supercomputers that are capable of not only capturing the dynamics of trillions of different particles, but also doing exhaustive analysis on the simulated data,” she said. “And sometimes, it’s advantageous to run the simulation and do the analysis on different machines.”
Typically, cosmological simulations can only output a fraction of the frames of the computational movie as it is running because of data storage restrictions. In this case, Argonne sent every data frame to NCSA as soon as it was generated, allowing Heitmann and her team to greatly reduce the storage demands on the ALCF file system. “You want to keep as much data around as possible,” Heitmann said. “In order to do that, you need a whole computational ecosystem to come together: the fast data transfer, having a good place to ultimately store that data and being able to automate the whole process.”
In particular, Argonne transferred the data to Blue Waters for analysis immediately after it was produced. The first challenge was to set up the transfer to sustain a bandwidth of one petabyte per day.
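To put that figure in perspective, one petabyte per day works out to roughly 11.6 GB/s of sustained throughput, or about 93 Gb/s on the network. The short calculation below is a back-of-the-envelope sketch, not part of the project’s software; the per-stream rate it assumes is purely illustrative.

    # Back-of-the-envelope estimate of what a 1 PB/day transfer demands.
    PETABYTE = 10**15            # bytes (decimal petabyte)
    SECONDS_PER_DAY = 86_400

    bytes_per_s = PETABYTE / SECONDS_PER_DAY
    gbytes_per_s = bytes_per_s / 10**9        # ~11.6 GB/s sustained
    gbits_per_s = bytes_per_s * 8 / 10**9     # ~93 Gb/s on the wire
    print(f"Sustained rate: {gbytes_per_s:.1f} GB/s ({gbits_per_s:.1f} Gb/s)")

    # Assumed per-stream rate (illustrative only): how many parallel
    # transfer streams would be needed to keep up?
    ASSUMED_STREAM_GBITS = 10.0
    print(f"~{gbits_per_s / ASSUMED_STREAM_GBITS:.0f} streams at {ASSUMED_STREAM_GBITS:.0f} Gb/s each")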
Once Blue Waters performed the first pass of data analysis, it reduced the raw data, with high fidelity, to a manageable size. At that point, researchers sent the data to a distributed repository spanning Argonne, the Oak Ridge Leadership Computing Facility at Oak Ridge National Laboratory and the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory. Cosmologists can access and further analyze the data through a system built by researchers in Argonne’s Mathematics and Computer Science Division in collaboration with Argonne’s High Energy Physics Division.
Argonne and the University of Illinois built one such central repository on the Supercomputing ’16 (SC16) conference exhibition floor in November 2016, with memory units supplied by DDN Storage. The data moved over 1,400 miles to the conference’s SciNet network. The link between the computers used high-speed networking through the Department of Energy’s Energy Sciences Network (ESnet). Researchers sought, in part, to take full advantage of the fast SciNet infrastructure to do real science; typically it is used for technology demonstrations rather than for solving real scientific problems.
“External data movement at high speeds significantly impacts a supercomputer’s performance,” said Brandon George, systems engineer at DDN Storage. “Our solution addresses that issue by building a self-contained data transfer node with its own high-performance storage that takes in a supercomputer’s results and the responsibility for subsequent data transfers of said results, leaving supercomputer resources free to do their work more efficiently.”
The full experiment ran successfully for 24 hours without interruption and led to a valuable new cosmological data set that Heitmann and other researchers started to analyze on the SC16 show floor.
Argonne senior computer scientist Franck Cappello, who led the effort, likened the software workflow that the team developed to accomplish these goals to an orchestra. In this “orchestra,” Cappello said, the software connects individual sections, or computational resources, to make a richer, more complex sound.
He added that his collaborators hope to improve the performance of the software to make the production and analysis of extreme-scale scientific data more accessible. “The SWIFT workflow environment and the Globus file transfer service were critical technologies to provide the effective and reliable orchestration and the communication performance that were required by the experiment,” Cappello said.
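To give a concrete flavor of that orchestration, the sketch below uses the Globus Python SDK to submit an asynchronous transfer of a single simulation frame between two endpoints. The endpoint IDs, paths and access token are placeholders rather than the project’s actual configuration, and in the real experiment such transfers were driven from the SWIFT workflow rather than a standalone script.

    import globus_sdk

    # Placeholder endpoint IDs -- illustrative only, not the actual
    # Mira/Blue Waters configuration.
    SOURCE_ENDPOINT = "SOURCE-ENDPOINT-UUID"
    DEST_ENDPOINT = "DEST-ENDPOINT-UUID"

    # Authenticate with a previously obtained Globus transfer token.
    authorizer = globus_sdk.AccessTokenAuthorizer("TRANSFER-ACCESS-TOKEN")
    tc = globus_sdk.TransferClient(authorizer=authorizer)

    def ship_frame(frame_path, dest_path):
        """Submit an asynchronous Globus transfer for one simulation frame."""
        tdata = globus_sdk.TransferData(
            tc, SOURCE_ENDPOINT, DEST_ENDPOINT,
            label="cosmology frame transfer",
            sync_level="checksum",   # verify integrity of the copied frame
        )
        tdata.add_item(frame_path, dest_path)
        task = tc.submit_transfer(tdata)
        return task["task_id"]

    # Example: hand off a (hypothetical) frame as soon as the simulation writes it.
    task_id = ship_frame("/sim/output/frame_0042.dat", "/analysis/incoming/frame_0042.dat")
    print(f"Submitted Globus transfer task {task_id}")

Submitting transfers asynchronously in this way lets the simulation keep writing new frames while earlier ones are still in flight, which is the property an automated, sustained petabyte-per-day pipeline depends on.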
“The idea is to have data centers like we have for the commercial cloud. They will hold scientific data and will allow many more people to access and analyze this data, and develop a better understanding of what they’re investigating,” said Cappello, who also holds an affiliate position at NCSA and serves as director of the international Joint Laboratory on Extreme Scale Computing, based in Illinois. “In this case, the focus was cosmology and the universe. But this approach can aid scientists in other fields in reaching their data just as well.”
Argonne computer scientist Rajkumar Kettimuthu and David Wheeler, lead network engineer at NCSA, were instrumental in establishing the configuration that achieved this performance. Maxine Brown from the University of Illinois provided the Sage environment to display the analysis results at extreme resolution. Justin Wozniak from Argonne developed the whole workflow environment using SWIFT to orchestrate and perform all operations.
Source: Argonne