

Video: Managing large-scale cosmology simulations with Parsl and Singularity

In this video from the Singularity User Group, Rick Wagner from Globus presents: Managing large-scale cosmology simulations with Parsl and Singularity.

In preparation for the Large Synoptic Survey Telescope (LSST), we are working with dark energy researchers to simulate images that are similar to the raw exposures that will be generated by the telescope. To do so, we use the imSim software package to create images based on catalogs of astronomical objects, taking into account the effects of the atmosphere, optics, and telescope. In order to produce data comparable to what the LSST will create, we must scale the imSim workflow to process tens of thousands of instance catalogs, each containing millions of astronomical objects, and to simulate the output of the LSST's 189 CCDs, comprising 3.1 gigapixels of imaging data.
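The 3.1-gigapixel figure follows directly from the sensor count: the LSST focal plane carries 189 science CCDs, each roughly 4096 × 4096 pixels. A quick sanity check of the arithmetic:

```python
# Consistency check of the numbers quoted above: 189 CCDs at
# 4096 x 4096 pixels each (the approximate LSST sensor format).
pixels = 189 * 4096 * 4096
print(pixels)  # 3170893824, i.e. ~3.1 gigapixels per full exposure
```

Every simulated exposure therefore produces on the order of 3.1 billion pixels, which is why the workflow has to scale across thousands of nodes.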

To address these needs, we have developed a Parsl-based workflow that coordinates the execution of imSim on input instance catalogs and for each sensor. We package the imSim software inside a Singularity container so that it can be developed independently, packaged to include all dependencies, trivially scaled across thousands of computing nodes, and seamlessly moved between computing systems. The Parsl workflow is responsible for processing instance catalogs, determining how to pack simulation workloads onto compute nodes, and orchestrating the invocation of imSim in the Singularity containers deployed to each node.
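The packing step described above can be sketched in a few lines. This is an illustration, not the project's actual code: it greedily groups the 189 per-sensor tasks of one instance catalog into node-sized batches, where the per-node capacity and the container invocation shown in the comment are assumptions made for the example.

```python
# Illustrative sketch (not the authors' workflow code): pack per-sensor
# imSim tasks into batches, one batch per compute node.

def pack_sensors(n_sensors, sensors_per_node):
    """Group sensor indices 0..n_sensors-1 into per-node batches."""
    return [
        list(range(start, min(start + sensors_per_node, n_sensors)))
        for start in range(0, n_sensors, sensors_per_node)
    ]

# 189 CCDs per instance catalog (from the text); assume each node can
# simulate 64 sensors concurrently (a hypothetical capacity).
batches = pack_sensors(189, 64)
print(len(batches))  # 3 node-sized batches: 64 + 64 + 61 sensors

# In the real workflow, each batch would be handed to a Parsl app that
# runs imSim inside the node's Singularity container, e.g. via a
# command like "singularity exec imsim.sif ..." (flags hypothetical).
```

The same idea generalizes across catalogs: Parsl dispatches one such batch per node and the container image guarantees each node sees an identical imSim environment.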

To date, the simulation workflow has consumed more than 30M core hours using 4K nodes (256K cores) on Argonne's Theta supercomputer and 2K nodes (128K cores) on NERSC's Cori supercomputer. The use of Singularity not only enabled efficient scaling and seamless conversion to support other container technologies, but was also an integral part of our development process. It significantly reduced the complexity of developing and managing the execution of a workflow as part of a multi-institution collaboration, and it removed much of the difficulty associated with execution on heterogeneous supercomputers.

Rick Wagner is a Professional Services Manager at Globus. Most recently, he was the HPC Systems Manager at the San Diego Supercomputer Center and his starting point in research was computational astrophysics.

Check out our insideHPC Events Calendar
