El Capitan, the National Nuclear Security Administration’s first exascale supercomputer, will likely be the world’s most powerful computer when the system is deployed in Fall 2024. But answering some of science’s most challenging questions will take more than cutting-edge hardware. Discover how the software architecture and storage systems that will drive El Capitan’s performance […]
How Will El Capitan Run? Software and Storage Solutions Powering NNSA’s First Exascale Supercomputer
Podcast: Spack Helps Automate Deployment of Supercomputer Software
In this Let’s Talk Exascale podcast, Todd Gamblin from LLNL describes how the Spack flexible package manager helps automate the deployment of software on supercomputer systems. “After many hours building software on Lawrence Livermore’s supercomputers, in 2013 Todd Gamblin created the first prototype of a package manager he named Spack (Supercomputer PACKage manager). The tool caught on, and development became a grassroots effort as colleagues began to use the tool.”
Podcast: Software Deployment and Continuous Integration for Exascale
In this Let’s Talk Exascale podcast, Ryan Adamson from Oak Ridge National Laboratory describes how his role at the Exascale Computing Project revolves around software deployment and continuous integration at DOE facilities. “Each of the scientific applications that we have depends on libraries and underlying vendor software,” Adamson said. “So managing dependencies and versions of all of these different components can be a nightmare.”
Video: Managing HPC Software Complexity with Spack
Greg Becker from LLNL gave this talk at the MVAPICH User Group. “Spack is an open-source package manager for HPC. This presentation will give an overview of Spack, including recent developments and a number of items on the near-term roadmap. We will focus on Spack features relevant to the MVAPICH community; these include Spack’s virtual package abstraction, which is used for API-compatible libraries such as MPI implementations; package-level compiler wrappers; and packages that modify other packages’ build environments.”
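To make the virtual package abstraction concrete, here is a brief sketch of how MPI is handled on the Spack command line. The commands below are standard Spack usage; the package names (`hdf5`, `mvapich2`, `openmpi`) are examples from the public Spack package repository.

```shell
# "mpi" is a virtual package in Spack: any MPI implementation can
# provide it, and dependents ask for "mpi" rather than a specific one.

# Build HDF5 against whichever MPI provider Spack concretizes by default:
spack install hdf5

# Pin the MPI provider explicitly with the ^ dependency sigil:
spack install hdf5 ^mvapich2
spack install hdf5 ^openmpi

# Show the fully concretized spec -- including which package will
# satisfy the virtual "mpi" dependency -- without installing anything:
spack spec hdf5 ^mvapich2
```

Because the abstraction lives at the spec level, the same package recipe builds unmodified against any API-compatible MPI implementation.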
Spack – A Package Manager for HPC
Todd Gamblin from LLNL gave this talk at the Stanford HPC Conference. “Spack is a package manager for cluster users, developers and administrators. Rapidly gaining popularity in the HPC community, Spack, like other HPC package managers, was designed to build packages from source. This talk will introduce some of the open infrastructure for distributing packages, the challenges of providing binaries for a large package ecosystem, and what we’re doing to address those problems.”
Binary Packaging for HPC with Spack
Todd Gamblin from LLNL gave this talk at FOSDEM’18. “This talk will introduce binary packaging in Spack and some of the open infrastructure we have planned for distributing packages. We’ll talk about challenges to providing binaries for a combinatorially large package ecosystem, and what we’re doing in Spack to address these problems. We’ll also talk about challenges for implementing relocatable binaries with a multi-compiler system like Spack.”
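As a rough sketch of the binary-packaging workflow the talk describes, the commands below show Spack’s build-cache mechanism. Note that `spack buildcache` subcommand names and flags have varied across Spack releases, and the mirror URL here is a placeholder, so treat this as illustrative rather than exact; `spack buildcache --help` on your installed version is authoritative.

```shell
# Register a binary mirror (the URL is a placeholder, not a real cache):
spack mirror add mymirror https://example.com/spack-cache

# Create a relocatable binary package from an already-installed spec
# (subcommand spelling varies by release; newer Spack uses "push"):
spack buildcache create mymirror zlib

# On another machine pointing at the same mirror, install from the
# cache instead of rebuilding from source:
spack buildcache install zlib
```

The relocation challenge mentioned in the talk arises here: the cached binaries embed install paths and RPATHs from the build machine, which Spack must rewrite at install time on the destination system.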
Spack: A Package Manager for Supercomputers, Linux, and macOS
“HPC software is becoming increasingly complex. The space of possible build configurations is combinatorial, and existing package management tools do not handle these complexities well. Because of this, most HPC software is built by hand. This talk introduces “Spack”, an open-source tool for scientific package management which helps developers and cluster administrators avoid having to waste countless hours porting and rebuilding software.” A tutorial video on using Spack is also included.
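For readers new to the tool, a minimal first session looks like the following. These are the standard, documented bootstrap and install commands; `zlib` is just a small example package.

```shell
# Get Spack and activate its shell integration:
git clone https://github.com/spack/spack.git
. spack/share/spack/setup-env.sh

spack install zlib   # build zlib and all of its dependencies from source
spack find           # list everything installed so far
spack load zlib      # make the installation visible in the current shell
```

No root access is required, which is part of why the tool suits both individual developers and cluster administrators.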
RCE Podcast: Spack Package Management Tool
“Spack is designed to support multiple versions and configurations of software on a wide variety of platforms and environments. It was designed for large supercomputing centers, where many users and application teams share common installations of software on clusters with exotic architectures, using libraries that do not have a standard ABI. Spack is non-destructive: installing a new version does not break existing installations, so many configurations can coexist on the same system.”
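The non-destructive, many-configurations model described above is driven by Spack’s spec syntax, sketched here. The version and compiler values are illustrative; the sigils (`@` version, `%` compiler, `+`/`~` variants, `^` dependencies) are Spack’s documented syntax.

```shell
# Each distinct spec concretizes to its own install prefix, so all of
# these can coexist on one system without clobbering each other:

spack install hdf5@1.10.7 %gcc       # a specific version, built with GCC
spack install hdf5@1.10.7 %clang     # same version, different compiler
spack install hdf5 +mpi ^mvapich2    # variant enabled, with a chosen MPI
spack install hdf5 ~mpi              # variant disabled: a serial build

spack find -lv hdf5                  # list all hdf5 installs with hashes
                                     # (-l) and variants (-v)
```

Because every configuration is addressed by its full spec (and a hash of it), installing a new variant never breaks an existing one, which is exactly the property the quote highlights for shared supercomputing centers.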