Video: Introducing the 125 Petaflop Sierra Supercomputer


In this video, researchers from Lawrence Livermore National Laboratory describe Sierra, LLNL’s next-generation supercomputer. Sierra will provide computational resources that are essential for nuclear weapon scientists to fulfill the National Nuclear Security Administration’s stockpile stewardship mission through simulation in lieu of underground testing.

“The IBM-built advanced technology high-performance system is projected to provide four to six times the sustained performance and be at least seven times more powerful than LLNL’s current most advanced system, Sequoia, with a 125 petaFLOP/s peak. At approximately 11 megawatts, Sierra will also be about five times more power efficient than Sequoia. By combining two types of processor chips—IBM’s Power 9 processors and NVIDIA’s Volta GPUs—Sierra is designed for more efficient overall operations and is expected to be a promising architecture for extreme-scale computing.”
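The power-efficiency claim can be sanity-checked with simple arithmetic. The Sierra figures (125 petaFLOP/s peak, ~11 MW) come from the article; the Sequoia figures used below are approximate public numbers and are an assumption, not from the article:

```python
# Back-of-the-envelope check of the "about five times more power
# efficient" claim. Note that petaFLOP/s per megawatt is numerically
# equal to gigaFLOP/s per watt (1e15 / 1e6 = 1e9).
SIERRA_PEAK_PFLOPS = 125      # from the article
SIERRA_POWER_MW = 11          # from the article
SEQUOIA_PEAK_PFLOPS = 20.1    # assumed public figure, not from the article
SEQUOIA_POWER_MW = 7.9        # assumed public figure, not from the article

sierra_eff = SIERRA_PEAK_PFLOPS / SIERRA_POWER_MW      # ~11.4 GFLOP/s per watt
sequoia_eff = SEQUOIA_PEAK_PFLOPS / SEQUOIA_POWER_MW   # ~2.5 GFLOP/s per watt
ratio = sierra_eff / sequoia_eff

print(f"Sierra: {sierra_eff:.1f} GF/W, Sequoia: {sequoia_eff:.1f} GF/W, "
      f"efficiency ratio ~{ratio:.1f}x")
```

With these assumed Sequoia numbers the ratio works out to roughly 4.5x, consistent with the article's "about five times" figure.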

The new system is part of the CORAL (Collaboration of Oak Ridge, Argonne, and Livermore) procurement, a first-of-its-kind collaboration between ORNL, Argonne, and LLNL to acquire three pre-exascale high-performance computing systems for delivery in the 2017 timeframe. CORAL was established by DOE to leverage supercomputing investments, streamline procurement processes, and reduce the costs of developing supercomputers.

The design for Sierra uses IBM Power architecture processors connected by NVLink to NVIDIA Volta graphics processing units (GPUs). NVLink is an interconnect bus that provides higher bandwidth than the traditional Peripheral Component Interconnect Express (PCIe) used for attaching hardware devices, allowing the CPU and GPU coherent, direct access to each other's memory. The machine will be connected with a Mellanox InfiniBand network using a fat-tree topology, a versatile network design that can be tapered to match the bandwidth available.
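To put rough numbers on the NVLink-versus-PCIe comparison: the figures below are nominal per-direction bandwidths commonly cited for PCIe 3.0 and Volta-generation NVLink 2.0, and are assumptions for illustration, not values taken from the article:

```python
# Rough comparison of nominal per-direction GPU attach bandwidth.
# All three constants are assumed public figures, not from the article.
PCIE3_X16_GBS = 16          # PCIe 3.0 x16, ~16 GB/s per direction
NVLINK2_PER_LINK_GBS = 25   # NVLink 2.0, ~25 GB/s per direction per link
VOLTA_NVLINK_LINKS = 6      # a Volta V100 exposes up to 6 NVLink links

nvlink_total = NVLINK2_PER_LINK_GBS * VOLTA_NVLINK_LINKS  # aggregate GB/s

print(f"NVLink aggregate: {nvlink_total} GB/s vs "
      f"PCIe 3.0 x16: {PCIE3_X16_GBS} GB/s "
      f"(~{nvlink_total / PCIE3_X16_GBS:.1f}x)")
```

Under these assumptions, a fully linked Volta GPU has roughly an order of magnitude more aggregate bandwidth to the processor than a single PCIe 3.0 x16 slot would provide.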

Sierra is expected to be fully installed and accepted in Fiscal Year 2018.
