ARM goes Big: HPE Builds Petaflop Supercomputer for Sandia

Today HPE announced plans to deliver the world’s largest Arm supercomputer. As part of the Vanguard program, Astra, the new Arm-based system, will be used by the NNSA to run advanced modeling and simulation workloads for addressing areas such as national security, energy and science. “By introducing Arm processors with the HPE Apollo 70, a purpose-built HPC architecture, we are bringing powerful elements, like optimal memory performance and greater density, to supercomputers that existing technologies in the market cannot match,” said Mike Vildibill, vice president, Advanced Technologies Group, HPE.

Behind the Scenes – HPC at Sandia

In this video, Sandia engineers provide a behind-the-scenes look at the lab’s efforts centered around High Performance Computing. “The Sandia team supports researchers who solve critical national and global problems – a challenging job with high impact results. Our unique mission responsibilities in the nuclear weapons program create a foundation from which we leverage capabilities, enabling us to solve complex national security problems.”

Exascale Computing Project Selects Co-Design Center for Graph Analytics

The Exascale Computing Project (ECP) has selected its fifth Co-Design Center to focus on Graph Analytics — combinatorial (graph) kernels that play a crucial enabling role in many data analytic computing application areas as well as several ECP applications. Initially, the work will be a partnership among PNNL, Lawrence Berkeley National Laboratory, Sandia National Laboratories, and Purdue University.

Exascale Computing Project Announces $48 Million to Establish Four Exascale Co-Design Centers

Today the Department of Energy’s Exascale Computing Project (ECP) announced that it has selected four co-design centers as part of a four-year, $48 million funding award. The first year is funded at $12 million, allocated evenly among the four award recipients. “By targeting common patterns of computation and communication, known as ‘application motifs,’ we are confident that these ECP co-design centers will knock down key performance barriers and pave the way for applications to exploit all that capable exascale has to offer.”

Argonne to Develop Applications for ECP Exascale Computing Project

Today Argonne announced that the Lab is leading a pair of newly funded applications projects for the Exascale Computing Project (ECP). The announcement comes on the heels of news that ECP has funded a total of 15 application development proposals for full funding and seven proposals for seed funding, representing teams from 45 research and academic organizations.

The Challenges and Rewards of Stockpile Stewardship

Charles W. Nakhleh from LANL presented this talk at the 2016 DOE NNSA SSGF Annual Program Review. “This talk will explore some of the future opportunities and exciting scientific and technological challenges in the National Nuclear Security Administration Stockpile Stewardship Program. The program’s objective is to ensure that the nation’s nuclear deterrent remains safe, secure and effective. Meeting that objective requires sustained excellence in a variety of scientific and engineering disciplines and has led to remarkable advances in theory, experiment and simulation.”

NNSA Unleashes Advanced Computing Capabilities to Serve Researchers at Three National Labs

NNSA’s next-generation Penguin Computing clusters based on Intel SSF are bolstering “capacity” computing capability at the Tri Labs. “With CTS1 installed in April, the NNSA scientists can continue their stewardship research and management on some of the most advanced commodity clusters the Tri Labs have acquired, ensuring the safety, security, and reliability of the nation’s nuclear stockpile.”

Interview: Penguin Computing Lands Biggest Open Compute Contract Ever for HPC

The Open Compute Project got a major endorsement in the HPC space with news of NNSA’s pending deployment of Tundra clusters from Penguin Computing. To learn more, we caught up with Dan Dowling, Penguin’s VP of Engineering Services.

Penguin Computing to Build 7-9 Petaflops of Open Compute Clusters for NNSA

Today the National Nuclear Security Administration (NNSA) announced a contract with Penguin Computing for a set of large-scale Open Compute HPC clusters. With 7-to-9 Petaflops of aggregate peak performance, the systems will be installed as part of NNSA’s tri-laboratory Commodity Technology Systems program. Scheduled for installation starting next year, the systems will bolster computing for national security at Los Alamos, Sandia and Lawrence Livermore national laboratories.

Video: Looking to the Future of NNSA Supercomputing

In this video, Douglas P. Wade from NNSA describes the computational challenges the agency faces in the stewardship of the nation’s nuclear stockpile. As the Acting Director of the NNSA Office of Advanced Simulation and Computing, Wade looks ahead to future systems on the road to exascale computing.