The Exascale Computing Project (ECP) has selected its fifth Co-Design Center to focus on Graph Analytics — combinatorial (graph) kernels that play a crucial enabling role in many data analytic computing application areas as well as several ECP applications. Initially, the work will be a partnership among PNNL, Lawrence Berkeley National Laboratory, Sandia National Laboratories, and Purdue University.
Today the Department of Energy’s Exascale Computing Project (ECP) announced that it has selected four co-design centers as part of a four-year, $48 million funding award. The first year is funded at $12 million, to be allocated evenly among the four award recipients. “By targeting common patterns of computation and communication, known as ‘application motifs’, we are confident that these ECP co-design centers will knock down key performance barriers and pave the way for applications to exploit all that capable exascale has to offer.”
Today Argonne announced that the Lab is leading a pair of newly funded application development projects for the Exascale Computing Project (ECP). The announcement comes on the heels of news that ECP has selected a total of 15 application development proposals for full funding and seven proposals for seed funding, representing teams from 45 research and academic organizations.
Charles W. Nakhleh from LANL presented this talk at the 2016 DOE NNSA SSGF Annual Program Review. “This talk will explore some of the future opportunities and exciting scientific and technological challenges in the National Nuclear Security Administration Stockpile Stewardship Program. The program’s objective is to ensure that the nation’s nuclear deterrent remains safe, secure and effective. Meeting that objective requires sustained excellence in a variety of scientific and engineering disciplines and has led to remarkable advances in theory, experiment and simulation.”
NNSA’s next-generation Penguin Computing clusters based on Intel SSF are bolstering “capacity” computing capability at the Tri Labs. “With CTS1 installed in April, NNSA scientists can continue their stewardship research and management on some of the most advanced commodity clusters the Tri Labs have acquired, ensuring the safety, security, and reliability of the nation’s nuclear stockpile.”
The Open Compute Project got a major endorsement in the HPC space with news of NNSA’s pending deployment of Tundra clusters from Penguin Computing. To learn more, we caught up with Dan Dowling, Penguin’s VP of Engineering Services.
Today the National Nuclear Security Administration (NNSA) announced a contract with Penguin Computing for a set of large-scale Open Compute HPC clusters. With 7-to-9 Petaflops of aggregate peak performance, the systems will be installed as part of NNSA’s tri-laboratory Commodity Technology Systems program. Scheduled for installation starting next year, the systems will bolster computing for national security at Los Alamos, Sandia and Lawrence Livermore national laboratories.
In this video, Douglas P. Wade from NNSA describes the computational challenges the agency faces in the stewardship of the nation’s nuclear stockpile. As the Acting Director of the NNSA Office of Advanced Simulation and Computing, Wade looks ahead to future systems on the road to exascale computing.
This week Lawrence Livermore National Laboratory broke ground on a modular and sustainable supercomputing facility that will provide a flexible infrastructure able to accommodate the Laboratory’s growing demand for HPC.