The U.S. National Energy Research Scientific Computing Center today unveiled the Perlmutter HPC system, a beast of a machine powered by 6,159 Nvidia A100 GPUs and delivering 4 exaflops of mixed-precision performance. Perlmutter is based on the HPE Cray Shasta platform with the Slingshot interconnect; it is a heterogeneous system with both GPU-accelerated and CPU-only nodes. The system […]
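As a rough sanity check on the headline number (our arithmetic, not NERSC's, and it assumes the figure counts FP16 Tensor Core peak with structured sparsity): each A100 delivers roughly 312 teraflops of dense FP16 Tensor Core throughput, or 624 teraflops with sparsity, so 6,159 GPUs × 624 TFLOPS ≈ 3.8 exaflops, consistent with the quoted 4 exaflops of mixed-precision performance.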
Super-connected HPC: Superfacility Links National Lab Research Sites
By Mike May, on behalf of DEIXIS: Computational Science at the National Laboratories. High-performance computing (HPC) is only as valuable as the science it produces. To that end, a National Energy Research Scientific Computing Center (NERSC) project at Lawrence Berkeley National Laboratory has been expanding its reach through a superfacility – “an experimental facility […]
NERSC, ALCF, Codeplay Partner on SYCL GPU Compiler
The National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory (LBNL) and the Argonne Leadership Computing Facility (ALCF) are working with Codeplay Software to enhance the capabilities of the LLVM-based SYCL GPU compiler for Nvidia A100 GPUs. The collaboration is designed to help NERSC and ALCF users, along with the HPC community in general, produce […]
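For readers unfamiliar with SYCL, the sketch below shows the kind of single-source C++ kernel such a compiler builds for Nvidia GPUs. It is a generic SYCL 2020 vector-add example assembled for illustration, not code from the NERSC/ALCF/Codeplay project, and it assumes a SYCL implementation with a working CUDA backend.

    #include <sycl/sycl.hpp>
    #include <iostream>
    #include <vector>

    int main() {
      const size_t n = 1024;
      std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

      // Select the default device; with a CUDA-enabled SYCL compiler
      // this can resolve to an Nvidia GPU such as the A100.
      sycl::queue q{sycl::default_selector_v};

      {
        sycl::buffer<float> ba(a.data(), sycl::range<1>(n));
        sycl::buffer<float> bb(b.data(), sycl::range<1>(n));
        sycl::buffer<float> bc(c.data(), sycl::range<1>(n));

        q.submit([&](sycl::handler& h) {
          sycl::accessor xa(ba, h, sycl::read_only);
          sycl::accessor xb(bb, h, sycl::read_only);
          sycl::accessor xc(bc, h, sycl::write_only, sycl::no_init);
          // Element-wise vector add across n work-items on the device.
          h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
            xc[i] = xa[i] + xb[i];
          });
        });
      } // buffers go out of scope here, copying results back to the host

      std::cout << "c[0] = " << c[0] << "\n"; // expect 3
      return 0;
    }

With a CUDA-enabled DPC++/Clang build, compiling with -fsycl -fsycl-targets=nvptx64-nvidia-cuda directs the kernel at the Nvidia backend; exact flags vary by toolchain version.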
The Hyperion-insideHPC Interviews: NERSC’s Jeff Broughton on the End of the Top500 and Exascale Begetting Petaflops in a Rack
The career of NERSC’s Jeff Broughton extends back to HPC ancient times (1979) when, fresh out of college, he was promoted to a project management role at Lawrence Livermore National Laboratory – a big job for a young man. Broughton has taken on big jobs in the ensuing 40 years. In this interview, he talks about such […]
The Hyperion-insideHPC Interviews: Los Alamos’ Gary Grider Argues Efficiency in HPC Is King and Laments Simulation’s ‘Raw Deal’
It might surprise you to know that Gary Grider, HPC Division Leader at Los Alamos National Laboratory, is less interested in FLOPS than efficiency. In this interview, he explains why “FLOPS” hasn’t appeared in Los Alamos RFPs over the last decade. He also talks about his greatest HPC concern: decreasing interest in classical simulation in […]
Job of the Week: HPC Storage Infrastructure Engineer at NERSC
NERSC is seeking an HPC Storage Infrastructure Engineer for its Storage Systems Group. This group is responsible for architecting, deploying, and supporting the high-performance parallel storage systems relied upon by NERSC’s 7,000 scientific users to conduct basic scientific research across a wide range of disciplines. “The HPC Storage Infrastructure Engineer will work closely with approximately eight other storage systems and software engineers in this group to support and optimize hundreds of petabytes of parallel storage that is served to thousands of clients at terabytes per second.”
Perlmutter supercomputer to include more than 6,000 NVIDIA A100 GPUs
NERSC is among the early adopters of the new NVIDIA A100 Tensor Core GPU announced by NVIDIA this week. More than 6,000 of the A100 chips will be included in NERSC’s next-generation Perlmutter system, which is based on an HPE Cray Shasta supercomputer that will be deployed at Lawrence Berkeley National Laboratory later this year. “Nearly half of the workload running at NERSC is poised to take advantage of GPU acceleration, and NERSC, HPE, and NVIDIA have been working together over the last two years to help the scientific community prepare to leverage GPUs for a broad range of research workloads.”
NERSC Finalizes Contract for Perlmutter Supercomputer
NERSC has moved another step closer to making Perlmutter — its next-generation GPU-accelerated supercomputer — available to the science community in 2020. In mid-April, NERSC finalized its contract with Cray — which was acquired by Hewlett Packard Enterprise (HPE) in September 2019 — for the new system, a Cray Shasta supercomputer that will feature 24 […]
Video: Why Supercomputers Are A Vital Tool In The Fight Against COVID-19
In this video from Forbes, Horst Simon from LBNL describes how supercomputers are being used for coronavirus research. “Computing is stepping up to the fight in other ways too. Some researchers are crowdsourcing computing power to try to better understand the dynamics of the protein and a dataset of 29,000 research papers has been made available to researchers leveraging artificial intelligence and other approaches to help tackle the virus. IBM has launched a global coding challenge that includes a focus on COVID-19 and Amazon has said it will invest $20 million to help speed up coronavirus testing.”
NERSC Rolls Out New Community File System for Next-Gen HPC
NERSC recently unveiled its new Community File System (CFS), a long-term data storage tier developed in collaboration with IBM that is optimized for capacity and manageability. “In the next few years, the explosive growth in data coming from exascale simulations and next-generation experimental detectors will enable new data-driven science across virtually every domain. At the same time, new nonvolatile storage technologies are entering the market in volume and upending long-held principles used to design the storage hierarchy.”