NERSC Computer Scientist Wins First Corones Award

Today the Krell Institute announced that Rebecca Hartman-Baker, a computer scientist at the Department of Energy’s (DOE’s) National Energy Research Scientific Computing Center (NERSC), is the inaugural recipient of the James Corones Award in Leadership, Community Building and Communication. Hartman-Baker leads the User Engagement Group at NERSC, a DOE Office of Science user facility based at Lawrence Berkeley National Laboratory. A selection committee representing the DOE national laboratories, academia and Krell cited Hartman-Baker’s “broad impact on HPC training; her hands-on approach to building a diverse and inclusive HPC user community, particularly among students and early-career computational scientists; and her mastery in communicating the excitement and potential of computational science.”

ISC 2019 Recap from Glenn Lockwood

In this special guest feature, Glenn Lockwood from NERSC shares his impressions of ISC 2019 from an I/O perspective. “I was fortunate enough to attend the ISC HPC conference this year, and it was a delightful experience from which I learned quite a lot. For the benefit of anyone interested in what they have missed, I took the opportunity on the eleven-hour flight from Frankfurt to compile my notes and thoughts over the week.”

GPU Hackathon Gears Up for Future Perlmutter Supercomputer

NERSC recently hosted its first user hackathon to begin preparing key codes for the next-generation architecture of the Perlmutter system. Over four days, experts from NERSC, Cray, and NVIDIA worked with application code teams to help them gain new understanding of the performance characteristics of their applications and optimize their codes for the GPU processors in Perlmutter. “By starting this process early, the code teams will be well prepared for running on GPUs when NERSC deploys the Perlmutter system in 2020.”

Video: Exascale Deep Learning for Climate Analytics

Thorsten Kurth and Josh Romero gave this talk at the GPU Technology Conference. “We’ll discuss how we scaled the training of a single deep learning model to 27,360 V100 GPUs (4,560 nodes) on the OLCF Summit HPC System using the high-productivity TensorFlow framework. This talk is targeted at deep learning practitioners who are interested in learning what optimizations are necessary for training their models efficiently at massive scale.”

CosmoGAN Neural Network to Study Dark Matter

As cosmologists and astrophysicists delve deeper into the darkest recesses of the universe, their need for increasingly powerful observational and computational tools has expanded exponentially. From facilities such as the Dark Energy Spectroscopic Instrument to supercomputers like Lawrence Berkeley National Laboratory’s Cori system at NERSC, they are on a quest to collect, simulate, and analyze […]

Video: Simulations of Antarctic Meltdown Should Send Chills on Earth Day

In this video, researchers investigate the millennial-scale vulnerability of the Antarctic Ice Sheet (AIS) due solely to the loss of its ice shelves. Starting at the present-day, the AIS evolves for 1000 years, exposing the floating ice shelves to an extreme thinning rate, which results in their complete collapse. The visualizations show the first 500 […]

NERSC Taps NVIDIA Compiler Team for Perlmutter Supercomputer

NERSC has signed a contract with NVIDIA to enhance GPU compiler capabilities for Berkeley Lab’s next-generation Perlmutter supercomputer. “We are excited to work with NVIDIA to enable OpenMP GPU computing using their PGI compilers,” said Nick Wright, the Perlmutter chief architect. “Many NERSC users are already successfully using the OpenMP API to target the manycore architecture of the NERSC Cori supercomputer. This project provides a continuation of our support of OpenMP and offers an attractive method to use the GPUs in the Perlmutter supercomputer. We are confident that our investment in OpenMP will help NERSC users meet their application performance portability goals.”

ClimateNet Looks to Machine Learning for Global Climate Science

Pattern recognition tasks such as classification, localization, object detection and segmentation have remained challenging problems in the weather and climate sciences. Now, a team at the Lawrence Berkeley National Laboratory is developing ClimateNet, a project that will bring the power of deep learning methods to identify important weather and climate patterns via expert-labeled, community-sourced open datasets and architectures.

NERSC Hosts GPU Hackathon in Preparation for Perlmutter Supercomputer

NERSC recently hosted a successful GPU Hackathon event in preparation for their next-generation Perlmutter supercomputer. Perlmutter, a pre-exascale Cray Shasta system slated to be delivered in 2020, will feature a number of new hardware and software innovations and is the first supercomputing system designed with both data analysis and simulations in mind. Unlike previous NERSC systems, Perlmutter will use a combination of nodes with only CPUs, as well as nodes featuring both CPUs and GPUs.

Moving Mountains of Data at NERSC

Researchers at NERSC face the daunting task of moving 43 years’ worth of archival data across the network to new tape libraries, a whopping 120 petabytes! “Even with all of this in place, it will still take about two years to move 43 years’ worth of NERSC data. Several factors contribute to this lengthy copy operation, including the extreme amount of data to be moved and the need to balance user access to the archive.”