Supercomputing and the Scientist: How HPC and Analytics are transforming experimental science

In this video from DataTech19, Debbie Bard from NERSC presents: Supercomputing and the scientist: How HPC and large-scale data analytics are transforming experimental science. “Debbie Bard leads the Data Science Engagement Group at NERSC. NERSC is the mission supercomputing center for the US Department of Energy, and supports over 7000 scientists and 700 projects with supercomputing needs.”

Supercomputing Post-Wildfire Water Availability

A new study by scientists at Lawrence Berkeley National Laboratory (Berkeley Lab) uses a numerical model of an important watershed in California to shed light on how wildfires can affect large-scale hydrological processes, such as stream flow, groundwater levels, and snowpack and snowmelt. The team found that post-wildfire conditions resulted in greater winter snowpack and subsequently greater summer runoff as well as increased groundwater storage.

HPC Innovation Excellence Award Showcases Physics-based Scientific Discovery

A collaboration that includes researchers from NERSC was recently honored with an HPC Innovation Excellence Award for their work on “Physics-Based Unsupervised Discovery of Coherent Structures in Spatiotemporal Systems.” The award was presented in June by Hyperion Research during the ISC19 meeting in Frankfurt, Germany.

Designing Future Flash Storage Systems for HPC and Beyond

Glenn Lockwood from NERSC gave this talk at the Samsung Forum. “In this talk, we will compare the storage and I/O requirements of large-scale HPC workloads with those of the cloud and show how HPC’s unique requirements have led NERSC to deploy NVMe in the form of burst buffers and all-flash parallel file systems rather than block- and object-based storage. We will then explore how recent technological advances that target enterprise and cloud I/O workloads may also benefit HPC, and we will highlight a few remaining challenge areas in which innovation is required.”

NERSC Computer Scientist wins First Corones Award

Today the Krell Institute announced that Rebecca Hartman-Baker, a computer scientist at the Department of Energy’s (DOE’s) National Energy Research Scientific Computing Center (NERSC), is the inaugural recipient of the James Corones Award in Leadership, Community Building and Communication. Hartman-Baker leads the User Engagement Group at NERSC, a DOE Office of Science user facility based at Lawrence Berkeley National Laboratory. A selection committee representing the DOE national laboratories, academia and Krell cited Hartman-Baker’s “broad impact on HPC training; her hands-on approach to building a diverse and inclusive HPC user community, particularly among students and early-career computational scientists; and her mastery in communicating the excitement and potential of computational science.”

ISC 2019 Recap from Glenn Lockwood

In this special guest feature, Glenn Lockwood from NERSC shares his impressions of ISC 2019 from an I/O perspective. “I was fortunate enough to attend the ISC HPC conference this year, and it was a delightful experience from which I learned quite a lot. For the benefit of anyone interested in what they have missed, I took the opportunity on the eleven-hour flight from Frankfurt to compile my notes and thoughts over the week.”

GPU Hackathon gears up for Future Perlmutter Supercomputer

NERSC recently hosted its first user hackathon to begin preparing key codes for the next-generation architecture of the Perlmutter system. Over four days, experts from NERSC, Cray, and NVIDIA worked with application code teams to help them gain new understanding of the performance characteristics of their applications and optimize their codes for the GPU processors in Perlmutter. “By starting this process early, the code teams will be well prepared for running on GPUs when NERSC deploys the Perlmutter system in 2020.”

Video: Exascale Deep Learning for Climate Analytics

Thorsten Kurth and Josh Romero gave this talk at the GPU Technology Conference. “We’ll discuss how we scaled the training of a single deep learning model to 27,360 V100 GPUs (4,560 nodes) on the OLCF Summit HPC System using the high-productivity TensorFlow framework. This talk is targeted at deep learning practitioners who are interested in learning what optimizations are necessary for training their models efficiently at massive scale.”
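The core pattern behind this kind of multi-GPU scaling is synchronous data-parallel training: each worker computes gradients on its own data shard, and an all-reduce averages them before every update. The NumPy sketch below illustrates that pattern on a toy linear-regression problem; the worker count, model, and learning rate are illustrative only, not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-regression "model": one weight vector shared by all workers.
n_workers, n_features, n_per_worker = 4, 3, 32
w_true = np.array([2.0, -1.0, 0.5])
w = np.zeros(n_features)

# Each worker holds its own shard of the data (data parallelism).
shards = []
for _ in range(n_workers):
    X = rng.normal(size=(n_per_worker, n_features))
    y = X @ w_true
    shards.append((X, y))

lr = 0.1
for step in range(200):
    # Each worker computes a local gradient on its shard...
    grads = [X.T @ (X @ w - y) / len(y) for X, y in shards]
    # ...then an all-reduce averages the gradients, mimicking the
    # synchronous multi-GPU step (in practice done over NCCL/MPI).
    g = np.mean(grads, axis=0)
    w -= lr * g

print(np.round(w, 2))  # converges toward w_true
```

At real scale the averaging step is the communication bottleneck, which is why work like this focuses on efficient all-reduce and per-GPU throughput rather than on the update rule itself.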

CosmoGAN Neural Network to Study Dark Matter

As cosmologists and astrophysicists delve deeper into the darkest recesses of the universe, their need for increasingly powerful observational and computational tools has expanded exponentially. From facilities such as the Dark Energy Spectroscopic Instrument to supercomputers like Lawrence Berkeley National Laboratory’s Cori system at NERSC, they are on a quest to collect, simulate, and analyze […]

Video: Simulations of Antarctic Meltdown should send chills on Earth Day

In this video, researchers investigate the millennial-scale vulnerability of the Antarctic Ice Sheet (AIS) due solely to the loss of its ice shelves. Starting at the present-day, the AIS evolves for 1000 years, exposing the floating ice shelves to an extreme thinning rate, which results in their complete collapse. The visualizations show the first 500 […]