ClimateNet Looks to Machine Learning for Global Climate Science

Pattern recognition tasks such as classification, localization, object detection and segmentation have remained challenging problems in the weather and climate sciences. Now, a team at the Lawrence Berkeley National Laboratory is developing ClimateNet, a project that will bring the power of deep learning methods to identify important weather and climate patterns via expert-labeled, community-sourced open datasets and architectures.

NERSC Hosts GPU Hackathon in Preparation for Perlmutter Supercomputer

NERSC recently hosted a successful GPU Hackathon event in preparation for its next-generation Perlmutter supercomputer. Perlmutter, a pre-exascale Cray Shasta system slated to be delivered in 2020, will feature a number of new hardware and software innovations and is NERSC's first supercomputing system designed with both data analysis and simulation in mind. Unlike previous NERSC systems, Perlmutter will use a combination of nodes with only CPUs, as well as nodes featuring both CPUs and GPUs.

Moving Mountains of Data at NERSC

Researchers at NERSC face the daunting task of moving 43 years' worth of archival data across the network to new tape libraries: a whopping 120 petabytes! “Even with all of this in place, it will still take about two years to move 43 years’ worth of NERSC data. Several factors contribute to this lengthy copy operation, including the extreme amount of data to be moved and the need to balance user access to the archive.”
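As a rough sanity check on those figures (assuming decimal petabytes and ignoring real-world overheads such as tape mount latency and the user-access balancing mentioned above), the sustained transfer rate implied by 120 PB in two years works out to roughly 1.9 GB/s:

```python
# Back-of-the-envelope estimate of the sustained throughput needed
# to copy the NERSC archive. Assumes decimal petabytes (10**15 bytes)
# and a flat two-year window; actual rates would vary with contention.
TOTAL_BYTES = 120 * 10**15           # 120 PB of archival data
SECONDS = 2 * 365 * 24 * 3600        # ~2 years, ignoring leap days

rate_gb_s = TOTAL_BYTES / SECONDS / 10**9
print(f"required sustained rate: {rate_gb_s:.1f} GB/s")  # ~1.9 GB/s
```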

Video: Performance and Productivity in the Big Data Era

In this video from the Intel User Forum at SC18, Prabhat from NERSC presents: Performance and Productivity in the Big Data Era. “At the National Energy Research Scientific Computing Center, HPC and AI converge and advance with Intel technologies. Explore how technologies, trends, and performance optimizations are applied to applications such as CosmoFlow using TensorFlow to help us better understand the universe.”

Looking Back at SC18 and the Road Ahead to Exascale

In this special guest feature from Scientific Computing World, Robert Roe reports on new technology and 30 years of the US supercomputing conference at SC18 in Dallas. “From our volunteers to our exhibitors to our students and attendees – SC18 was inspirational,” said SC18 general chair Ralph McEldowney. “Whether it was in technical sessions or on the exhibit floor, SC18 inspired people with the best in research, technology, and information sharing.”

NERSC: Sierra Snowpack Could Drop Significantly By End of Century

A future warmer world will almost certainly feature a decline in fresh water from the Sierra Nevada mountain snowpack. A new study by Berkeley Lab finds that the headwater regions of California’s 10 major reservoirs, representing nearly half of the state’s surface storage, could see an average 79 percent drop in peak snowpack water volume by 2100. “What’s more, the study found that peak timing, which has historically been April 1, could move up by as much as four weeks, meaning snow will melt earlier, thus increasing the time lag between when water is available and when it is most in demand.”

GPU-Powered Perlmutter Supercomputer Coming to NERSC in 2020

Today NERSC announced plans for Perlmutter, a pre-exascale system to be installed in 2020. With thousands of NVIDIA Tesla GPUs, the system is expected to deliver three times the computational power currently available on the Cori supercomputer at NERSC. “Optimized for science, the supercomputer will support NERSC’s community of more than 7,000 researchers. These scientists rely on high performance computing to build AI models, run complex simulations and perform data analytics. GPUs can speed up all three of these tasks.”

Video: Tackling Energy Storage Challenges at America’s National Labs

In this video, researchers use NERSC supercomputers to discover new battery materials. “The DOE’s InnovationXLab Energy Storage Summit took place September 18-19, 2018 at the SLAC National Accelerator Laboratory in Silicon Valley. Energy storage is one of the biggest challenges to unlocking the potential from the next generation of transportation and electricity grid technologies. The Summit will showcase the broad array of technical resources available from across DOE’s National Lab complex that can be leveraged by industry to address these challenges.”

Job of the Week: HPC Architecture and Performance Engineer at LBNL

Lawrence Berkeley National Lab is seeking an HPC Architecture and Performance Engineer in our Job of the Week. “Berkeley Lab’s National Energy Research Scientific Computing Center (NERSC) has an opening for a Computer Systems Engineer 3. The incumbent will contribute to an ongoing Advanced Technology Group (ATG) effort to develop a complete understanding of the issues that lead to improved application and computer system performance on extreme-scale advanced architectures. As a team member, they will contribute to efforts for NERSC in evaluating existing and emerging High Performance Computing (HPC) systems by analyzing the performance characteristics of leading-edge DOE Office of Science application codes.”

Podcast: Deep Learning for Scientific Data Analysis

In this NERSC News Podcast, Debbie Bard from NERSC describes how Deep Learning can help scientists accelerate their research. “Deep learning is enjoying unprecedented success in a variety of commercial applications, but it is also beginning to find its footing in science. Just a decade ago, few practitioners could have predicted that deep learning-powered systems would surpass human-level performance in computer vision and speech recognition tasks.”