Job of the Week: HPC Architecture and Performance Engineer at LBNL

Lawrence Berkeley National Lab is seeking an HPC Architecture and Performance Engineer in our Job of the Week. “Berkeley Lab’s National Energy Research Scientific Computing Center (NERSC) has an opening for a Computer Systems Engineer 3. The incumbent will contribute to an ongoing Advanced Technology Group (ATG) effort to develop a complete understanding of the issues that lead to improved application and computer system performance on extreme-scale advanced architectures. As a team member, they will contribute to efforts for NERSC in evaluating existing and emerging High Performance Computing (HPC) systems by analyzing the performance characteristics of leading-edge DOE Office of Science application codes.”

Podcast: Deep Learning for Scientific Data Analysis

In this NERSC News Podcast, Debbie Bard from NERSC describes how Deep Learning can help scientists accelerate their research. “Deep learning is enjoying unprecedented success in a variety of commercial applications, but it is also beginning to find its footing in science. Just a decade ago, few practitioners could have predicted that deep learning-powered systems would surpass human-level performance in computer vision and speech recognition tasks.”

Extreme Scale Deep Learning at NERSC

Thorsten Kurth from LBNL gave this talk at the PASC18 conference. “We present various studies on very large scale distributed deep learning on HPC systems including the ~10k node Intel Xeon Phi-based Cori system at NERSC. We explore CNN classification architectures and generative adversarial networks for high energy physics (HEP) problems using large images corresponding to full LHC detectors and high-resolution cosmology convergence maps.”
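
As a rough illustration of the kind of CNN classifier such studies scale up, the sketch below defines a small convolutional network for detector-style 2D images. The architecture, input size, and class count are illustrative assumptions, not the model from the talk.

```python
# A minimal sketch (not the model from the talk) of a CNN binary classifier
# for detector-style 2D images, e.g. signal vs. background in an HEP dataset.
# Layer widths and the 224x224 single-channel input are illustrative assumptions.
import torch
import torch.nn as nn

class DetectorCNN(nn.Module):
    def __init__(self, in_channels=1, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),  # collapse spatial dims so any image size works
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Example: a batch of 4 single-channel 224x224 "detector images"
model = DetectorCNN()
logits = model(torch.randn(4, 1, 224, 224))
print(logits.shape)  # torch.Size([4, 2])
```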

The Galactos Project: Using HPC To Run One of Cosmology’s Hardest Challenges

Debbie Bard from NERSC gave this talk at the HPC User Forum. “We present Galactos, a high performance implementation of a novel O(N²) algorithm that uses a load-balanced k-d tree and spherical harmonic expansions to compute the anisotropic three-point correlation function (3PCF). Our implementation is optimized for the Intel Xeon Phi architecture, exploiting SIMD parallelism, instruction and thread concurrency, and significant L1 and L2 cache reuse, reaching 39% of peak performance on a single node. Galactos scales to the full Cori system, achieving 9.8 PF (peak) and 5.06 PF (sustained) across 9636 nodes, making the 3PCF easily computable for all galaxies in the observable universe.”
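
To illustrate the k-d tree building block mentioned in the abstract, the sketch below uses SciPy to find all galaxy pairs within a maximum separation and histogram their distances, the kind of neighbor search that correlation-function codes accelerate. This is not the Galactos implementation, which adds the spherical harmonic expansion, load balancing, and Xeon Phi tuning described above; the positions here are random toy data.

```python
# Minimal k-d-tree pair search, the basic ingredient of correlation-function
# estimators. Toy random positions; not the Galactos code.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
galaxies = rng.uniform(0.0, 100.0, size=(10_000, 3))   # toy 3D positions
r_max = 5.0                                             # maximum separation of interest

tree = cKDTree(galaxies)
pairs = tree.query_pairs(r_max, output_type="ndarray")  # all pairs closer than r_max
sep = np.linalg.norm(galaxies[pairs[:, 0]] - galaxies[pairs[:, 1]], axis=1)

# Histogram separations into radial bins, as a correlation-function estimator would.
counts, edges = np.histogram(sep, bins=10, range=(0.0, r_max))
print(len(pairs), "pairs;", counts)
```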

Deep Learning at Scale for Cosmology Research

In this video from Google I/O 2018, Debbie Bard from NERSC describes Deep Learning at scale for cosmology research. “Debbie Bard is acting group lead for the Data Science Engagement Group at the National Energy Research Scientific Computing Center (NERSC) at Berkeley National Lab. A native of the UK, her career spans research in particle physics, cosmology and computing on both sides of the Atlantic.”

Video: Addressing Key Science Challenges with Adversarial Neural Networks

Wahid Bhimji from NERSC gave this talk at the 2018 HPC User Forum in Tucson. “Machine Learning and Deep Learning are increasingly used to analyze scientific data, in fields as diverse as neuroscience, climate science and particle physics. On this page you will find links to examples of scientific use cases using deep learning at NERSC, information about what deep learning packages are available at NERSC, and details of how to scale up your deep learning code on Cori to take advantage of the compute power available from Cori’s KNL nodes.”
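
For readers curious what scaling up a training job across many nodes looks like in practice, the sketch below shows synchronous data parallelism using PyTorch’s DistributedDataParallel. It is purely illustrative and not necessarily the package or configuration NERSC documents for Cori; the launcher-provided rank/world-size environment and the tiny stand-in model are assumptions.

```python
# Illustrative synchronous data-parallel training loop. Each rank (one per node,
# launched by e.g. srun or torchrun) trains on its own data shard; gradients are
# averaged across ranks during backward(). Not NERSC's documented setup.
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="gloo")   # "gloo" works on CPU-only nodes
    rank = dist.get_rank()

    model = torch.nn.Linear(128, 10)          # stand-in for a real network
    ddp_model = DDP(model)                    # wraps the model for gradient all-reduce
    opt = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
    loss_fn = torch.nn.CrossEntropyLoss()

    for step in range(10):
        x = torch.randn(32, 128)              # this rank's (random, toy) mini-batch
        y = torch.randint(0, 10, (32,))
        opt.zero_grad()
        loss = loss_fn(ddp_model(x), y)
        loss.backward()                       # synchronous gradient averaging happens here
        opt.step()
        if rank == 0:
            print(f"step {step} loss {loss.item():.3f}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```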

Reconstructing Nuclear Physics Experiments with Supercomputers

For the first time, scientists have used HPC to reconstruct the data collected by a nuclear physics experiment—an advance that could dramatically reduce the time it takes to make detailed data available for scientific discoveries. “By running multiple computing jobs simultaneously on the allotted supercomputing cores, the team transformed 4.73 petabytes of raw data into 2.45 petabytes of “physics-ready” data in a fraction of the time it would have taken using in-house high-throughput computing resources, even with a two-way transcontinental data journey.”

Hayward Fault Earthquake Simulations Increase Fidelity of Ground Motions

Researchers at LLNL are using supercomputers to simulate earthquakes along California’s Hayward Fault. “This study shows that powerful supercomputing can be used to calculate earthquake shaking on a large, regional scale with more realism than we’ve ever been able to produce before,” said Artie Rodgers, LLNL seismologist and lead author of the paper.

Video: Deep Learning for Science

Prabhat from NERSC and Michael F. Wehner from LBNL gave this talk at the Intel HPC Developer Conference in Denver. “Deep Learning has revolutionized the fields of computer vision, speech recognition and control systems. Can Deep Learning (DL) work for scientific problems? This talk will explore a variety of Lawrence Berkeley National Laboratory’s applications that are currently benefiting from DL.”

Speeding Data Transfer with ESnet’s Petascale DTN Project

Researchers at DOE facilities are working to dramatically increase their data transfer capabilities through the Petascale DTN (Data Transfer Node) project. “The collaboration, named the Petascale DTN project, also includes the National Center for Supercomputing Applications (NCSA) at the University of Illinois in Urbana-Champaign, a leading center funded by the National Science Foundation (NSF). Together, the collaboration aims to achieve regular disk-to-disk, end-to-end transfer rates of one petabyte per week between major facilities, which translates to achievable throughput rates of about 15 Gbps on real world science data sets.”
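
A quick back-of-the-envelope check shows where a figure like that comes from: one petabyte per week corresponds to roughly 13 Gbps of continuous throughput, so a working target of about 15 Gbps on real-world data sets sits just above that continuous-rate floor.

```python
# 1 PB/week expressed as a continuous bit rate (decimal petabyte assumed).
petabyte_bits = 1e15 * 8
seconds_per_week = 7 * 24 * 3600
gbps = petabyte_bits / seconds_per_week / 1e9
print(f"1 PB/week ≈ {gbps:.1f} Gbps sustained")  # ≈ 13.2 Gbps
```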