The Galactos Project: Using HPC To Run One of Cosmology’s Hardest Challenges

Debbie Bard from NERSC gave this talk at the HPC User Forum. “We present Galactos, a high-performance implementation of a novel O(N^2) algorithm that uses a load-balanced k-d tree and spherical harmonic expansions to compute the anisotropic 3PCF. Our implementation is optimized for the Intel Xeon Phi architecture, exploiting SIMD parallelism, instruction and thread concurrency, and significant L1 and L2 cache reuse, reaching 39% of peak performance on a single node. Galactos scales to the full Cori system, achieving 9.8 PF (peak) and 5.06 PF (sustained) across 9636 nodes, making the 3PCF easily computable for all galaxies in the observable universe.”
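The 3PCF here is the three-point correlation function of galaxy positions. For readers unfamiliar with the approach, the sketch below illustrates the core idea in plain Python (it is not the Galactos code): a k-d tree gathers each galaxy's neighbors within a radial bin, and those neighbors are expanded in spherical harmonics, so the 3PCF multipoles can be assembled from products of per-galaxy a_lm coefficients rather than from an explicit O(N^3) triplet count. The bin edges and ell_max below are illustrative placeholders.

```python
# Minimal sketch of a spherical-harmonic 3PCF estimator (illustrative only,
# not the Galactos implementation). Expanding each galaxy's neighbors in
# spherical harmonics reduces the triplet sum to an O(N^2) pair sum.
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import sph_harm

def alm_per_galaxy(positions, r_min, r_max, ell_max=2):
    """Accumulate a_lm coefficients over neighbors in one radial bin."""
    tree = cKDTree(positions)
    n = len(positions)
    alm = np.zeros((n, ell_max + 1, 2 * ell_max + 1), dtype=complex)
    for i, p in enumerate(positions):
        for j in tree.query_ball_point(p, r_max):
            d = positions[j] - p
            r = np.linalg.norm(d)
            if j == i or r < r_min:
                continue
            theta = np.arccos(d[2] / r)    # polar angle of the separation
            phi = np.arctan2(d[1], d[0])   # azimuthal angle
            for ell in range(ell_max + 1):
                for m in range(-ell, ell + 1):
                    # scipy's sph_harm signature is (m, l, azimuth, polar)
                    alm[i, ell, m + ell] += np.conj(sph_harm(m, ell, phi, theta))
    return alm

# The multipoles zeta_l(r1, r2) then follow from sums over galaxies i of
# sum_m alm_i(r1) * conj(alm_i(r2)), up to normalization.
```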

Deep Learning at Scale for Cosmology Research

In this video from Google I/O 2018, Debbie Bard from NERSC describes deep learning at scale for cosmology research. “Debbie Bard is acting group lead for the Data Science Engagement Group at the National Energy Research Scientific Computing Center (NERSC) at Berkeley National Lab. A native of the UK, she has a career spanning research in particle physics, cosmology and computing on both sides of the Atlantic.”

Video: Addressing Key Science Challenges with Adversarial Neural Networks

Wahid Bhimji from NERSC gave this talk at the 2018 HPC User Forum in Tucson. “Machine Learning and Deep Learning are increasingly used to analyze scientific data, in fields as diverse as neuroscience, climate science and particle physics. On this page you will find links to examples of scientific use cases using deep learning at NERSC, information about what deep learning packages are available at NERSC, and details of how to scale up your deep learning code on Cori to take advantage of the compute power available from Cori’s KNL nodes.”
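As an illustration of what “scaling up” typically looks like, here is a minimal data-parallel training sketch using Horovod with Keras, one common pattern for spreading training across many CPU nodes such as Cori’s KNL partition. The model, dataset, and hyperparameters are placeholder assumptions for the sketch, not NERSC’s specific recipe.

```python
# Illustrative data-parallel training with Horovod (a common multi-node
# scaling pattern); placeholder model and data, not NERSC's exact setup.
import tensorflow as tf
import horovod.tensorflow.keras as hvd

hvd.init()  # typically launched with one rank per node, e.g. via srun/mpirun

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Scale the learning rate with the worker count, and wrap the optimizer so
# gradients are averaged across all ranks with allreduce at each step.
opt = hvd.DistributedOptimizer(tf.keras.optimizers.SGD(0.01 * hvd.size()))
model.compile(loss="sparse_categorical_crossentropy", optimizer=opt)

(x, y), _ = tf.keras.datasets.mnist.load_data()
x = x.reshape(-1, 784).astype("float32") / 255.0
# A real run would shard the dataset so each rank sees a distinct subset.

model.fit(
    x, y, batch_size=64, epochs=1,
    # Broadcast rank 0's initial weights so all ranks start in sync.
    callbacks=[hvd.callbacks.BroadcastGlobalVariablesCallback(0)],
    verbose=1 if hvd.rank() == 0 else 0,
)
```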

Reconstructing Nuclear Physics Experiments with Supercomputers

For the first time, scientists have used HPC to reconstruct the data collected by a nuclear physics experiment—an advance that could dramatically reduce the time it takes to make detailed data available for scientific discoveries. “By running multiple computing jobs simultaneously on the allotted supercomputing cores, the team transformed 4.73 petabytes of raw data into 2.45 petabytes of ‘physics-ready’ data in a fraction of the time it would have taken using in-house high-throughput computing resources, even with a two-way transcontinental data journey.”

Hayward Fault Earthquake Simulations Increase Fidelity of Ground Motions

Researchers at LLNL are using supercomputers to simulate the onset of earthquakes in California. “This study shows that powerful supercomputing can be used to calculate earthquake shaking on a large, regional scale with more realism than we’ve ever been able to produce before,” said Artie Rodgers, LLNL seismologist and lead author of the paper.

Video: Deep Learning for Science

Prabhat from NERSC and Michael F. Wehner from LBNL gave this talk at the Intel HPC Developer Conference in Denver. “Deep Learning has revolutionized the fields of computer vision, speech recognition and control systems. Can Deep Learning (DL) work for scientific problems? This talk will explore a variety of Lawrence Berkeley National Laboratory’s applications that are currently benefiting from DL.”

Speeding Data Transfer with ESnet’s Petascale DTN Project

Researchers at the DOE are looking to dramatically increase their data transfer capabilities with the Petascale DTN project. “The collaboration, named the Petascale DTN project, also includes the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign, a leading center funded by the National Science Foundation (NSF). Together, the collaboration aims to achieve regular disk-to-disk, end-to-end transfer rates of one petabyte per week between major facilities, which translates to achievable throughput rates of about 15 Gbps on real-world science data sets.”
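That conversion is straightforward to check: one petabyte per week (taking a petabyte here as 2^50 bytes, which matches the quoted figure) works out to roughly 15 Gbps of sustained throughput.

```python
# Back-of-the-envelope check: 1 PB/week -> sustained Gbps.
BYTES_PER_PB = 2**50               # one pebibyte
SECONDS_PER_WEEK = 7 * 24 * 3600   # 604,800 seconds
gbps = BYTES_PER_PB * 8 / SECONDS_PER_WEEK / 1e9
print(f"{gbps:.1f} Gbps")          # ~14.9 Gbps, i.e. the ~15 Gbps quoted
```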

Supercomputing Earthquakes in the Age of Exascale

Tomorrow’s exascale supercomputers will enable researchers to accurately simulate the ground motions of regional earthquakes quickly and in unprecedented detail. “Simulations of high frequency earthquakes are more computationally demanding and will require exascale computers,” said David McCallen, who leads the ECP-supported effort. “Ultimately, we’d like to get to a much larger domain, higher frequency resolution and speed up our simulation time.”

NERSC Lends a Hand to the 2017 Tapia Conference on Diversity in Computing

The recent Tapia Conference on Diversity in Computing in Atlanta brought together some 1,200 undergraduate and graduate students, faculty, researchers and professionals in computing from diverse backgrounds and ethnicities to learn from leading thinkers, present innovative ideas and network with peers.

Sowing Seeds of Quantum Computation at Berkeley Lab

“Berkeley Lab’s tradition of team science, as well as its proximity to UC Berkeley and Silicon Valley, makes it an ideal place to work on quantum computing end-to-end,” says Jonathan Carter, Deputy Director of Berkeley Lab Computing Sciences. “We have physicists and chemists at the lab who are studying the fundamental science of quantum mechanics, engineers to design and fabricate quantum processors, as well as computer scientists and mathematicians to ensure that the hardware will be able to effectively compute DOE science.”