Cori Supercomputer Bids NERSC and HPC Community Adieu

After nearly seven years of service, thousands of user projects, and tens of billions of compute hours, the Cori supercomputer at the National Energy Research Scientific Computing Center (NERSC) will be retired at the end of May. With its first cabinets installed in 2015 and the system fully deployed by 2016, Cori has been in […]

TACC: Simulation Reveals Secrets of Exotic Electrons

March 22, 2023 — The Texas Advanced Computing Center (TACC) has announced that simulations on its Frontera supercomputer have helped scientists map for the first time the conditions that characterize exotic electrons, called polarons, in 2D materials, the thinnest materials ever made. “A new leaf has turned in scientists’ hunt for developing cutting-edge materials used in […]

Perlmutter supercomputer to include more than 6,000 NVIDIA A100 GPUs

NERSC is among the early adopters of the new NVIDIA A100 Tensor Core GPU, announced by NVIDIA this week. More than 6,000 of the A100 chips will be included in NERSC’s next-generation Perlmutter system, which is based on an HPE Cray Shasta supercomputer that will be deployed at Lawrence Berkeley National Laboratory later this year. “Nearly half of the workload running at NERSC is poised to take advantage of GPU acceleration, and NERSC, HPE, and NVIDIA have been working together over the last two years to help the scientific community prepare to leverage GPUs for a broad range of research workloads.”

Supercomputing the Expansion of Wind Power

Researchers are using TACC supercomputers to map out a path toward growing wind power as an energy source in the United States. “This research is the first detailed study designed to develop scenarios for how wind energy can expand from the current level of seven percent of U.S. electricity supply to achieve the 20 percent by 2030 goal outlined by the U.S. Department of Energy’s National Renewable Energy Laboratory (NREL) in 2014.”

NERSC Supercomputer to Help Fight Coronavirus

“NERSC is a member of the COVID-19 High Performance Computing Consortium. In support of the Consortium, NERSC has reserved a portion of its Director’s Discretionary Reserve time on Cori, a Cray XC40 supercomputer, to support COVID-19 research efforts. The GPU partition on Cori was installed to help prepare applications for the arrival of Perlmutter, NERSC’s next-generation system that is scheduled to begin arriving later this year and will rely on GPUs for much of its computational power.”

Video: Managing large-scale cosmology simulations with Parsl and Singularity

Rick Wagner from Globus gave this talk at the Singularity User Group. “We package the imSim software inside a Singularity container so that it can be developed independently, packaged to include all dependencies, trivially scaled across thousands of computing nodes, and seamlessly moved between computing systems. To date, the simulation workflow has consumed more than 30M core hours using 4K nodes (256K cores) on Argonne’s Theta supercomputer and 2K nodes (128K cores) on NERSC’s Cori supercomputer.”
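For readers unfamiliar with the pattern, the sketch below shows the general shape of a Parsl workflow that fans containerized tasks out as independent jobs. The `imsim.sif` image name, the `imsim` command, and its flags are placeholders rather than imSim’s actual interface, and the local executor stands in for the site-specific Theta and Cori configurations used in production.

```python
import parsl
from parsl import bash_app
from parsl.config import Config
from parsl.executors import HighThroughputExecutor

# Minimal local configuration; production runs on Theta or Cori would
# swap in site-specific providers and launchers.
parsl.load(Config(executors=[HighThroughputExecutor(max_workers=4)]))

@bash_app
def run_sensor(image, sensor_id,
               stdout=parsl.AUTO_LOGNAME, stderr=parsl.AUTO_LOGNAME):
    # One task = one containerized simulation job. The image name,
    # command, and flags here are illustrative, not imSim's real CLI.
    return f"singularity exec {image} imsim --sensor {sensor_id}"

# Launch many independent tasks; Parsl returns a future per task.
futures = [run_sensor("imsim.sif", i) for i in range(16)]
for f in futures:
    f.result()  # block until that task finishes
```

Because each task is just a shell command inside the container, the same script can move between systems by changing only the Parsl executor configuration, which is the portability point the talk emphasizes.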

ClimateNet Looks to Machine Learning for Global Climate Science

Pattern recognition tasks such as classification, localization, object detection, and segmentation have remained challenging problems in the weather and climate sciences. Now, a team at Lawrence Berkeley National Laboratory is developing ClimateNet, a project that will bring deep learning methods to the identification of important weather and climate patterns via expert-labeled, community-sourced open datasets and architectures.
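As a concrete, if toy, illustration of the segmentation task: a fully convolutional network takes a stack of gridded atmospheric variables and emits a class label at every grid point. Everything below (the architecture, the four input variables, the three classes) is an assumption for demonstration, not ClimateNet’s actual model or dataset.

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Toy fully convolutional net for per-pixel labeling of climate
    fields; the real ClimateNet models are assumed to differ."""
    def __init__(self, in_channels=4, n_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, n_classes, kernel_size=1),  # per-pixel logits
        )

    def forward(self, x):
        return self.net(x)

# Hypothetical input: 4 atmospheric variables on a 128x256 lat/lon grid.
model = TinySegNet()
fields = torch.randn(1, 4, 128, 256)
logits = model(fields)        # shape (1, n_classes, 128, 256)
mask = logits.argmax(dim=1)   # class label at every grid point
```

Training such a model is where the expert-labeled datasets come in: the labels supply the per-pixel ground truth that a segmentation loss is computed against.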

Reconstructing Nuclear Physics Experiments with Supercomputers

For the first time, scientists have used HPC to reconstruct the data collected by a nuclear physics experiment—an advance that could dramatically reduce the time it takes to make detailed data available for scientific discoveries. “By running multiple computing jobs simultaneously on the allotted supercomputing cores, the team transformed 4.73 petabytes of raw data into 2.45 petabytes of ‘physics-ready’ data in a fraction of the time it would have taken using in-house high-throughput computing resources, even with a two-way transcontinental data journey.”
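The “many simultaneous jobs” pattern the quote describes is, at heart, embarrassingly parallel per-file processing. Here is a minimal Python sketch, with a placeholder `reconstruct` function and a hypothetical `raw/` directory layout standing in for the experiment’s real reconstruction code.

```python
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def reconstruct(raw_file: Path) -> Path:
    """Placeholder for one reconstruction job: read raw detector data,
    apply calibration and tracking, write a physics-ready file."""
    out = raw_file.with_suffix(".reco")
    # ... the experiment's real reconstruction code would run here ...
    out.touch()
    return out

if __name__ == "__main__":
    # Hypothetical directory of raw data files.
    raw_files = sorted(Path("raw").glob("*.dat"))

    # Fan the independent per-file jobs out across all available cores,
    # mirroring the many-simultaneous-jobs pattern described above.
    with ProcessPoolExecutor() as pool:
        for done in pool.map(reconstruct, raw_files):
            print("finished", done)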

Boosting Manycore Code Optimization Efforts with Roofline Technology

A software toolkit developed at Berkeley Lab to better understand supercomputer performance is now being used to boost application performance for researchers running codes at NERSC and other supercomputing facilities. “Since its initial development, what is now known as the Empirical Roofline Toolkit (ERT) has benefitted from contributions by several Berkeley Lab staff. Along the way, HPC users who write scientific applications for manycore systems have been able to apply the toolkit to their applications and see how changing parameters of their code can improve performance.”
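At its core, the roofline model bounds attainable performance by the lesser of the machine’s compute peak and the throughput its memory bandwidth allows at a kernel’s arithmetic intensity. A worked example, using illustrative machine numbers rather than measured ERT values:

```python
def roofline(peak_gflops, peak_bw_gbs, arithmetic_intensity):
    """Attainable performance under the roofline model:
    min(peak compute, arithmetic intensity * peak memory bandwidth)."""
    return min(peak_gflops, arithmetic_intensity * peak_bw_gbs)

# Illustrative machine: 2 TFLOP/s compute peak, 400 GB/s memory bandwidth.
peak_gflops = 2000.0
peak_bw = 400.0

# A stencil-like kernel at ~0.5 FLOP/byte is bandwidth-bound ...
print(roofline(peak_gflops, peak_bw, 0.5))   # 200.0 GFLOP/s
# ... while a dense kernel at ~10 FLOP/byte hits the compute ceiling.
print(roofline(peak_gflops, peak_bw, 10.0))  # 2000.0 GFLOP/s
```

This is the sense in which changing code parameters can improve performance: raising a kernel’s arithmetic intensity (for example, by improving data reuse) moves it up the bandwidth-bound slope toward the compute roof.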

Berkeley Lab Tunes NWChem for Intel Xeon Phi Processor

A team of researchers at Berkeley Lab, PNNL, and Intel is working hard to make sure that computational chemists are prepared to compute efficiently on next-generation exascale machines. Recently, the team achieved a milestone: successfully adding thread-level parallelism on top of MPI-level parallelism in the planewave density functional theory method within the popular software suite NWChem. “Planewave codes are useful for solution chemistry and materials science; they allow us to look at the structure, coordination, reactions and thermodynamics of complex dynamical chemical processes in solutions and on surfaces.”
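Conceptually, the hybrid scheme nests a second level of parallelism inside each MPI rank. The Python sketch below uses mpi4py plus a thread pool purely as a stand-in: NWChem’s actual second level is OpenMP threads inside its Fortran/C planewave kernels, and the array shapes and FFT “work” here are invented for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# MPI level: each rank owns a block of planewave coefficients.
# (The distribution and shapes here are made up for the example.)
local_block = np.random.rand(8, 4096)

def transform(row):
    # Thread level: stand-in for per-thread FFT/BLAS work; in NWChem
    # this level is OpenMP threads, not Python threads.
    return np.fft.fft(row)

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(transform, local_block))

# Combine a scalar across ranks, e.g. an energy-like contribution.
local_energy = float(sum(np.vdot(r, r).real for r in results))
total_energy = comm.allreduce(local_energy, op=MPI.SUM)
if rank == 0:
    print("total:", total_energy)
```

The design point is the same one the team describes: MPI distributes the data across nodes, while threads keep all the cores within a node busy on each rank’s share of the work.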