MLPerf-HPC Working Group seeks participation

In this special guest feature, Murali Emani from Argonne writes that a team of scientists from Department of Energy (DOE) labs has formed a working group called MLPerf-HPC to focus on benchmarking machine learning workloads for high performance computing. “As machine learning (ML) is becoming a critical component to help run applications faster, improve throughput and understand the insights from the data generated from simulations, benchmarking ML methods with scientific workloads at scale will be important as we progress towards next generation supercomputers.”
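
MLPerf benchmarks are typically scored as time-to-train to a target quality using a full reference harness; the sketch below is not MLPerf-HPC code, just a minimal Python illustration of the simplest measurement involved, with a hypothetical `train_step` callable and batch size standing in for a real workload.

```python
import time

def measure_throughput(train_step, samples_per_step, warmup=5, steps=50):
    """Time repeated training steps and report samples processed per second."""
    for _ in range(warmup):
        train_step()  # discard warm-up iterations (JIT compilation, caches)
    start = time.perf_counter()
    for _ in range(steps):
        train_step()
    elapsed = time.perf_counter() - start
    return steps * samples_per_step / elapsed

# Toy stand-in for a real training step, only to make the sketch runnable.
def dummy_step():
    sum(i * i for i in range(100_000))

print(f"{measure_throughput(dummy_step, samples_per_step=256):.1f} samples/sec")
```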

AMD to Power Cray Shasta Supercomputer at Navy DSRC

The Department of Defense High Performance Computing Modernization Program (HPCMP) is upgrading its supercomputing capabilities with a new Cray Shasta system powered by AMD EPYC processors. The system, the HPCMP’s first with more than 10 petaflops of peak computational performance, will be installed at the Navy DSRC facility at Stennis Space Center, Mississippi, and will serve users from all services and agencies of the Department of Defense.

Podcast: Co-Design for Online Data Analysis and Reduction at Exascale

In this Let’s Talk Exascale podcast, Ian Foster from Argonne National Laboratory describes how the CODAR project at ECP is addressing the need for data reduction, analysis, and management in the exascale era. “When compressing data produced by a simulation, the idea is to keep the parts that are scientifically interesting and toss those that are not. However, every application and, perhaps, every scientist, has a different definition of what ‘interesting’ means in that context. So, CODAR has developed a system called Z-checker to enable users to monitor the compression method.”
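
Z-checker’s actual interface is documented with the tool itself; purely as a hedged illustration of the kind of distortion metrics such a monitor reports, here is a small NumPy sketch that compares simulation output before and after lossy compression (the rounding step is a stand-in for a real compressor).

```python
import numpy as np

def distortion_report(original, decompressed):
    """Summarize how much a lossy compressor distorted an array."""
    diff = original.astype(np.float64) - decompressed.astype(np.float64)
    rmse = float(np.sqrt(np.mean(diff ** 2)))
    value_range = float(original.max() - original.min())
    return {
        "max_abs_error": float(np.max(np.abs(diff))),  # worst pointwise error
        "rmse": rmse,
        # PSNR relative to the data's value range; higher means less distortion
        "psnr_db": float("inf") if rmse == 0 else 20 * np.log10(value_range / rmse),
    }

# Toy example: "compress" by rounding to two decimals, then measure the damage.
data = np.random.default_rng(0).normal(size=10_000)
print(distortion_report(data, np.round(data, 2)))
```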

Video: A Preview of SC20 in Atlanta

In this video, SC20 General Chair Christine E. Cuicchi of the DoD High Performance Computing Modernization Program previews the Supercomputing conference coming to Atlanta in November. “Christine Cuicchi is director of the Navy Department of Defense Supercomputing Resource Center (Navy DSRC), operated by the Commander, Naval Oceanography and Meteorology Command (CNMOC). The center provides HPC, storage, networks, and computational expertise which are available to over 2,500 RDT&E, S&T, and acquisition professionals in the DoD.”

Job of the Week: HPC Architect at NVIDIA

NVIDIA is seeking an HPC Architect in our Job of the Week. “NVIDIA is developing processor and system architectures for accelerated high performance computing, machine learning, AI, datacenter and automotive computing. We are looking for an experienced performance architect to join our HPC performance analysis effort. This position offers you the opportunity to make a meaningful impact in a fast-moving, technology-focused company.”

NOAA to triple weather and climate supercomputing capacity

The United States is reclaiming a global top spot in high performance computing to support weather and climate forecasts. NOAA, part of the Department of Commerce, today announced a significant upgrade to the compute capacity, storage space, and interconnect speed of its Weather and Climate Operational Supercomputing System. This upgrade keeps the agency’s supercomputing capacity on par with other leading weather forecast centers around the world.

Super cooling unit saves water at Sandia HPC data center

A new high-efficiency cooling unit installed on the roof of Sandia National Laboratories’ supercomputer center saved 554,000 gallons of water during its first six months of operation last year, says David J. Martinez, engineering project lead for Sandia’s Infrastructure Computing Services. “The dramatic decrease in water use, important for a water-starved state, could be the model for cities and other large users employing a significant amount of water to cool thirsty supercomputer clusters springing up like mushrooms around the country,” says Martinez.

LBNL Breaks New Ground in Data Center Optimization

Berkeley Lab has been at the forefront of efforts to design, build, and optimize energy-efficient hyperscale data centers. “In the march to exascale computing, there are real questions about the hard limits you run up against in terms of energy consumption and cooling loads,” Elliott said. “NERSC is very interested in optimizing its facilities to be leaders in energy-efficient HPC.”
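
A standard yardstick for this kind of optimization is power usage effectiveness (PUE), the ratio of total facility energy to the energy delivered to IT equipment; a minimal sketch with invented figures:

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power usage effectiveness: total facility energy / IT energy (1.0 is ideal)."""
    return total_facility_kwh / it_equipment_kwh

# Invented figures: 13 GWh drawn overall to deliver 10 GWh of IT load.
# PUE 1.3 means 0.3 kWh of cooling and power overhead per kWh of computing.
print(pue(13_000_000, 10_000_000))  # -> 1.3
```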

GE Research Leverages World’s Top Supercomputer to Boost Jet Engine Efficiency

GE Research has been awarded access to Summit at the Oak Ridge Leadership Computing Facility (OLCF), the world’s #1-ranked supercomputer, to discover new ways to optimize the efficiency of jet engines and power generation equipment. Michal Osusky, the project’s leader from GE Research’s Thermosciences group, says access to the supercomputer and the OLCF support team will greatly accelerate insights into turbomachinery design improvements that lead to more efficient jet engines and power generation assets, stating, “We’re able to conduct experiments at unprecedented levels of speed, depth and specificity that allow us to perceive previously unobservable phenomena in how complex industrial systems operate. Through these studies, we hope to innovate new designs that enable us to propel the state of the art in turbomachinery efficiency and performance.”

UK to invest £1.2 billion for Supercomputing Weather and Climate Science

Today the UK announced plans to invest £1.2 billion in the world’s most powerful weather and climate supercomputer. The government investment will replace Met Office supercomputing capabilities over a 10-year period from 2022 to 2032. The current Met Office Cray supercomputers reach their end of life in late 2022. The first phase of the new supercomputer alone will increase the Met Office’s computing capacity six-fold.