
Exascale Computing Project Software Technology Issues Updated Capability Assessment Report

Jan. 15, 2021 — The latest update (V2.5) of the US Department of Energy’s (DOE) Exascale Computing Project (ECP) Software Technology (ST) Capability Assessment Report (CAR) is now available from the ECP website. It provides an overview and assessment of current ECP ST capabilities and activities, giving stakeholders and the broader high-performance computing (HPC) […]

Update from the Frontier of Exascale Software Development

In advance of the scheduled shipment this year of the U.S.’s first exascale supercomputer, Frontier, at Oak Ridge National Laboratory, an international team of software developers led by a University of Delaware professor is working on a plasma physics application. An article published yesterday in the university’s UDaily by Tracey Bryant details the work underway […]

‘Let’s Talk Exascale’: How Supercomputing Is Shaking Up Earthquake Science

Supercomputing is bringing seismic change to earthquake science. A field that historically made predictions by looking back is now moving forward with HPC and physics-based models to simulate the earthquake process comprehensively, end to end. In this episode of the “Let’s Talk Exascale” podcast series from the U.S. Department of Energy’s Exascale Computing Project (ECP), David McCallen, leader of ECP’s Earthquake Sim (EQSIM) subproject, discusses his team’s work to help improve the design of more quake-resilient buildings and bridges.

At SC20: Intel Provides Aurora Update as Argonne Developers Use Intel Xe-HP GPUs in Lieu of ‘Ponte Vecchio’

In an update to yesterday’s “Bridge to ‘Ponte Vecchio'” story, today we interviewed Jeff McVeigh, Intel VP/GM of data center XPU products and solutions, who updated us on developments at Intel with direct bearing on Aurora, including the projected delivery of Ponte Vecchio (unchanged); on Aurora’s deployment (sooner than forecast yesterday by industry analyst firm Hyperion Research); on Intel’s “XPU” cross-architecture strategy and its impact on Aurora application development work ongoing at Argonne; and on the upcoming release of the first production version of oneAPI (next month), Intel’s cross-architecture programming model for CPUs, GPUs, FPGAs and other accelerators.

Exascale Computing Project Issues New Release of Extreme-Scale Scientific Software Stack

The Exascale Computing Project (ECP) has announced the availability of the Extreme-Scale Scientific Software Stack (E4S) v1.2 release. ECP, a collaborative effort of the U.S. Department of Energy’s Office of Science and the National Nuclear Security Administration, said the E4S is a community effort to provide open source software packages for developing, deploying and running […]

SC20: IDEAS Productivity Team Announces Software Events

The IDEAS Productivity Team and others in the HPC community are organizing software-related events at SC20, Nov. 9-19. IDEAS is a family of projects supported by the U.S. Department of Energy addressing challenges in HPC software development productivity and software sustainability in computational science and engineering. One of them, IDEAS-ECP, is supported by DOE’s Exascale Computing Project to […]

Getting to Exascale: Nothing Is Easy

In the weeks leading to today’s Exascale Day observance, we set ourselves the task of asking supercomputing experts about the unique challenges, the particularly vexing problems, of building a computer capable of 1,000,000,000,000,000,000 calculations per second. Readers of this publication might guess, given Intel’s trouble producing the 7nm “Ponte Vecchio” GPU for its delayed Aurora system for Argonne National Laboratory, that compute is the toughest exascale nut to crack. But according to the people we interviewed, the difficulties of engineering exascale-class supercomputing run the systems gamut. As we listened to exascale’s daunting litany of technology difficulties….

What May Come from Exascale? Improved Medicines, Longer-range Batteries, Better Control of 3D Parts, for Starters

As Exascale Day (Oct. 18) approaches, we thought it appropriate to post a recent article from Scott Gibson of the Exascale Computing Project (ECP), an overview of the anticipated advances in scientific discovery enabled by exascale-class supercomputers. Much of this research will focus on atomic physics and its impact on such areas as catalysts used in industrial conversion, molecular dynamics simulations and quantum mechanics used to develop new materials for improved medicines, batteries, sensors and computing devices.

DOE Under Secretary for Science Dabbar’s Exascale Update: Frontier to Be First, Aurora to Be Monitored

As Exascale Day (October 18) approaches, U.S. Department of Energy Under Secretary for Science Paul Dabbar has commented on the hottest exascale question of the day: which of the country’s first three systems will be stood up first? In a recent, far-reaching interview with us, Dabbar confirmed what has been expected for more than two months, that the first U.S. exascale system will not, as planned, be the Intel-powered Aurora system at Argonne National Laboratory. It will instead be HPE-Cray’s Frontier, powered by AMD CPUs and GPUs and designated for Oak Ridge National Laboratory.

Video: Exascale for Earth System Modeling of Storms, Droughts, Sea Level Rise

In this interview, award-winning scientist Mark Taylor at Sandia National Laboratories’ Center for Computing Research talks about the use of exascale-class supercomputers – to be delivered to three U.S. Department of Energy national labs in 2021 – for large-scale weather and water resource forecasting. Taylor is chief computational scientist for the DOE’s Energy Exascale […]