At SC20: Intel Provides Aurora Update as Argonne Developers Use Intel Xe-HP GPUs in Lieu of ‘Ponte Vecchio’

In an update to yesterday’s “Bridge to ‘Ponte Vecchio'” story, today we interviewed Jeff McVeigh, Intel VP/GM of data center XPU products and solutions, who briefed us on developments at Intel with direct bearing on Aurora: the projected delivery of Ponte Vecchio (unchanged); Aurora’s deployment (sooner than forecast yesterday by industry analyst firm Hyperion Research); Intel’s “XPU” cross-architecture strategy and its impact on Aurora application development work ongoing at Argonne; and the release next month of the first production version of oneAPI, Intel’s cross-architecture programming model for CPUs, GPUs, FPGAs and other accelerators.

Exascale Computing Project Issues New Release of Extreme-Scale Scientific Software Stack

The Exascale Computing Project (ECP) has announced the availability of the Extreme-Scale Scientific Software Stack (E4S) v1.2 release. ECP, a collaborative effort of the U.S. Department of Energy’s Office of Science and the National Nuclear Security Administration, said the E4S is a community effort to provide open source software packages for developing, deploying and running […]

SC20: IDEAS Productivity Team Announces Software Events

The IDEAS Productivity Team and others in the HPC community are organizing software-related events at SC20, Nov. 9-19. IDEAS is a family of projects supported by the U.S. Department of Energy addressing challenges in HPC software development productivity and software sustainability in computational science and engineering. One of them, IDEAS-ECP, is supported by DOE’s Exascale Computing Project to […]

Getting to Exascale: Nothing Is Easy

In the weeks leading to today’s Exascale Day observance, we set ourselves the task of asking supercomputing experts about the unique challenges, the particularly vexing problems, of building a computer capable of 1,000,000,000,000,000,000 calculations per second. Readers of this publication might guess, given Intel’s trouble producing the 7nm “Ponte Vecchio” GPU for its delayed Aurora system for Argonne National Laboratory, that compute is the toughest exascale nut to crack. But according to the people we interviewed, the difficulties of engineering exascale-class supercomputing run the systems gamut. As we listened to exascale’s daunting litany of technology difficulties….

What May Come from Exascale? Improved Medicines, Longer-range Batteries, Better Control of 3D Parts, for Starters

As Exascale Day (Oct. 18) approaches, we thought it appropriate to post a recent article from Scott Gibson of the Exascale Computing Project (ECP), an overview of the anticipated advances in scientific discovery enabled by exascale-class supercomputers. Much of this research will focus on atomic physics and its impact on such areas as catalysts used in industrial conversion, molecular dynamics simulations and quantum mechanics used to develop new materials for improved medicines, batteries, sensors and computing devices.

Exascale Day: Goodyear’s CTO Talks Exascale’s Coming Industrial Design Advantages

It’s Exascale Awareness Week, the lead-up to Exascale Day this Sunday, Oct. 18 (10^18), and while we mainly hear about the anticipated benefits of exascale-class computing for scientific discovery, there is also the economic competitiveness motive for exascale. In this video produced by DOE’s Exascale Computing Project (ECP), Goodyear’s Chief Technology Officer Chris […]

DOE Under Secretary for Science Dabbar’s Exascale Update: Frontier to Be First, Aurora to Be Monitored

As Exascale Day (October 18) approaches, U.S. Department of Energy Under Secretary for Science Paul Dabbar has commented on the hottest exascale question of the day: which of the country’s first three systems will be stood up first? In a recent, far-reaching interview with us, Dabbar confirmed what has been expected for more than two months, that the first U.S. exascale system will not, as planned, be the Intel-powered Aurora system at Argonne National Laboratory. It will instead be HPE-Cray’s Frontier, powered by AMD CPUs and GPUs and designated for Oak Ridge National Laboratory.

Video: Exascale for Earth System Modeling of Storms, Droughts, Sea Level Rise

In this interview, award-winning scientist Mark Taylor at Sandia National Laboratories’ Center for Computing Research talks about the use of exascale-class supercomputers – to be delivered to three U.S. Department of Energy national labs in 2021 – for large-scale weather and water resource forecasting. Taylor is chief computational scientist for the DOE’s Energy Exascale […]

Let’s Talk Exascale Podcast – ECP Leadership Discuss Project Highlights, Challenges, Impact

The U.S. Department of Energy’s Exascale Computing Project (ECP) is tasked with guiding the U.S. effort to build a “capable exascale ecosystem” by the early to mid-2020s and is part of the Exascale Computing Initiative, a partnership between DOE’s Office of Science and the National Nuclear Security Administration. In this podcast, members of ECP’s leadership […]

Exascale Exasperation: Why DOE Gave Intel a 2nd Chance; Can Nvidia GPUs Ride to Aurora’s Rescue?

The most talked-about topic in HPC these days – another Intel chip delay and therefore delay of the U.S.’s flagship Aurora exascale system – is something no one directly involved wants to talk about. Not Argonne National Laboratory, where Intel was to install Aurora in 2021; not the Department of Energy’s Exascale Computing Project, guiding […]