12th JLESC Workshop to Be Held Feb. 24-26

The 12th Joint Laboratory for Extreme Scale Computing (JLESC) Workshop will be held virtually from Wednesday, Feb. 24 to Friday, Feb. 26. It will bring together researchers in high performance computing from the JLESC partners INRIA, the University of Illinois, Argonne National Laboratory, Barcelona Supercomputing Center, Jülich Supercomputing Centre, RIKEN R-CCS and The University of Tennessee to explore […]

Exascale Computing Project Software Technology Issues Updated Capability Assessment Report

Jan. 15, 2021 — The latest update (V2.5) of the US Department of Energy’s (DOE) Exascale Computing Project (ECP) Software Technology (ST) Capability Assessment Report (CAR) is now available from the ECP website. It provides an overview and assessment of current ECP ST capabilities and activities, giving stakeholders and the broader high-performance computing (HPC) […]

Update from the Frontier of Exascale Software Development

In advance of the scheduled shipment this year of the U.S.’s first exascale supercomputer, Frontier, at Oak Ridge National Laboratory, an international team of software developers led by a University of Delaware professor is working on a plasma physics application. An article published yesterday in the university’s UDaily by Tracey Bryant details the work underway […]

‘Let’s Talk Exascale’: How Supercomputing Is Shaking Up Earthquake Science

Supercomputing is bringing seismic change to earthquake science. A field that historically has made predictions by looking backward is now moving forward with HPC and physics-based models to comprehensively simulate the earthquake process, end to end. In this episode of the “Let’s Talk Exascale” podcast series from the U.S. Department of Energy’s Exascale Computing Project (ECP), David McCallen, leader of ECP’s Earthquake Sim (EQSIM) subproject, discusses his team’s work to help improve the design of more quake-resilient buildings and bridges.

‘Let’s Talk Exascale’: Storing and Managing Exa-class Data Volumes

In this new edition of “Let’s Talk Exascale” from the Department of Energy’s Exascale Computing Project, the ECP’s Scott Gibson talks with Jim Ahrens of Los Alamos National Laboratory about the project’s data and visualization portfolio. As Ahrens said in this interview, “We can compute much faster than we can save and store data these […]

At SC20: Intel Provides Aurora Update as Argonne Developers Use Intel Xe-HP GPUs in Lieu of ‘Ponte Vecchio’

In an update to yesterday’s “Bridge to ‘Ponte Vecchio'” story, today we interviewed Jeff McVeigh, Intel VP/GM of data center XPU products and solutions, who updated us on developments at Intel with direct bearing on Aurora: the projected delivery of Ponte Vecchio (unchanged); Aurora’s deployment (sooner than forecast yesterday by industry analyst firm Hyperion Research); Intel’s “XPU” cross-architecture strategy and its impact on Aurora application development work ongoing at Argonne; and the upcoming release next month of the first production version of oneAPI, Intel’s cross-architecture programming model for CPUs, GPUs, FPGAs and other accelerators.

Exascale Computing Project Issues New Release of Extreme-Scale Scientific Software Stack

The Exascale Computing Project (ECP) has announced the availability of the Extreme-Scale Scientific Software Stack (E4S) v1.2 release. ECP, a collaborative effort of the U.S. Department of Energy’s Office of Science and the National Nuclear Security Administration, said the E4S is a community effort to provide open source software packages for developing, deploying and running […]

SC20: IDEAS Productivity Team Announces Software Events

The IDEAS Productivity Team and others in the HPC community are organizing software-related events at SC20, Nov. 9-19. IDEAS is a family of projects supported by the U.S. Department of Energy addressing challenges in HPC software development productivity and software sustainability in computational science and engineering. One of them, IDEAS-ECP, is supported by DOE’s Exascale Computing Project to […]

Getting to Exascale: Nothing Is Easy

In the weeks leading to today’s Exascale Day observance, we set ourselves the task of asking supercomputing experts about the unique challenges, the particularly vexing problems, of building a computer capable of 1,000,000,000,000,000,000 (a quintillion) calculations per second. Readers of this publication might guess, given Intel’s trouble producing the 7nm “Ponte Vecchio” GPU for its delayed Aurora system for Argonne National Laboratory, that compute is the toughest exascale nut to crack. But according to the people we interviewed, the difficulties of engineering exascale-class supercomputing run the systems gamut. As we listened to exascale’s daunting litany of technology difficulties….

What May Come from Exascale? Improved Medicines, Longer-range Batteries, Better Control of 3D Parts, for Starters

As Exascale Day (Oct. 18) approaches, we thought it appropriate to post a recent article from Scott Gibson of the Exascale Computing Project (ECP), an overview of the anticipated advances in scientific discovery enabled by exascale-class supercomputers. Much of this research will focus on atomic physics and its impact on such areas as catalysts used in industrial conversion, molecular dynamics simulations and quantum mechanics used to develop new materials for improved medicines, batteries, sensors and computing devices.