Preparing for Exascale: ALCF’s Aurora Early Science Program and Visualizing Cancer’s Spread

Scientists are preparing a cancer modeling study to run on Argonne’s upcoming Aurora supercomputer, which is scheduled to go online in 2022. The U.S. Department of Energy’s (DOE) Argonne National Laboratory will be home to one of the nation’s first exascale supercomputers. To prepare codes for the architecture and scale of […]

‘Intel Is Back’: Gelsinger Delivers Upbeat Update, Expanded Manufacturing in U.S., Europe

New Intel CEO Pat Gelsinger delivered an upbeat corporate update this afternoon in the form of a webinar that emphasized Intel’s integrated device manufacturing, “IDM.2.0” strategy for manufacturing and product development that combines the company’s internal network of factories with third-party outsourced capacity and new Intel foundries in the U.S. and Europe. “As I hope […]

Let’s Talk Exascale: Getting Applications Aurora-Ready

This episode of Let’s Talk Exascale from DOE’s Exascale Computing Project is the first in a series on best practices in preparing applications for the upcoming Aurora exascale supercomputer at the US Department of Energy’s Argonne National Laboratory. In these discussions, the emphasis will be on optimizing code to run on GPUs and providing developers […]

At SC20: Intel Provides Aurora Update as Argonne Developers Use Intel Xe-HP GPUs in Lieu of ‘Ponte Vecchio’

In an update to yesterday’s “Bridge to ‘Ponte Vecchio'” story, today we interviewed Jeff McVeigh, Intel VP/GM of data center XPU products and solutions, who updated us on developments at Intel with direct bearing on Aurora, including the projected delivery of Ponte Vecchio (unchanged); on Aurora’s deployment (sooner than forecast yesterday by industry analyst firm Hyperion Research); on Intel’s “XPU” cross-architecture strategy and its impact on Aurora application development work ongoing at Argonne; and on the upcoming release of the first production version of oneAPI (next month), Intel’s cross-architecture programming model for CPUs, GPUs, FPGAs and other accelerators.

At SC20: A Bridge to ‘Ponte Vecchio’: Argonne Aurora Developers Using Substitute Intel Xe-HP GPUs

Intel and Argonne National Laboratory said today they are using GPUs based on Intel’s Xe-HP microarchitecture and Intel oneAPI toolkits for development of scientific applications to be used on the Aurora exascale system — in anticipation of later delivery of Intel 7nm ‘Ponte Vecchio’ GPUs, which will drive Aurora when the delayed system is deployed […]

The Hyperion-insideHPC Interviews: Argonne’s David Martin Talks Industrial HPC and Accessible Exascale

David Martin manages the Industry Partnerships and Outreach program at Argonne National Laboratory, and in this interview he talks about the never-ending, always-expanding demand for more power from HPC users – and the possibility that the upcoming exascale systems, including Argonne’s Aurora, may be more accessible than might be expected. “I think that […]

Exascale Exasperation: Why DOE Gave Intel a 2nd Chance; Can Nvidia GPUs Ride to Aurora’s Rescue?

The most talked-about topic in HPC these days – another Intel chip delay and therefore delay of the U.S.’s flagship Aurora exascale system – is something no one directly involved wants to talk about. Not Argonne National Laboratory, where Intel was to install Aurora in 2021; not the Department of Energy’s Exascale Computing Project, guiding […]

Another Intel 7nm Chip Delay – What Does it Mean for Aurora Exascale?

The saga of Intel’s inability to deliver a 7nm process chip and a supercomputer called Aurora to Argonne National Laboratory opened new chapters yesterday with Intel CEO Bob Swan’s statements that the company’s 7nm “Ponte Vecchio” GPU, integral to its Aurora exascale system scheduled for delivery next year, will be delayed at least six months. […]

Building for the Future Aurora Supercomputer at Argonne

“Argonne National Laboratory has created a process to assist in moving large applications to a new system. Their current HPC system, Mira, will give way to the next-generation system, Aurora, which is part of the Collaboration of Oak Ridge, Argonne, and Livermore (CORAL) joint procurement. Since Aurora contains technology that was not available in Mira, the challenge is to give scientists and developers access to some of the new technology well before the new system goes online. This allows for a more productive environment once the full-scale new system is up.”