SC20: IDEAS Productivity Team Announces Software Events

The IDEAS Productivity Team and others in the HPC community are organizing software-related events at SC20, Nov. 9-19. IDEAS is a family of projects supported by the U.S. Department of Energy addressing challenges in HPC software development productivity and software sustainability in computational science and engineering. One of them, IDEAS-ECP, is supported by DOE’s Exascale Computing Project to […]

Getting to Exascale: Nothing Is Easy

In the weeks leading to today’s Exascale Day observance, we set ourselves the task of asking supercomputing experts about the unique challenges, the particularly vexing problems, of building a computer capable of 1,000,000,000,000,000,000 (a quintillion, or 10^18) calculations per second. Readers of this publication might guess, given Intel’s trouble producing the 7nm “Ponte Vecchio” GPU for its delayed Aurora system for Argonne National Laboratory, that compute is the toughest exascale nut to crack. But according to the people we interviewed, the difficulties of engineering exascale-class supercomputing run the systems gamut. As we listened to exascale’s daunting litany of technology difficulties…

Exascale Day: Goodyear’s CTO Talks Exascale’s Coming Industrial Design Advantages

It’s Exascale Awareness Week, the lead-up to Exascale Day this Sunday, Oct. 18 (10^18), and while we mainly hear about the anticipated benefits of exascale-class computing for scientific discovery, there is also an economic competitiveness motive for exascale. In this video produced by DOE’s Exascale Computing Project (ECP), Goodyear’s Chief Technology Officer Chris […]

DOE Under Secretary for Science Dabbar’s Exascale Update: Frontier to Be First, Aurora to Be Monitored

As Exascale Day (October 18) approaches, U.S. Department of Energy Under Secretary for Science Paul Dabbar has commented on the hottest exascale question of the day: which of the country’s first three systems will be stood up first? In a recent, far-reaching interview with us, Dabbar confirmed what has been expected for more than two months: the first U.S. exascale system will not, as planned, be the Intel-powered Aurora system at Argonne National Laboratory. It will instead be HPE-Cray’s Frontier, powered by AMD CPUs and GPUs and designated for Oak Ridge National Laboratory.

SC20 Keynote: Climate Science in the Age of Exascale with Dr. Bjorn Stevens

SC20 has announced its keynote speaker, Prof. Bjorn Stevens of the Max-Planck-Institute for Meteorology in Germany, who will speak on Monday, November 16, starting a day of plenary talks and panels at the virtual conference. Prof. Stevens will discuss how exascale computing is impacting two opportunities that are changing the face of climate science — […]

Radio Free HPC: RISC-V Deep Dive, CTO Interview

In this special edition of Radio Free HPC, we interview Mark Himelstein, RISC-V International Chief Technical Officer, looking at RISC-V from all angles, discussing how the open source ISA is used today and how it might be used tomorrow. And there’s a lot to consider. The European Processor Initiative has selected RISC-V as their accelerator of choice for their first exascale machines. The first system is expected in the 2022-2023 timeframe and will couple RISC-V accelerators with Arm processors.

Video: Exascale for Earth System Modeling of Storms, Droughts, Sea Level Rise

In this interview, award-winning scientist Mark Taylor of Sandia National Laboratories’ Center for Computing Research talks about the use of exascale-class supercomputers – to be delivered to three U.S. Department of Energy national labs in 2021 – for large-scale weather and water resource forecasting. Taylor is chief computational scientist for the DOE’s Energy Exascale […]

The Future of HPC for Manufacturing

In this sponsored post, our friends over at Lenovo and Intel explain how High-Performance Computing (HPC) has gone mainstream and forever changed the world of engineering and design. The adoption of HPC systems with Computer-Aided Engineering (CAE) software for high-fidelity modeling and simulation is on the rise among the automotive, aerospace, discrete manufacturing, and healthcare robotics industries, to name a few.

The Hyperion-insideHPC Interviews: Argonne’s David Martin Talks Industrial HPC and Accessible Exascale

David Martin manages the Industry Partnerships and Outreach program at Argonne National Laboratory, and in this interview he talks about the never-ending, always-expanding demand for more power from HPC users – and the potential that the upcoming exascale systems, including Argonne’s Aurora, may be more accessible than might be expected. “I think that […]

Exascale Exasperation: Why DOE Gave Intel a 2nd Chance; Can Nvidia GPUs Ride to Aurora’s Rescue?

The most talked-about topic in HPC these days – another Intel chip delay and therefore delay of the U.S.’s flagship Aurora exascale system – is something no one directly involved wants to talk about. Not Argonne National Laboratory, where Intel was to install Aurora in 2021; not the Department of Energy’s Exascale Computing Project, guiding […]