insideHPC-Hyperion Research Interview: Argonne’s Rick Stevens on the Future of Everything – U.S. Post-Exascale Strategy, AI for Science, HPC in 2040 and an Aurora Install Update

In this interview conducted on behalf of HPC analyst firm Hyperion Research, we spoke with Argonne National Laboratory’s Rick Stevens about the present and future of HPC. The starting point for the conversation is a presentation Stevens gave at a Hyperion event in Washington on implementation of the CHIPS and Science Act, including his insights on the post-exascale build-out of an integrated network of U.S. supercomputing capacity (the Integrated Research Infrastructure, or IRI). We then look at AI for science and the use of data-driven modeling and simulation, an approach that shows the potential to deliver major performance gains for researchers…

Conventional Wisdom Watch: Matsuoka & Co. Take on 12 Myths of HPC

A group of HPC thinkers, including the estimable Satoshi Matsuoka of the RIKEN Center for Computational Science in Japan, has come together to challenge common lines of thought that, they say, have to varying degrees become accepted wisdom in HPC. In a paper entitled “Myths and Legends of High-Performance Computing,” appearing this week on arXiv […]

@HPCpodcast: Zettascale Is Coming – But What About Exascale?

After SC21, Patrick Kennedy at Serve the Home got a scoop when he met with Raja Koduri, SVP/GM of Intel’s Accelerated Computing Systems and Graphics (AXG) Group, to discuss Intel’s zettascale projections and plans, anticipating delivery by 2027. Or maybe 2028. By way of definition, a zettaflop is 1,000 exaflops, or one sextillion (10²¹) floating point operations per second, a thousand times more powerful than an exascale system. But is this realistic, considering exascale hasn’t quite been made official, at least not in the U.S.? Tune in to this episode of the @HPCpodcast and let us know what you think.
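For reference, the prefix arithmetic behind that definition works out as follows (a simple restatement of the figures above, not a projection about any particular system):

$$1\ \text{zettaFLOPS} = 10^{21}\ \text{FLOPS} = 10^{3} \times 10^{18}\ \text{FLOPS} = 1{,}000\ \text{exaFLOPS}$$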