At the HPC User Forum: 2 Full Days in Tucson of HPC and AI


This week, HPC industry analyst firm Hyperion Research hosted the HPC User Forum, a supercomputing conference with an end-user emphasis held four times a year (twice in the U.S. and twice internationally), offering two intensive days of presentations and panels involving commercial and government users, along with hardware and software vendors.

The second of the two User Forum events held this year in the U.S. took place September 6 and 7 in Tucson at the Loews Ventana Canyon resort, and it was almost as much a social and networking gathering as it was a venue to see, hear and learn about advanced HPC and AI deployment strategies.

Not surprisingly, the big theme running throughout the conference was AI: generative AI, chips that enable AI, AI in the cloud, on premises and at the edge, AI implementation planning, AI algorithms, AI practices, pitfalls and ethics. AI had a pervasive presence over the two-day event.

If you haven’t been to an HPC User Forum, you should know it’s session-packed, starting early and going late, so if you want recreational time at the venue, tack on a day before the conference starts or add an extra day. Breakfast begins at 7:15, presentations start at 8 and end at 5:30. Then there’s a dinner keynote.

In all, there were more than 30 sessions ranging in length from 15 minutes to a three-hour panel discussion on energy efficiency in supercomputing. Most presentations were a half hour long, keeping the agenda moving at a good clip.

The conference was kicked off with an HPC market update from Hyperion Research CEO Earl Joseph, who discussed the prevalence of generative AI adoption throughout government and industry, along with industry growth, HPC in the cloud, quantum computing, exascale and leadership-class supercomputing and other trends.

The theme for the first morning’s sessions was “New Advances in Using HPC and AI Combined: Examples and Successes Using AI, Generative AI and Large Language Models,” featuring a presentation from Peptide Therapeutics, a discussion on AI and earth sciences from the National Energy Research Scientific Computing Center (NERSC) and Lawrence Berkeley National Laboratory, a talk from Hyperion’s Bob Sorensen on how LLMs are evolving, and a look at Google’s creation of new HPC and AI solutions.

Gary Marchant of Arizona State University spoke on how AI is disrupting the legal industry – including the question of whether a lawsuit can be brought when an AI system makes a decision on its own.

This was followed by a discussion of the exascale computing era from Doug Kothe, chief research officer and associate laboratories director for advanced science and technology at Sandia National Laboratories. Kothe, former director of the Dept. of Energy’s Exascale Computing Project, was previously associate laboratory director of the Computing and Computational Sciences Directorate at Oak Ridge National Lab.

Next came an HPC site update from Los Alamos National Lab, delivered by HPC data storage expert Gary Grider, and an Exascale Computing Project update from Mike Heroux of Sandia National Lab, the project’s director of software technology.

Ti Leggett of the Argonne Leadership Computing Facility delivers an Aurora exascale update

Suzy Tichenor, director of the Industrial Partnership Program for the Computing and Computational Sciences Directorate at Oak Ridge National Lab, then led a panel discussion on sustainability and energy efficiency in leadership-class data centers that included managers from Lawrence Berkeley Lab, the Texas Advanced Computing Center, NASA, the UK’s Distributed Research using Advanced Computing (DiRAC) project and Argonne National Lab.

Steve Chien of NASA’s Jet Propulsion Laboratory spoke on “AI in Space and the Hunt for Life beyond Earth,” a talk that included detailed accounts of how robots that explore planets (Mars) and moons (ours and Jupiter’s Europa) go about their tasks. (After the dinner presentation, Chien told this reporter that on the question of whether intelligent life exists elsewhere, he’s an agnostic.)

Day two of the conference began with presentations on leadership-class supercomputing and exascale updates from Ti Leggett of Argonne Lab on the installation of the Aurora exascale system; on European leadership computing from the EU’s Leonardo Flores; on the SiPearl European processor from Craig Prunty; on the Perlmutter AI supercomputer and NERSC’s next HPC procurement from Lawrence Berkeley Lab’s Nick Wright; and on HPC at the UK’s DiRAC effort from the project’s Simon Burbridge.

Hyperion’s Mark Nossokoff spoke on new research perspectives on sustainability, followed by several commentators discussing the impact of the Dept. of Energy’s HPC4Energy Innovation program and industrial partnerships. This included speakers from Lawrence Livermore, Oak Ridge, Sandia and National Renewable Energy (NREL) laboratories, along with Shell Oil, 8 Rivers and Procter & Gamble.

Later in the day, HPC site updates were provided by the U.S. Department of Homeland Security, KAUST (King Abdullah University of Science and Technology) of Saudi Arabia and NREL.

HPC vendor updates included Microsoft Azure, Amazon Web Services, HPE (including the Greenlake cloud service), Lenovo, Cognitive Science and Resources, and Google.

The second day’s dinner keynote came from Andrew Jones, Microsoft’s Future HPC and AI Capabilities Lead, who delivered far-ranging remarks on the impact of large language models, weather forecasting and HPC, and supercomputing in 2050.