
Credit: Eugène Delacroix
If it’s not too late to take stock of 2025 in HPC-AI, then holding up the Supercomputing Conference as a trends test site might be a good approach.
I’m no perennial* but I’ve been to about half of the 37 SCs, including all 11 since 2015. Looking back at that year’s conference, SC15, you could argue it was a transition point, a line of demarcation, between two HPC eras.
SC in 2015 still focused on traditional mod/sim scientific computing, and Intel was at the center, its CPUs powering 90-plus percent of HPC workloads. SC was a smaller event then, with the clubby feel of an annual reunion for the HPC community. The national labs and their next-gen leadership supercomputers were a major point of interest.
But change was in the air at SC15; you could sense it. Attendance overburdened the Austin facility where it was held, and there was a growing number of vendors marketing not just to the labs and academia but to the enterprise.
SC15 was the last hurrah for the era of Intel CPU- and scientific computing-centric HPC. The next few years saw the “big bang” of technology disaggregation: alternative architectures (NVIDIA GPUs) and non-Intel CPUs (AMD, ARM) were embraced, with AI increasingly in the workload mix. The spur for much of this was NVIDIA, whose GPUs enabled the development of AI at scale while also pushing other system components (interconnects, memory, storage, networks) to keep pace with GPU compute. The big bang has left HPC in a state of perpetually accelerating technology change, dominated by AI.
Jump ahead to last November and all of these trends were in evidence at SC25, only more so. Let’s run through some of the sectors where revolutionary advances are happening.
The supercomputing technology with the potential to bring about the greatest change is, of course, quantum computing – that is, practical, useful quantum. The good news is that the expected timeline for useful quantum has come down from being 10-20 years away to, maybe, seven to 10 years, possibly less. That said, at least one vendor, D-Wave, insists it has customers doing useful work with quantum today.
We met with a slew of quantum vendors at SC25, all of them convinced their development strategies hold the greatest promise for delivering usefulness in the shortest timeframes. These include Paris-based Alice and Bob, Quantum Motion, Q-CTRL, Classiq and Orca.
Another arena with potentially revolutionary impact is chip-to-chip interconnect – optical I/O and silicon photonics. The use of lasers rather than copper interconnects to move data is an estimated two or three years away, according to one of the sector’s leading lights, Keren Bergman of Columbia University (see our recent @HPCpodcast episode with her), who presented at SC25. The technology works; it’s a proven capability, one that could significantly increase compute performance while reducing energy consumption. The final challenge is integrating co-packaged optical I/O (CPO) into chip manufacturing processes for production at scale.
In this sector, we talked with Ayar Labs, Lightmatter and Broadcom.
Somewhere between quantum and optical I/O is optical computing, and to hear technologists, such as the folks at Lightsolver, talk about it, there’s great promise in using the light speed and parallelism of lasers to solve complex problems. They also argue that optical computing avoids the problems, such as extreme cooling, associated with some quantum modalities.
With the time and inclination, we could have spent most of the conference meeting with liquid cooling vendors pitching products for thermal control of servers and AI data centers – the construction and refurbishment of which is likely the world’s single hottest economic driver today.
The major impediment to AI data centers is access to electrical power, hence the growth of liquid cooling for reducing power requirements. An array of liquid cooling methodologies is on the market, but a theme at SC25 from the liquid coolers, such as Ecolab, is that deployment support services are as important as the liquid cooling products themselves. Among the liquid cooling companies we met with: Castrol ON (direct-to-chip and immersion cooling), Colder Products Company (quick disconnect couplings), CoolIT (Direct Liquid Cooling – DLC), Danfoss (DTC, leak-free couplings, compressors and other products) and Valvoline (DTC, immersion and heat exchange fluids).
We all know that data, data management and data storage, combined, are the bedrock of AI at scale; without the storage and management of enormous data stores, AI is not possible. A big theme in this arena is simplification of data management – making data management more manageable – including a unified view of, and access to, data wherever it may reside. We discussed these themes with Hammerspace, DDN, Quobyte, VDURA, Spectra Logic, Starfish and Arcitecta.
We met with several of the hardware vendors, including the always-interesting Liqid and their message around composable, disaggregated computing, which has the potential to cut energy use while fundamentally altering how computing components are arrayed.
Lenovo updated us on its HPC-AI data center server portfolio along with the company’s emphasis on reducing environmental impact (liquid cooling) and capabilities that support “AI trust,” about which we expect to hear more from them in 2026.
And Cirrascale discussed directions for their public and private dedicated, multi-GPU/neocloud solutions.
SC25 spotlighted potentially revolutionary technologies – long regarded, per the old tech joke, as “always (X) years away” – transitioning from aspirational toward realized solutions. As different as the industry of 2025 is from that of 2015, by 2030 it may be something very different altogether.
Long live the permanent revolution in HPC-AI!
* The SC Perennials: the founding generation of HPC professionals who have attended every SC conference since the first in 1988.



