By Mike Bernhardt, Founder and CEO, Team Libra
We stand on the edge of a towering precipice, staring down at the wild, uncharted river where AI and high-performance computing converge. Below us, the current surges with breathtaking speed—rapids of innovation, whirlpools of disruption. The world is moving faster than ever, and we can’t just watch from the cliff. To navigate this torrent, we need a skilled crew, a resilient vessel, and the courage to launch. Because once we’re in the water, there’s no turning back.
To ensure the nation’s socio-economic well-being in this new, uncharted era of artificial intelligence (AI) and high-performance computing (HPC), the rapid and accurate advancement of scientific discovery must be treated not merely as a strategic priority but as a national imperative.
For decades, several federal agencies have contributed significantly to advancing the nation’s computational capabilities, but one agency in particular, the U.S. Department of Energy (DOE), has provided dedicated leadership and funding to catalyze progress in advanced scientific computing. By harnessing the world’s most powerful computing systems and the expertise of many of the world’s brightest scientists and researchers, DOE has built a track record as a strong partner to industry, fueling sustained innovation with Cray, HPE, NVIDIA, and others.
This long-term investment has led to major breakthroughs in fusion energy, materials science, national security, and many other areas of national importance.
Historically, advances in computer hardware have played a central role in accelerating scientific discovery. But DOE recognized early on that hardware alone isn’t enough. Real progress requires a tightly integrated approach in which hardware innovation is developed in tandem with advanced software systems and tools. The success of the Exascale Computing Project is a testament to that vision. DOE’s leadership not only pushed the boundaries of hardware but also prioritized the development of scalable software and cross-disciplinary teams. By aligning these efforts, the nation’s first exascale systems were ready to deliver scientific breakthroughs from day one. It’s a powerful model, and one that deserves to be both repeated and expanded.
To make a point: many people I’ve interacted with don’t think of Large Language Models (LLMs) as software. But they are, in fact, software. They mark a new class of software, one that combines machine learning, encoded knowledge, and probabilistic reasoning in ways that traditional programs simply can’t. LLMs will play a key role in next-generation scientific, technical, and financial software ecosystems.
The National Lab ecosystem serves as a vital bridge — an honest broker uniquely positioned between government, academia, and industry. It understands the distinct roles and needs of each, fostering collaboration while remaining steadfast in its responsibility to the taxpayer.
Looking ahead, the central challenge is clear: we must streamline and accelerate the partnering process. In an era defined by rapid change and constant disruption, particularly in scientific and technical computing, our ability to adapt depends on it.
As we enter a complex and deeply interconnected era—one increasingly shaped by artificial intelligence in all its forms—we must reimagine the foundational role of scientific software. What’s needed is a next-generation ecosystem that brings together the full talent of the scientific and technical computing community to co-develop AI-enabled tools that accelerate cross-disciplinary team science. This is not just an opportunity—it is essential to sustaining the nation’s technological leadership and must be treated as a top funding priority.
Scientific software must be funded and nurtured as a living ecosystem, one as essential to mission success as the nation’s investments in facilities, instruments, and talent. Recognizing this is not optional; it is essential. While DOE will most likely lead this development, the effective integration of AI and HPC capabilities must take hold across government, academia, and industry to secure the future of innovation.
AI is not self-evolving in any form that will benefit the nation without proper guidance. The contribution of the human workforce is more important than ever.
At the core of a thriving scientific software ecosystem are the cross-disciplinary professionals who develop, maintain, and scale a growing suite of tools and applications—research software engineers, domain scientists, data curators, user interface specialists, and infrastructure architects. Yet these critical roles often fall through the cracks of traditional funding models.
To address this gap, funding priorities must expand to recognize and support this essential workforce. Funding sources, whether federal agencies or corporate and private investors, must invest in sustainable career paths that embed this critical expertise into long-term programmatic support. Doing so will ensure lasting benefits for the nation’s research enterprise.
Moreover, next-generation research increasingly depends on heterogeneous computing environments, from petascale and exascale platforms to edge devices. Any investment in HPC hardware must be matched, at a minimum, by a proportional commitment to the next-generation software layers that bridge human ingenuity with machine performance. Otherwise, we risk building engines of discovery that are ineffective or unreliable for lack of usable, adaptable, and maintainable interfaces.
As artificial intelligence and machine learning become deeply embedded in the nation’s scientific workflows (a transformation already well underway), data integrity, provenance, and reproducibility are increasingly viewed as foundational in many sectors. For others, particularly in fiercely competitive environments, speed and agility remain paramount. These priorities need not conflict; instead, they highlight the need for stronger public-private partnerships to address the challenges of AI adoption and implementation. By working together, we can shape AI and HPC software ecosystems that serve both national interests and market innovation, achieving the best of both worlds in a time of rising stakes.
Finally, contrary to much of the public discourse suggesting that AI can replace humans or function without human intervention, funding organizations must place as much emphasis on the human adoption curve as on building new systems. AI should be seen and used as a tool that supports the human side of the ecosystem. If our software ecosystems are too complex or designed for use by an elite few, they will fail to have the desired impact. Funding structures should prioritize usability, access, documentation, and training as core deliverables, not peripheral considerations.
Collaboration Among Government, Industry, and Academia on Hardware, Software, and Human Assets Is Paramount to the Effective Implementation, Adoption, and Success of Future AI and HPC Systems.
Now is the time to reimagine how we support the new era of scientific software development. Rather than relying solely on traditional public funding, AI and HPC stakeholders should explore new strategies and simplified models that foster collaborative investment among government, academia, and the private sector, with appropriate safeguards and incentives. These partnerships could accelerate innovation while strengthening the nation’s technological edge and socio-economic resilience.
Scientific software is far more than an academic concern—it’s a foundational driver of national competitiveness. From accelerating AI-driven breakthroughs in clean energy and materials science to strengthening our agricultural systems and national security, its impact reaches across critical sectors of society.
The question is no longer whether we can afford to invest in a robust scientific software ecosystem, but whether we can afford not to.
Mike Bernhardt is Founder, CEO & Chief Strategist of the marketing and strategic communications firm Team Libra, LLC. From 2016 to 2024, he was Senior Communications Lead and Leadership Advisor for the Exascale Computing Project.