Podcast: How Exascale Computing Could Help Boost Energy Production

In this podcast, Tom Evans, technical lead for ECP’s Energy Applications projects, discusses the motivations, progress, and aspirations on the path to exascale. “Evans describes the unprecedented calculations expected at the exascale, the example of taking wind energy simulations much further, and the movement toward the use of more-general-purpose programming tools.”

Video: Reliving the First Moon Landing with NVIDIA RTX Real-Time Ray Tracing

In this video, Apollo 11 astronaut Buzz Aldrin looks back at the first moon landing with help from a reenactment powered by NVIDIA RTX GPUs with real-time ray tracing technology. “The result: a beautiful, near-cinematic depiction of one of history’s great moments. That’s thanks to NVIDIA RTX GPUs, which allowed our demo team to create an interactive visualization that incorporates light in the way it actually works, giving the scene uncanny realism.”

Video: A Full Week of Fueling Innovation at ISC 2019

In this video, Michael Feldman of The Next Platform and Florina Ciorba of University of Basel look back on ISC 2019. “ISC 2019 brought together 3,573 HPC practitioners and enthusiasts interested in high performance computing, storage, networking, and AI. The theme of this year’s conference was Fueling Innovation.”

Podcast: HPC Market Eyes $44B in 5 Years

In this podcast, the Radio Free HPC team looks at new projections from Hyperion Research that have the HPC+AI market growing to $44B in 5 years. “The industry is hitting on all cylinders, benefiting from the Exascale race, AI coming to the enterprise, and its customary slow but always steady growth. The big news continues to be AI fundamentally bringing HPC closer to the mainstream of enterprise computing whether it is on-prem, in a co-location facility, or in a public cloud.”

Brent Gorda from Arm looks back at ISC 2019

In this special guest feature, Brent Gorda from Arm shares his impressions of ISC 2019 in Frankfurt. “From the perspective of Arm in HPC, it was an excellent event with several high-profile announcements that caught everyone’s attention. The Arm ecosystem was well represented with our partners visible on the show floor and around town.”

John Shalf from LBNL on Computing Challenges Beyond Moore’s Law

In this special guest feature from Scientific Computing World, Robert Roe interviews John Shalf from LBNL on the development of digital computing in the post-Moore’s Law era. “In his keynote speech at the ISC conference in Frankfurt, Shalf described the lab-wide project at Berkeley and the DOE’s efforts to overcome these challenges through the development and acceleration of the design of new computing technologies.”

Podcast: Tackling Massive Scientific Challenges with AI/HPC Convergence

In this Chip Chat podcast, Brandon Draeger from Cray describes the unique needs of HPC customers and how new Intel technologies in Cray systems are helping to deliver improved performance and scalability. “More and more, we are seeing the convergence of AI and HPC – users investigating how they can use AI to complement what they are already doing with their HPC workloads. This includes using machine and deep learning to analyze results from a simulation, or using AI techniques to steer where to take a simulation on the fly.”

HPC4Energy Innovation Program Funds Manufacturing Research

Today the High Performance Computing for Energy Innovation program (HPC4EI) announced nine public/private projects awarded more than $2 million from the DOE, with aims of improving energy production, enhancing or developing new material properties, and reducing energy usage in manufacturing. “We see increasing interest by both industry and the DOE Applied Energy Offices to leverage the world-class computational capabilities of leading national laboratories to address the significant challenges in improving the efficiency of our national energy footprint,” said HPC4EI Director Robin Miles.

Supercomputing Potential Impacts of a Major Quake by Building Location and Size

Researchers from Lawrence Livermore and Berkeley Lab are using supercomputers to quantify earthquake hazard and risk across the Bay Area. Their work is focused on the impact of high-frequency ground motion on thousands of representative buildings of different sizes spread across the region. “While working closely with the NERSC operations team in a simulation last week, we used essentially the entire Cori machine – 8,192 nodes, and 524,288 cores – to execute an unprecedented 5-hertz run of the entire San Francisco Bay Area region for a magnitude 7 Hayward Fault earthquake.”

Argonne Team Breaks Record with 2.9 Petabytes Globus Data Transfer

Today the Globus research data management service announced the largest single file transfer in its history: a team led by Argonne National Laboratory scientists moved 2.9 petabytes of data as part of a research project involving three of the largest cosmological simulations to date. “With exascale imminent, AI on the rise, HPC systems proliferating, and research teams more distributed than ever, fast, secure, reliable data movement and management are now more important than ever,” said Ian Foster.