
Podcast: A Look inside the El Capitan Supercomputer coming to LLNL

In this podcast, the Radio Free HPC team looks at some of the more interesting configuration aspects of the pending El Capitan exascale supercomputer coming to LLNL in 2023. “Dan talks about the briefing he received on the new Lawrence Livermore El Capitan system to be built by HPE/Cray. This new $600 million system will be fueled by the AMD Genoa processor coupled with AMD’s Instinct GPUs. Performance should come in at TWO 64-bit exaflops peak, which is very, very sporty.”

Exascale Computing Project Releases Milestone Report

The US Department of Energy’s Exascale Computing Project (ECP) has published a milestone report that summarizes the status of all thirty ECP Application Development (AD) subprojects at the end of fiscal year 2019. “This report contains not only an accurate snapshot of each subproject’s current status but also represents an unprecedentedly broad account of experiences porting large scientific applications to next-generation high-performance computing architectures.”

LLNL Researchers aid COVID-19 response in anti-viral research

Backed by five high performance computing (HPC) clusters and years of expertise in vaccine and countermeasure development, a COVID-19 response team of LLNL researchers from various disciplines has used modeling and simulation, along with machine learning, to identify about 20 initial yet promising antibody designs from a nearly infinite set of potential designs, and to examine millions of small molecules that could have anti-viral properties. The candidates will still need to be synthesized and experimentally tested, which Lab researchers cautioned could take time, but progress is being made.
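
As an illustrative aside, the sketch below shows the general shape of ML-guided virtual screening: featurize a large pool of candidate molecules, score them with a trained model, and keep only the top hits for experimental follow-up. The random "fingerprints" and linear scoring model are placeholders for illustration only; they are not LLNL's actual featurization, models, or pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

n_candidates = 100_000   # real screening campaigns run into the millions
n_bits = 256             # placeholder fingerprint length

# Placeholder fingerprints: one bit vector per candidate molecule.
fingerprints = rng.integers(0, 2, size=(n_candidates, n_bits)).astype(float)

# Placeholder "trained" model: a linear scorer standing in for a real
# binding-affinity or anti-viral-activity predictor.
weights = rng.normal(size=n_bits)
scores = fingerprints @ weights

# Keep roughly 20 top-scoring candidates for synthesis and experimental testing.
top = np.argsort(scores)[::-1][:20]
print("Candidate indices to hand off for experiments:", top.tolist())
```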

IBM & DOE Launch COVID-19 High Performance Computing Consortium

Today, IBM, in collaboration with the DOE, launched the COVID-19 High Performance Computing Consortium. “The consortium brings together an unprecedented amount of supercomputing power—16 systems with more than 330 petaflops, 775,000 CPU cores, 34,000 GPUs, and counting—to help researchers everywhere tackle this global challenge. These high-performance computing systems allow researchers to run very large numbers of calculations in epidemiology, bioinformatics, and molecular modeling in hours or days, not weeks, months, or years.”

Podcast: Delivering Exascale Machine Learning Algorithms at the ExaLearn Project

In this Let’s Talk Exascale podcast, researchers from the ECP describe progress at the ExaLearn project. ExaLearn is focused on machine learning and related activities to inform the requirements for the pending exascale machines. “ExaLearn’s algorithms and tools will be used by the ECP applications, other ECP co-design centers, and DOE experimental facilities and leadership-class computing facilities.”

Podcast: How Community Collaboration Drives Compiler Technology at the LLVM Project

In this Let’s Talk Exascale podcast, Hal Finkel of Argonne National Laboratory describes how community collaboration is driving compiler infrastructure at the LLVM project. “LLVM is important to a wide swath of technology professionals. Contributions shaping its development have come from individuals, academia, DOE and other government entities, and industry, including some of the most prominent tech companies in the world, both inside and outside of the traditional high-performance computing space.”

Podcast: Helping Applications Use Future Architectures with First-Rate Discretization Libraries

In this Let’s Talk Exascale podcast, Tzanio Kolev from LLNL describes the work at the Center for Efficient Exascale Discretizations (CEED), one of six co-design centers within the Exascale Computing Project. “Discretization methods divide a large simulation into smaller components in preparation for computer analysis. CEED is ECP’s hub for partial differential equation discretizations on unstructured grids, providing user-friendly software, mathematical expertise, community standards, benchmarks, and miniapps as well as coordination between the applications, hardware vendors, and Software Technology (ST) efforts in ECP.”
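
To make the quoted idea concrete, here is a deliberately simplified, generic example of discretization: converting the 1D Poisson equation -u''(x) = f(x) with zero boundary values into a linear system on a uniform grid. CEED itself targets high-order finite element discretizations on unstructured grids, so this toy structured-grid sketch only illustrates how a PDE becomes algebra a computer can solve.

```python
import numpy as np

# Discretize -u''(x) = f(x) on (0, 1) with u(0) = u(1) = 0 using second-order
# finite differences on a uniform grid. A toy example, not CEED's high-order
# unstructured-grid machinery.
n = 100                          # number of interior grid points
h = 1.0 / (n + 1)                # grid spacing
x = np.linspace(h, 1.0 - h, n)   # interior node coordinates

f = np.sin(np.pi * x)            # example right-hand side

# Tridiagonal finite-difference Laplacian: (2, -1, -1) stencil scaled by 1/h^2.
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

u = np.linalg.solve(A, f)        # the discrete solution

# Compare against the exact solution u(x) = sin(pi x) / pi^2.
u_exact = np.sin(np.pi * x) / np.pi**2
print("max discretization error:", np.max(np.abs(u - u_exact)))
```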

Moving Massive Amounts of Data across Any Distance Efficiently

Chin Fang from Zettar gave this talk at the Rice Oil & Gas Conference. “The objective of this talk is to present two ongoing projects aimed at improving and ensuring highly efficient bulk transfer or streaming of massive amounts of data over digital connections across any distance. It examines the current state of the art, a few very common misconceptions, the differences among the three major types of data movement solutions, a current initiative attempting to improve data movement efficiency from the ground up, and another multi-stage project that shows how to conduct long-distance, large-scale data movement at speed and scale internationally.”
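
For a sense of why "any distance" matters, the back-of-the-envelope calculation below (not from the talk; the link speed and latency are illustrative values) shows the bandwidth-delay product a single stream must keep in flight to fill a long-haul link, and how long a petabyte takes to move at different sustained efficiencies. This is one reason high-performance data movers rely on parallelism and careful end-to-end tuning.

```python
def bandwidth_delay_product_bytes(gbit_per_s: float, rtt_ms: float) -> float:
    """Bytes that must be in flight to keep the link full (bandwidth x RTT)."""
    return gbit_per_s * 1e9 / 8 * (rtt_ms / 1000.0)

def transfer_hours(terabytes: float, gbit_per_s: float, efficiency: float = 1.0) -> float:
    """Wall-clock hours to move a dataset at a given sustained efficiency."""
    seconds = terabytes * 1e12 * 8 / (gbit_per_s * 1e9 * efficiency)
    return seconds / 3600.0

# A 100 Gb/s transcontinental link with ~80 ms round-trip time needs about
# 1 GB in flight per stream, hence the use of many parallel streams.
print(f"BDP: {bandwidth_delay_product_bytes(100, 80) / 1e9:.2f} GB in flight")

# Moving 1 PB at 100 Gb/s: ~22 hours at line rate, ~44 hours at 50% efficiency.
print(f"1 PB at 100% efficiency: {transfer_hours(1000, 100):.1f} h")
print(f"1 PB at  50% efficiency: {transfer_hours(1000, 100, 0.5):.1f} h")
```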

AMD to Power 2 Exaflop El Capitan Supercomputer from HPE

Today HPE announced that it will deliver the world’s fastest exascale-class supercomputer for NNSA at a record-breaking speed of 2 exaflops – 10X faster than today’s most powerful supercomputer. “El Capitan is expected to be delivered in early 2023 and will be managed and hosted by LLNL for use by the three NNSA national laboratories: LLNL, Sandia, and Los Alamos. The system will enable advanced simulation and modeling to support the U.S. nuclear stockpile and ensure its reliability and security.”

Stepping up Efficiency for Exascale with FPGAs at the LEGaTO Project

In this special guest feature from Scientific Computing World, Robert Roe writes that European researchers have developed a framework to boost the energy efficiency of CPU, GPU, and FPGA resources. “LEGaTO (Low Energy Toolset for Heterogeneous Computing) is one such project, with the lofty aim of developing a programming framework to support heterogeneous systems of CPU, GPU and FPGA resources that can offload specific tasks to different acceleration technologies through its own runtime system.”
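
The sketch below illustrates, in generic terms, the kind of decision such a runtime makes: pick the backend (CPU, GPU, or FPGA) with the lowest estimated energy cost for each offloaded task. The backend names, cost model, and numbers are invented for illustration and are not LEGaTO's actual interfaces or measurements.

```python
from dataclasses import dataclass

@dataclass
class Backend:
    name: str
    joules_per_gflop: float   # illustrative energy-cost model, not measured data
    available: bool = True

BACKENDS = [
    Backend("cpu",  joules_per_gflop=0.50),
    Backend("gpu",  joules_per_gflop=0.10),
    Backend("fpga", joules_per_gflop=0.03),
]

def pick_backend(task_gflops: float) -> Backend:
    """Choose the available backend with the lowest estimated energy for the task."""
    candidates = [b for b in BACKENDS if b.available]
    return min(candidates, key=lambda b: b.joules_per_gflop * task_gflops)

def offload(task_name: str, task_gflops: float) -> None:
    backend = pick_backend(task_gflops)
    energy = backend.joules_per_gflop * task_gflops
    print(f"{task_name}: offloading {task_gflops:.0f} GFLOP to "
          f"{backend.name} (~{energy:.1f} J estimated)")

offload("dense-matmul", 500.0)
offload("stream-filter", 50.0)
```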