

Exascale Computing Project Software Activities

Mike Heroux from Sandia National Labs gave this talk at the HPC User Forum. “The Exascale Computing Project is accelerating delivery of a capable exascale computing ecosystem for breakthroughs in scientific discovery, energy assurance, economic competitiveness, and national security. The goal of the ECP Software Technology focus area is to develop a comprehensive and coherent software stack that will enable application developers to productively write highly parallel applications that can portably target diverse exascale architectures.”

Video: EuroHPC – The EU Strategy in HPC

In this video from the HPC User Forum in Santa Fe, Leonardo Flores from the European Commission presents: EuroHPC – The EU Strategy in HPC. “EuroHPC is a collaboration between European countries and the European Union to develop and support exascale supercomputing by 2022/2023. EuroHPC will permit the EU and participating countries to coordinate their efforts and share resources with the objective of deploying in Europe a world-class supercomputing infrastructure and a competitive innovation ecosystem in supercomputing technologies, applications and skills.”

Podcast: How the EZ Project is Providing Exascale with Lossy Compression for Scientific Data

In this podcast, Franck Cappello from Argonne describes EZ, an effort to compress and reduce the enormous scientific data sets that some of the ECP applications are producing. “There are different approaches to solving the problem. One is called lossless compression, a data-reduction technique that doesn’t lose any information or introduce any noise. The drawback with lossless compression, however, is that scientific floating-point values are very difficult to compress: the best effort reduces data by a factor of two. In contrast, ECP applications seek a data reduction factor of 10, 30, or even more.”
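Error-bounded lossy compression closes that gap by quantizing values to a user-specified tolerance before entropy coding. The following is a minimal sketch of that idea, not the actual EZ/SZ algorithm; the tolerance and the synthetic test field are invented for the example.

```python
# Minimal sketch of error-bounded lossy compression via uniform
# quantization; illustrative only, not the actual EZ/SZ algorithm.
import zlib
import numpy as np

def compress(data: np.ndarray, abs_err: float) -> bytes:
    # Map each value to an integer bin of width 2*abs_err, so the
    # reconstruction error can never exceed abs_err.
    bins = np.round(data / (2 * abs_err)).astype(np.int32)
    return zlib.compress(bins.tobytes())

def decompress(blob: bytes, abs_err: float) -> np.ndarray:
    bins = np.frombuffer(zlib.decompress(blob), dtype=np.int32)
    return bins.astype(np.float64) * (2 * abs_err)

field = np.sin(np.linspace(0, 100, 1_000_000))   # smooth synthetic field
blob = compress(field, abs_err=1e-4)
err = np.max(np.abs(decompress(blob, 1e-4) - field))
assert err <= 1e-4 + 1e-12                       # bound holds up to fp rounding
print(f"reduction factor: {field.nbytes / len(blob):.1f}x")
```

Because smooth scientific fields quantize to slowly varying integers, the subsequent lossless stage compresses them far better than raw floating-point bits, which is how lossy schemes reach reduction factors well beyond the roughly 2x lossless ceiling.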

Video: Enabling Applications to Exploit SmartNICs and FPGAs

Sean Hefty and Venkata Krishnan from Intel gave this talk at the OpenFabrics Workshop in Austin. “Advances in SmartNICs/FPGAs with integrated network interfaces allow application-specific computation to be accelerated alongside communication. Participants will learn about the potential for SmartNIC/FPGA application acceleration and will have the opportunity to contribute application expertise and domain knowledge to a discussion of how SmartNIC/FPGA acceleration technology can bring individual applications into the Exascale era.”
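The underlying pattern, computation proceeding alongside communication, can be sketched on the host side with non-blocking MPI; a SmartNIC/FPGA pushes such work onto the network device itself. This is a conceptual sketch using mpi4py, not a SmartNIC API; the two-rank layout and buffer sizes are assumptions for the example.

```python
# Conceptual sketch of compute/communication overlap with non-blocking
# MPI (mpi4py); a SmartNIC/FPGA offloads this kind of work onto the NIC.
# Run with: mpiexec -n 2 python overlap.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
peer = 1 - rank                          # partner rank (assumes 2 ranks)

send_buf = np.full(1_000_000, float(rank))
recv_buf = np.empty_like(send_buf)

# Post the exchange, then do independent work while the bytes move.
reqs = [comm.Isend(send_buf, dest=peer),
        comm.Irecv(recv_buf, source=peer)]
local = np.sqrt(send_buf + 1.0)          # computation overlapped with transfer
MPI.Request.Waitall(reqs)                # communication now complete

result = local + recv_buf                # safe to consume received data
```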

Video: Cray Announces First Exascale System

In this video, Cray CEO Pete Ungaro announces Aurora, Argonne National Laboratory’s forthcoming supercomputer and the United States’ first exascale system. Ungaro offers some insight on the technology, what makes exascale performance possible, and why we’re going to need it. “It is an exciting testament to Shasta’s flexible design and unique system and software capabilities, along with our Slingshot interconnect, which will be the foundation for Argonne’s extreme-scale science endeavors and data-centric workloads. Shasta is designed for this transformative exascale era and the convergence of artificial intelligence, analytics, and modeling and simulation, all at the same time on the same system, at incredible scale.”

PASC19 Preview: Brueckner and Dr. Curfman McInnes to Moderate Exascale Panel Discussion

Today the PASC19 Conference announced that Dr. Lois Curfman McInnes from Argonne and Rich Brueckner from insideHPC will moderate a panel discussion with thought leaders focused on software challenges for Exascale and beyond, mixing “big picture” and technical discussions. “McInnes will bring her unique perspective on emerging Exascale software ecosystems to the table, while Brueckner will illustrate the benefits of Exascale to world-wide audiences.”

European Commission Funds 10 Centers of Excellence for HPC

The European Commission has approved a multi-million-euro funding program for developing applications in High Performance Computing. The funds will be used to help build HPC Centers of Excellence in 10 member countries. From computing the reduction of noise and fuel consumption for passenger airplanes to assessing the effects of climate change, applications in High Performance […]

Video: Intel and Cray to Build First USA Exascale Supercomputer for DOE in 2021

Today Intel announced plans to deliver the first exaflop supercomputer in the United States. The Aurora supercomputer will be used to dramatically advance scientific research and discovery. The contract is valued at more than $500 million, and the system will be delivered to Argonne National Laboratory by Intel and sub-contractor Cray in 2021. “Today is an important day not only for the team of technologists and scientists who have come together to build our first exascale computer, but also for all of us who are committed to American innovation and manufacturing,” said Bob Swan, Intel CEO.

Podcast: ECP EXAALT Program Extends the Reach of Molecular Dynamics

Computationally, EXAALT’s goal is to develop a comprehensive molecular dynamics capability for exascale. “The user should be able to say, ‘I’m interested in this kind of system size, timescale, and accuracy,’ and directly access the regime without being constrained by the usual scaling paths of current codes,” said Danny Perez of Los Alamos National Laboratory (LANL) and the EXAALT team.
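At its core, a molecular dynamics code repeatedly evaluates interatomic forces and integrates Newton's equations of motion, and it is this kernel that EXAALT aims to scale across system size, timescale, and accuracy. The following toy sketch shows the basic computation, a Lennard-Jones system with velocity-Verlet integration in reduced units; it is illustrative only, not EXAALT's methods, and the atom count and parameters are invented.

```python
# Toy molecular dynamics sketch: Lennard-Jones forces plus velocity-Verlet
# integration in reduced units. Illustrative only; not EXAALT's methods.
import numpy as np

def lj_forces(pos: np.ndarray) -> np.ndarray:
    # All-pairs O(N^2) forces; production codes use neighbor lists.
    disp = pos[:, None, :] - pos[None, :, :]       # pairwise displacements
    r2 = np.sum(disp**2, axis=-1)
    np.fill_diagonal(r2, np.inf)                   # exclude self-interaction
    inv6 = r2**-3.0                                # (1/r)^6
    mag = 24.0 * (2.0 * inv6**2 - inv6) / r2       # -(dU/dr)/r for LJ
    return np.sum(mag[..., None] * disp, axis=1)

def step(pos, vel, dt=1e-3):
    # One velocity-Verlet update with unit atomic mass.
    f = lj_forces(pos)
    vel_half = vel + 0.5 * dt * f
    pos = pos + dt * vel_half
    return pos, vel_half + 0.5 * dt * lj_forces(pos)

# 64 atoms on a cubic lattice near the LJ equilibrium spacing.
pos = 1.2 * np.indices((4, 4, 4)).reshape(3, -1).T.astype(float)
vel = np.zeros_like(pos)
for _ in range(100):
    pos, vel = step(pos, vel)
```

The tension EXAALT addresses is visible even here: each step advances physical time by a tiny increment, so reaching long timescales requires either enormous step counts or the accelerated-dynamics methods the project develops.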

Video: The Game-Changing Post-K Supercomputer for HPC, Big Data, and AI

Satoshi Matsuoka from RIKEN gave this talk at the Rice Oil & Gas Conference. “Rather than focusing on double-precision flops, which are of lesser utility, Post-K, especially its A64FX processor and Tofu-D network, is designed to sustain extreme bandwidth on realistic applications, including those for oil and gas such as seismic wave propagation, CFD, and structural codes, besting its rivals by several factors in measured performance. Post-K is slated to perform 100 times faster on some key applications compared with its predecessor, the K-Computer, and will also likely be the premier big data and AI/ML infrastructure.”
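The sustained bandwidth that quote emphasizes is conventionally probed with a STREAM-style triad kernel. Below is a rough sketch of such a measurement; the array size is arbitrary, and the byte accounting matches this two-pass NumPy version rather than the fused kernel the real STREAM benchmark uses.

```python
# Rough STREAM-style triad measurement in NumPy: a = b + scalar * c.
# A fused C kernel (as in the real STREAM benchmark) would touch only
# three arrays per iteration; this two-pass version touches five.
import time
import numpy as np

n = 20_000_000                     # ~160 MB per double-precision array
a = np.empty(n)
b = np.ones(n)
c = np.ones(n)
scalar = 3.0

t0 = time.perf_counter()
np.multiply(c, scalar, out=a)      # pass 1: read c, write a
np.add(a, b, out=a)                # pass 2: read a and b, write a
dt = time.perf_counter() - t0

bytes_moved = 5 * n * 8            # 5 array traversals of 8-byte doubles
print(f"effective bandwidth: {bytes_moved / dt / 1e9:.1f} GB/s")
```

Kernels like this, rather than dense linear algebra, are the regime where bandwidth-oriented designs such as Post-K's are expected to shine.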