Podcast: WarpX exascale application to accelerate plasma accelerator research

In this Let’s Talk Exascale podcast, researchers from LBNL discuss how the WarpX project is developing an exascale application for plasma accelerator research. “The new breeds of virtual experiments that the WarpX team is developing are not possible with current technologies and will bring huge savings in research costs, according to the project’s summary information available on ECP’s website. The summary also states that more affordable research will lead to the design of a plasma-based collider, and even bigger savings by enabling the characterization of the accelerator before it is built.”

Podcast: ZFP Project looks to Reduce Memory Footprint and Data Movement on Exascale Systems

In this Let’s Talk Exascale podcast, Peter Lindstrom from Lawrence Livermore National Laboratory describes how the ZFP project will help reduce the memory footprint and data movement on exascale systems. “To perform those computations, we oftentimes need random access to individual array elements,” Lindstrom said. “Doing that, coupled with data compression, is extremely challenging.”
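For context, zfp’s compressed-array classes are what make that combination possible: the array is compressed in small independent blocks that are decompressed on demand when an element is read or written, so code can index it like an ordinary dense array. Below is a minimal, hypothetical C++ sketch of that usage; the class name zfp::array3d, the fixed-rate constructor, and the element accessor follow the zfp C++ array API, but header paths and details vary between zfp releases, so treat this as an illustration rather than project code.

```cpp
// Hypothetical sketch: a 3D field stored in a fixed-rate zfp compressed array,
// which keeps individual elements randomly accessible while shrinking the
// in-memory footprint. Header name assumed; older releases use "zfparray3.h".
#include <cstdio>
#include "zfp/array3.hpp"

int main() {
    const size_t nx = 64, ny = 64, nz = 64;
    const double rate = 8.0;  // compressed bits per value (vs. 64 for a double)

    // Compressed 3D array of doubles; blocks are (de)compressed on demand.
    zfp::array3d field(nx, ny, nz, rate);

    // Random access to individual elements, as with an ordinary dense array.
    for (size_t k = 0; k < nz; k++)
        for (size_t j = 0; j < ny; j++)
            for (size_t i = 0; i < nx; i++)
                field(i, j, k) = double(i + j + k);

    std::printf("field(1,2,3) = %g, compressed size = %zu bytes\n",
                field(1, 2, 3), field.compressed_size());
    return 0;
}
```

At a rate of 8 bits per value, the array above takes roughly one eighth of the memory of the equivalent uncompressed double-precision grid, which is the kind of footprint and data-movement reduction the project is targeting.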

NERSC Rolls Out New Community File System for Next-Gen HPC

NERSC recently unveiled its new Community File System (CFS), a long-term data storage tier developed in collaboration with IBM that is optimized for capacity and manageability. “In the next few years, the explosive growth in data coming from exascale simulations and next-generation experimental detectors will enable new data-driven science across virtually every domain. At the same time, new nonvolatile storage technologies are entering the market in volume and upending long-held principles used to design the storage hierarchy.”

Podcast: Bright Computing forges eX3 at Simula Research Laboratory

The eX3 infrastructure allows Norwegian HPC researchers and their international collaborators to explore bleeding-edge hardware and software that will be instrumental to the coming generation of supercomputers. “Simula chose Bright Cluster Manager to provide comprehensive management of eX3, enabling the organization to administer its HPC platform as a single entity; provisioning the hardware, operating systems and workload managers from a unified interface.”

EPEEC Project Fosters Heterogeneous HPC Programming in Europe

The European Programming Environment for Programming Productivity of Heterogeneous Supercomputers (EPEEC) is a project that aims to combine European-made programming models and performance tools to relieve the burden of targeting highly heterogeneous supercomputers. The goal is to make researchers’ jobs easier by letting them use large-scale HPC systems more effectively.

Podcast: Accelerating the Adoption of Container Technologies for Exascale Computing

In this Let’s Talk Exascale podcast, Andrew Younge from Sandia National Laboratories describes the new SuperContainers project, which aims to deliver container and virtualization technologies for productivity, portability, and performance on the first exascale computing machines, which are planned for 2021. “Essentially, containers allow you to encompass your entire environment in a simple and reproducible way,” says Younge. “So not only do I have my container image that has my application and my entire software stack with it, I also have a manifest for how I got there. That’s a really important notion for many people.”

Video: What Does it Take to Reach 2 Exaflops?

In this video, Addison Snell from Intersect360 Research moderates a panel discussion on the El Capitan supercomputer. With a peak performance of over 2 Exaflops, El Capitan will be roughly 10x faster than today’s fastest supercomputer and more powerful than the current Top 200 systems — combined! “Watch this webcast to learn from our panel of experts about the National Nuclear Security Administration’s requirements and how the Exascale Computing Project helped drive the hardware, software, and collaboration needed to achieve this milestone.”

Scientists Look to Exascale and Deep Learning for Developing Sustainable Fusion Energy

Scientists from Princeton Plasma Physics Laboratory are leading an Aurora ESP project that will leverage AI, deep learning, and exascale computing power to advance fusion energy research. “With a suite of the world’s most powerful path-to-exascale supercomputing resources at their disposal, William Tang and colleagues are developing models of disruption mitigation systems (DMS) to increase warning times and work toward eliminating major interruption of fusion reactions in the production of sustainable clean energy.”

Exascale Computing Project Releases Milestone Report

The US Department of Energy’s Exascale Computing Project (ECP) has published a milestone report that summarizes the status of all thirty ECP Application Development (AD) subprojects at the end of fiscal year 2019. “This report contains not only an accurate snapshot of each subproject’s current status but also represents an unprecedentedly broad account of experiences porting large scientific applications to next-generation high-performance computing architectures.”

Video: The Cray Shasta Architecture for the Exascale Era

Steve Scott from HPE gave this talk at the Rice Oil & Gas Conference. “With the announcement of multiple exascale systems, we’re now entering the Exascale Era, marked by several important trends. This talk provides an overview of the Cray Shasta system architecture, which was motivated by these trends, and designed for this new heterogeneous, data-driven world.”