Porting a Particle-in-Cell Code to Exascale Architectures

By Nils Heinonen on behalf of the Argonne Leadership Computing Facility. As part of a series aimed at sharing best practices in preparing applications for Aurora, we highlight researchers’ efforts to optimize codes to run efficiently on graphics processing units. Take advantage of upgrades being made to high-level, non-machine-specific libraries and programming models. Developed in […]

Meet the Frontier Exascale Supercomputer: How Big Is a Quintillion?

Are all comparisons so odious, really? Some can illuminate, some can awe. HPE-Cray has put out an infographic about its Frontier exascale supercomputer, the U.S.’s first, scheduled to be shipped to Oak Ridge National Laboratory later this year. It’s got interesting comparisons that shed light on how big a quintillion is. Make that 1.5 quintillion, […]

Clacc – Open Source OpenACC Compiler and Source Code Translation Project

By Rob Farber, contributing writer for the Exascale Computing Project. Clacc is a Software Technology development effort, funded under the US Exascale Computing Project (ECP) PROTEAS-TUNE project, to develop production OpenACC compiler support for Clang and the LLVM Compiler Infrastructure Project (LLVM). The Clacc project page notes, “OpenACC support in Clang and LLVM will facilitate the programming of GPUs and other accelerators in DOE applications, […]
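
To give a sense of the directive-based model Clacc targets, here is a minimal, illustrative OpenACC loop (a sketch, not code from the Clacc project; the saxpy function name and data clauses are assumptions for the example). A compiler without OpenACC support simply ignores the pragma, while an OpenACC-capable compiler can offload the loop to a GPU or other accelerator:

```cpp
#include <vector>

// Illustrative saxpy kernel annotated with an OpenACC directive.
void saxpy(float a, const std::vector<float>& x, std::vector<float>& y) {
    const int n = static_cast<int>(x.size());
    const float* xp = x.data();
    float* yp = y.data();
    // Ask the compiler to offload the loop and manage the data movement.
    #pragma acc parallel loop copyin(xp[0:n]) copy(yp[0:n])
    for (int i = 0; i < n; ++i) {
        yp[i] = a * xp[i] + yp[i];
    }
}
```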

Ungaro Departs: New HPC Leadership at HPE and Dell as Companies Vie for Server Top Spot

HPE and Dell, locked in a neck-and-neck struggle for superiority in HPC servers, are also reshuffling their HPC leadership teams in ways that may reflect similar visions of HPC’s evolving position across IT. Last week, we reported on Dell’s newly installed HPC management group following the departure in early February of Thierry Pellegrino, […]

NERSC, ALCF, Codeplay Partner on SYCL GPU Compiler

The National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory (LBNL) and Argonne Leadership Computing Facility (ALCF) are working with Codeplay Software to enhance the LLVM SYCL GPU compiler capabilities for Nvidia A100 GPUs. The collaboration is designed to help NERSC and ALCF users, along with the HPC community in general, produce […]
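
For readers unfamiliar with SYCL, the sketch below shows roughly what the single-source programming model looks like; it is a generic SYCL 2020 example with assumed names and sizes, not code from the NERSC/ALCF/Codeplay collaboration:

```cpp
#include <sycl/sycl.hpp>

int main() {
    constexpr size_t n = 1 << 20;
    sycl::queue q;                                // default-selected device (e.g., a GPU)
    float* x = sycl::malloc_shared<float>(n, q);  // unified shared memory
    for (size_t i = 0; i < n; ++i) x[i] = 1.0f;

    // Submit a simple element-wise kernel to the device and wait for it.
    q.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
        x[i] *= 2.0f;
    }).wait();

    sycl::free(x, q);
    return 0;
}
```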

ECP: SuperLU Library Speeds Direct Solution of Large Sparse Linear Systems on HPC and Exascale Hardware

HPC and AI technology consultant and author Rob Farber wrote this article on behalf of the Exascale Computing Project. Lower-upper (LU) factorization is an important numerical algorithm used to solve systems of linear equations in science and engineering. These linear systems of equations can be expressed as a matrix, which is then passed to a […]
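
For context, the idea behind LU factorization (standard linear algebra, not anything specific to SuperLU) is to factor the matrix once and then solve the system with two inexpensive triangular solves:

```latex
A = LU, \qquad Ax = b \;\Longrightarrow\; Ly = b \ \text{(forward substitution)}, \quad Ux = y \ \text{(back substitution)}
```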

Spotting HPC and Exascale Bottlenecks with TAU CPU/GPU/MPI Profiler

Programmers cannot blindly guess which sections of their code might bottleneck performance. The problem is worsened when codes run across the variety of hardware platforms supported by the Exascale Computing Project (ECP): a section of code that runs well on one system might be a bottleneck on another. Differing hardware execution models further compound the performance challenges facing application developers; these range from the relatively restricted SIMD (Single Instruction Multiple Data) and SIMT (Single Instruction Multiple Thread) models used by GPUs to the more complex and general MIMD (Multiple Instruction Multiple Data) model of CPUs. Newer programming models such as Kokkos also introduce layers of abstraction and lambda functions whose anonymous nature can obscure low-level execution details. Differing memory systems inside a node and differences in the communications fabric that connects high-performance computing (HPC) nodes in a distributed supercomputer add even greater challenges in identifying performance bottlenecks during application performance analysis.
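
As a rough illustration of the abstraction described above (a sketch using the public Kokkos API, not code from TAU or any ECP application), the loop body below is a lambda whose actual execution target, CPU threads or a GPU backend, is decided by how Kokkos was configured; the call site itself says nothing about where the time goes, which is precisely why a profiler such as TAU is needed:

```cpp
#include <Kokkos_Core.hpp>

int main(int argc, char* argv[]) {
    Kokkos::initialize(argc, argv);
    {
        const int n = 1'000'000;                      // illustrative problem size
        Kokkos::View<double*> x("x", n), y("y", n);   // device-resident arrays
        Kokkos::deep_copy(x, 1.0);

        // The lambda is compiled for whichever backend Kokkos was built with
        // (OpenMP threads, CUDA, HIP, ...); the abstraction hides those details.
        Kokkos::parallel_for("axpy", n, KOKKOS_LAMBDA(const int i) {
            y(i) = 2.0 * x(i) + y(i);
        });
        Kokkos::fence();
    }
    Kokkos::finalize();
    return 0;
}
```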

Exascale Computing Project: Researchers Accelerate I/O with Novel Processing Method

January 25, 2021 — Researchers funded by the Exascale Computing Project (ECP) have delivered a novel method that addresses overloaded communication processes in MPI-IO by adding a second I/O request aggregation layer, according to ECP. Their method, called TAM (two-phase aggregation method), combines data on a node before performing additional internode optimizations […]
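
The sketch below illustrates the general idea of node-level aggregation before a collective write; it is a conceptual example built from standard MPI calls, assuming an equal number of ranks per node, and is not the ECP team’s TAM implementation:

```cpp
#include <mpi.h>
#include <vector>

int main(int argc, char* argv[]) {
    MPI_Init(&argc, &argv);
    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    // Split ranks into per-node communicators; rank 0 of each node aggregates.
    MPI_Comm node_comm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node_comm);
    int node_rank, node_size;
    MPI_Comm_rank(node_comm, &node_rank);
    MPI_Comm_size(node_comm, &node_size);

    const int count = 1024;                           // illustrative payload per rank
    std::vector<double> local(count, world_rank);

    // Phase 1: gather each node's data onto its aggregator rank.
    std::vector<double> gathered;
    if (node_rank == 0) gathered.resize(static_cast<size_t>(count) * node_size);
    MPI_Gather(local.data(), count, MPI_DOUBLE,
               gathered.data(), count, MPI_DOUBLE, 0, node_comm);

    // Phase 2: only the aggregators participate in the collective file write
    // (offsets assume the same number of ranks on every node).
    MPI_Comm agg_comm;
    MPI_Comm_split(MPI_COMM_WORLD, node_rank == 0 ? 0 : MPI_UNDEFINED,
                   world_rank, &agg_comm);
    if (node_rank == 0) {
        int agg_rank;
        MPI_Comm_rank(agg_comm, &agg_rank);
        MPI_File fh;
        MPI_File_open(agg_comm, "out.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
        MPI_Offset offset = static_cast<MPI_Offset>(agg_rank)
                            * gathered.size() * sizeof(double);
        MPI_File_write_at_all(fh, offset, gathered.data(),
                              static_cast<int>(gathered.size()),
                              MPI_DOUBLE, MPI_STATUS_IGNORE);
        MPI_File_close(&fh);
        MPI_Comm_free(&agg_comm);
    }
    MPI_Comm_free(&node_comm);
    MPI_Finalize();
    return 0;
}
```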

‘Let’s Talk Exascale’: How Supercomputing Is Shaking Up Earthquake Science

Supercomputing is bringing seismic change to earthquake science. A field that has historically made predictions by looking backward is now moving forward with HPC and physics-based models to comprehensively simulate the earthquake process, end to end. In this episode of the “Let’s Talk Exascale” podcast series from the U.S. Department of Energy’s Exascale Computing Project (ECP), David McCallen, leader of ECP’s Earthquake Sim (EQSIM) subproject, discusses his team’s work to help design more quake-resilient buildings and bridges.

What May Come from Exascale? Improved Medicines, Longer-range Batteries, Better Control of 3D Parts, for Starters

As Exascale Day (Oct. 18) approaches, we thought it appropriate to post a recent article from Scott Gibson of the Exascale Computing Project (ECP), an overview of the anticipated advances in scientific discovery enabled by exascale-class supercomputers. Much of this research will focus on atomic physics and its impact on such areas as catalysts used in industrial conversion, molecular dynamics simulations and quantum mechanics used to develop new materials for improved medicines, batteries, sensors and computing devices.