DOE’s Best Practices for HPC Software Developers Webinar Series — May 12

May 5, 2021 — The IDEAS Productivity project, in partnership with the DOE computing facilities ALCF, OLCF, and NERSC and the DOE Exascale Computing Project (ECP), will resume the webinar series on Best Practices for HPC Software Developers, begun in 2016. One-hour webinars on topics in scientific software development and high-performance computing, approximately once a […]

New Hypre Library Approach Brings GPU-Based Algebraic Multigrid to Exascale and HPC Community

First developed in 1998, hypre is a cross-platform, high-performance library of scalable solvers and preconditioners that can be applied to large sparse linear systems on parallel computers, and the hypre team has adapted it to a variety of machine architectures over the years. Their latest work now gives scientists the ability to efficiently use modern GPU-based extreme-scale parallel supercomputers to address many scientific problems.
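
For readers who have not used hypre, the minimal sketch below assembles a small 1D Laplacian through the IJ interface and solves it with the BoomerAMG algebraic multigrid solver. The call pattern follows hypre's bundled examples (such as ex5.c); the matrix size, tolerances, and single-rank setup are illustrative assumptions, and GPU execution is selected when hypre is built and configured for GPUs rather than in this calling code.

```c
/* Hedged sketch: solve a 1D Poisson system with hypre's BoomerAMG
 * (algebraic multigrid) via the IJ interface. Patterned on hypre's
 * bundled examples; sizes and tolerances are illustrative. */
#include <mpi.h>
#include "HYPRE.h"
#include "HYPRE_IJ_mv.h"
#include "HYPRE_parcsr_ls.h"

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);
    HYPRE_Init();                      /* initialize hypre (recent releases) */

    const HYPRE_Int n = 100;           /* global unknowns, one rank for brevity */
    HYPRE_Int ilower = 0, iupper = n - 1;

    /* Assemble a tridiagonal (1D Laplacian) matrix row by row. */
    HYPRE_IJMatrix A;
    HYPRE_ParCSRMatrix parcsr_A;
    HYPRE_IJMatrixCreate(MPI_COMM_WORLD, ilower, iupper, ilower, iupper, &A);
    HYPRE_IJMatrixSetObjectType(A, HYPRE_PARCSR);
    HYPRE_IJMatrixInitialize(A);
    for (HYPRE_Int i = ilower; i <= iupper; i++) {
        HYPRE_Int cols[3], nnz = 0;
        double vals[3];
        if (i > 0)     { cols[nnz] = i - 1; vals[nnz++] = -1.0; }
                         cols[nnz] = i;     vals[nnz++] =  2.0;
        if (i < n - 1) { cols[nnz] = i + 1; vals[nnz++] = -1.0; }
        HYPRE_IJMatrixSetValues(A, 1, &nnz, &i, cols, vals);
    }
    HYPRE_IJMatrixAssemble(A);
    HYPRE_IJMatrixGetObject(A, (void **) &parcsr_A);

    /* Right-hand side b = 1, initial guess x = 0. */
    HYPRE_IJVector b, x;
    HYPRE_ParVector par_b, par_x;
    HYPRE_IJVectorCreate(MPI_COMM_WORLD, ilower, iupper, &b);
    HYPRE_IJVectorSetObjectType(b, HYPRE_PARCSR);
    HYPRE_IJVectorInitialize(b);
    HYPRE_IJVectorCreate(MPI_COMM_WORLD, ilower, iupper, &x);
    HYPRE_IJVectorSetObjectType(x, HYPRE_PARCSR);
    HYPRE_IJVectorInitialize(x);
    for (HYPRE_Int i = ilower; i <= iupper; i++) {
        double one = 1.0, zero = 0.0;
        HYPRE_IJVectorSetValues(b, 1, &i, &one);
        HYPRE_IJVectorSetValues(x, 1, &i, &zero);
    }
    HYPRE_IJVectorAssemble(b);
    HYPRE_IJVectorAssemble(x);
    HYPRE_IJVectorGetObject(b, (void **) &par_b);
    HYPRE_IJVectorGetObject(x, (void **) &par_x);

    /* BoomerAMG used directly as the solver. */
    HYPRE_Solver solver;
    HYPRE_BoomerAMGCreate(&solver);
    HYPRE_BoomerAMGSetTol(solver, 1e-8);
    HYPRE_BoomerAMGSetMaxIter(solver, 100);
    HYPRE_BoomerAMGSetPrintLevel(solver, 2);
    HYPRE_BoomerAMGSetup(solver, parcsr_A, par_b, par_x);
    HYPRE_BoomerAMGSolve(solver, parcsr_A, par_b, par_x);

    HYPRE_BoomerAMGDestroy(solver);
    HYPRE_IJMatrixDestroy(A);
    HYPRE_IJVectorDestroy(b);
    HYPRE_IJVectorDestroy(x);
    HYPRE_Finalize();
    MPI_Finalize();
    return 0;
}
```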

Continuous Integration: The Path to the Future for HPC

By Rob Farber on behalf of the Exascale Computing Project. The Exascale Computing Project (ECP) is investing heavily in software for the forthcoming exascale systems, as can be seen in the many tools, libraries, and software components that are already freely available for download via the Extreme-Scale Scientific Software Stack (E4S). In order for these software […]

Exascale Computing Project’s EQSIM Team Helps Assess Infrastructure Earthquake Risk

As part of the US Department of Energy’s Exascale Computing Project (ECP), the Earthquake Simulation (EQSIM) application development team is creating a computational tool set and workflow for earthquake hazard and risk assessment that moves beyond traditional, empirically based techniques that depend on historical earthquake data. With software assistance from the ECP’s software technology group, the EQSIM team is working to give scientists and engineers the ability to simulate full end-to-end earthquake processes.

Clacc – Open Source OpenACC Compiler and Source Code Translation Project

By Rob Farber, contributing writer for the Exascale Computing Project. Clacc is a Software Technology development effort funded by the US Exascale Computing Project (ECP) PROTEAS-TUNE project to develop production OpenACC compiler support for Clang and the LLVM Compiler Infrastructure Project (LLVM). The Clacc project page notes, “OpenACC support in Clang and LLVM will facilitate the programming of GPUs and other accelerators in DOE applications, […]
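
For readers unfamiliar with OpenACC, the minimal C sketch below shows the kind of directive-annotated loop that Clacc’s OpenACC support in Clang/LLVM is intended to compile for GPUs and other accelerators. The SAXPY-style example and the clang -fopenacc compile line are illustrative assumptions; consult the Clacc documentation for current flags and usage.

```c
/* Hedged sketch: a simple OpenACC parallel loop of the kind Clacc is
 * meant to compile. The accelerator offload happens at the pragma. */
#include <stdio.h>

#define N 1000000

int main(void)
{
    static float x[N], y[N];
    const float a = 2.0f;

    for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    /* The compiler maps this SAXPY-style loop onto the accelerator. */
    #pragma acc parallel loop copyin(x[0:N]) copy(y[0:N])
    for (int i = 0; i < N; i++)
        y[i] = a * x[i] + y[i];

    printf("y[0] = %f\n", y[0]);
    return 0;
}

/* Possible compile line (assumption; see the Clacc README):
 *   clang -fopenacc -O2 saxpy.c -o saxpy
 */
```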

ECP Announces 2021 Community Birds-of-a-Feather Days

March 1, 2021 – The US Department of Energy’s Exascale Computing Project (ECP) has announced the ECP 2021 Community BOF Days, a series of Birds-of-a-Feather sessions for the high-performance computing (HPC) community. BOFs take a variety of formats (e.g., panel, presentation, roundtable), bringing together people with shared interests and encouraging discussion and idea exchange. ECP’s BOF Days will take […]

ECP: SuperLU Library Speeds Direct Solution of Large Sparse Linear Systems on HPC and Exascale Hardware

HPC and AI technology consultant and author Rob Farber wrote this article on behalf of the Exascale Computing Project. Lower-upper (LU) factorization is an important numerical algorithm used to solve systems of linear equations in science and engineering. These linear systems of equations can be expressed as a matrix, which is then passed to a […]
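
Because the full SuperLU API involves more setup than fits here, the toy C sketch below illustrates the underlying algorithm the article describes: LU factorization with partial pivoting followed by forward and back substitution to solve Ax = b. SuperLU itself applies sparse, supernodal, and parallel variants of this idea on HPC and exascale hardware; the code below is a dense teaching example, not SuperLU’s API.

```c
/* Hedged sketch: dense LU factorization with partial pivoting and the
 * triangular solves used to recover x from Ax = b. Illustrative only. */
#include <math.h>
#include <stdio.h>

#define N 3

/* Factor A in place into L (unit lower) and U; record row swaps in piv. */
static int lu_factor(double A[N][N], int piv[N])
{
    for (int k = 0; k < N; k++) {
        /* Partial pivoting: pick the largest remaining entry in column k. */
        int p = k;
        for (int i = k + 1; i < N; i++)
            if (fabs(A[i][k]) > fabs(A[p][k])) p = i;
        if (A[p][k] == 0.0) return -1;           /* singular matrix */
        piv[k] = p;
        if (p != k)
            for (int j = 0; j < N; j++) {
                double t = A[k][j]; A[k][j] = A[p][j]; A[p][j] = t;
            }
        /* Eliminate entries below the pivot, storing multipliers in L. */
        for (int i = k + 1; i < N; i++) {
            A[i][k] /= A[k][k];
            for (int j = k + 1; j < N; j++)
                A[i][j] -= A[i][k] * A[k][j];
        }
    }
    return 0;
}

/* Solve Ax = b using the stored factors: apply pivots, then Ly = b, Ux = y. */
static void lu_solve(double A[N][N], const int piv[N], double b[N])
{
    for (int k = 0; k < N; k++) {                /* apply row swaps to b */
        double t = b[k]; b[k] = b[piv[k]]; b[piv[k]] = t;
    }
    for (int i = 1; i < N; i++)                  /* forward substitution */
        for (int j = 0; j < i; j++) b[i] -= A[i][j] * b[j];
    for (int i = N - 1; i >= 0; i--) {           /* back substitution */
        for (int j = i + 1; j < N; j++) b[i] -= A[i][j] * b[j];
        b[i] /= A[i][i];
    }
}

int main(void)
{
    double A[N][N] = {{2, 1, 1}, {4, 3, 3}, {8, 7, 9}};
    double b[N] = {4, 10, 24};                   /* exact solution: x = (1,1,1) */
    int piv[N];

    if (lu_factor(A, piv) == 0) {
        lu_solve(A, piv, b);
        printf("x = [%g, %g, %g]\n", b[0], b[1], b[2]);
    }
    return 0;
}
```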

Spotting HPC and Exascale Bottlenecks with TAU CPU/GPU/MPI Profiler

Programmers cannot simply guess which sections of their code will become performance bottlenecks. The problem is compounded when codes run across the variety of hardware platforms supported by the Exascale Computing Project (ECP): a section of code that runs well on one system might be a bottleneck on another. Differing hardware execution models further compound the performance challenges that face application developers, ranging from the comparatively restricted SIMD (Single Instruction Multiple Data) and SIMT (Single Instruction Multiple Thread) computing models used by GPUs to the more complex and general MIMD (Multiple Instruction Multiple Data) model used by CPUs. New software programming models, such as Kokkos, also introduce multiple layers of abstraction and lambda functions that can hide or obscure low-level execution details because of their complexity and anonymous nature. Differing memory systems inside a node, and differences in the communications fabric that connects high-performance computing (HPC) nodes in a distributed supercomputer, add even greater challenges when identifying performance bottlenecks during application performance analysis.
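
As a hedged illustration of how a suspect region of code can be made visible to a profiler such as TAU, the sketch below uses TAU’s C source-instrumentation macros around a made-up heavy_kernel function (the macro names follow TAU’s documented instrumentation API; check the TAU manual for your installed version). In practice, many codes are profiled with no source changes at all by launching them under tau_exec and examining the results with pprof or paraprof.

```c
/* Hedged sketch: manual TAU source instrumentation so profile time is
 * attributed to a named region. Typically built with a TAU compiler
 * wrapper (e.g., tau_cc.sh); heavy_kernel is a hypothetical example. */
#include <TAU.h>
#include <stdio.h>

static double heavy_kernel(int n)
{
    /* Candidate bottleneck: an O(n^2) loop nest. */
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            sum += (double)i * j * 1e-9;
    return sum;
}

int main(int argc, char **argv)
{
    TAU_PROFILE_INIT(argc, argv);
    TAU_PROFILE_SET_NODE(0);              /* single-process example */

    TAU_PROFILE_TIMER(t, "heavy_kernel region", "", TAU_USER);
    TAU_PROFILE_START(t);
    double s = heavy_kernel(4000);
    TAU_PROFILE_STOP(t);

    printf("result = %f\n", s);
    return 0;
}
```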

ATPESC – Argonne Training Program on Extreme-Scale Computing – Sets March 1 Application Deadline

Argonne National Laboratory said today it has established a March 1 deadline to apply for an opportunity to learn the tools and techniques needed to carry out research on the world’s most powerful supercomputers. Applications are now being accepted for ATPESC 2021 — a two-week training program designed to teach the skills, approaches and tools to design, implement and execute […]

Let’s Talk Exascale: Getting Applications Aurora-Ready

This episode of Let’s Talk Exascale from DOE’s Exascale Computing Project is the first in a series on best practices in preparing applications for the upcoming Aurora exascale supercomputer at the US Department of Energy’s Argonne National Laboratory. In these discussions, the emphasis will be on optimizing code to run on GPUs and providing developers […]
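
As context for what optimizing code to run on GPUs can involve, here is a hedged, self-contained C sketch using OpenMP target offload, one of the portable programming models available for Aurora-class systems alongside SYCL/DPC++. It is an illustrative pattern, not code from the podcast episode.

```c
/* Hedged sketch: offload a vector-add loop to a GPU with OpenMP target
 * directives. The map clauses move data to and from the device. */
#include <stdio.h>

#define N 1000000

int main(void)
{
    static double a[N], b[N], c[N];
    for (int i = 0; i < N; i++) { a[i] = 1.0; b[i] = 2.0; }

    /* Run the loop across device teams and threads. */
    #pragma omp target teams distribute parallel for \
            map(to: a[0:N], b[0:N]) map(from: c[0:N])
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("c[0] = %f\n", c[0]);
    return 0;
}
```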