The Whole World Is Watching: ORNL’s Bernholdt & Programming Environment Team Prepare for Frontier and Exascale

The world’s fastest supercomputer comes with some assembly required. Frontier, the nation’s first exascale computing system, won’t come together as a whole until all pieces arrive at the US Department of Energy’s (DOE’s) Oak Ridge National Laboratory to be installed—with the eyes of the world watching—on the data center floor inside the Oak Ridge Leadership Computing Facility (OLCF). Once those components operate in harmony as advertised, David Bernholdt and his team can take time for a quick bow—and then get back to work.

Preparing for Exascale: Aurora Software Development – Packaging and Early Hardware

By coordinating efforts to improve early exascale hardware stability at the Argonne Leadership Computing Facility (ALCF), computer scientist Servesh Muralidharan is working to make it easier for application developers to use Aurora testbeds at Argonne’s Joint Laboratory for System Evaluation (JLSE). His work will facilitate a faster transition to the Aurora system upon its delivery and help […]

Los Alamos in R&D Pact with Quantum Computing Inc. for Exascale and Petascale Simulations

Quantum Computing Inc., a Leesburg, VA-based company focused on bridging classical and quantum computing, today announced a three-year cooperative research and development agreement with Los Alamos National Laboratory. QCI will collaborate with Los Alamos scientists through the laboratory’s administrator, Triad National Security, LLC, on a key component of large-scale simulations that are critical for a range […]

ECP Pushes Cross-Platform Tested Compilers for HPC and Exascale Architectures

The Exascale Computing Project (ECP) is working to combine two key technologies, LLVM and continuous integration (CI), to ensure that current and future compilers are stable and performant on high-performance computing (HPC) and exascale systems. The proliferation of new machine architectures has made the continuous testing and verification of software (hence the “continuous” in CI) an essential part of US Department of Energy (DOE) supercomputing. Valentin Clement, a software engineer at Oak Ridge National Laboratory who is part of the team working to include LLVM in the ECP CI testing and verification framework, notes, “We are working to add CI for ECP-relevant architectures. This facilitates…

OLCF Releases Storage Specs for Frontier Exascale

A newly enhanced I/O subsystem will support the HPE Cray Frontier system, the nation’s first exascale supercomputer, at the Oak Ridge Leadership Computing Facility (OLCF). The computational might of exascale computing, expected to reach top speeds of 1 quintillion (that’s 10¹⁸, or a billion billion) calculations per second, promises to enable breakthrough discoveries across the scientific spectrum, from the basics of building better nuclear reactors to insights into the origins of the universe, when Frontier, set to power up by year’s end, opens to full user operations in 2022. The I/O subsystem will consist of two major components: an in-system storage layer and a center-wide file system. The center-wide file system, called Orion, will use open-source Lustre and ZFS technologies.

Let’s Talk Exascale – ECP Director Kothe Provides Project Update

In this May 2021 interview, ECP Director Doug Kothe provides an update on the effort to deliver a capable and sustainable exascale computing ecosystem for the nation. Audio podcast: https://soundcloud.com/exascale-computing-project/episode-80-update-with-doug-kothe-progress-annual-meeting-highlights-and-more Video interview: https://www.exascaleproject.org/update-with-doug-kothe-progress-annual-meeting-highlights-and-more/ In the interview, reference is made to this additional video of the portfolio managers in which […]

Porting a Particle-in-Cell Code to Exascale Architectures

By Nils Heinonen on behalf of the Argonne Leadership Computing Facility. As part of a series aimed at sharing best practices in preparing applications for Aurora, we highlight researchers’ efforts to optimize codes to run efficiently on graphics processing units. One such practice: take advantage of upgrades being made to high-level, non-machine-specific libraries and programming models. Developed in […]

Meet the Frontier Exascale Supercomputer: How Big Is a Quintillion?

Are all comparisons so odious, really? Some can illuminate; some can awe. HPE Cray has put out an infographic about its Frontier exascale supercomputer, the first in the U.S., scheduled to be shipped to Oak Ridge National Laboratory later this year. It offers interesting comparisons that shed light on how big a quintillion is. Make that 1.5 quintillion, […]

Clacc – Open Source OpenACC Compiler and Source Code Translation Project

By Rob Farber, contributing writer for the Exascale Computing Project Clacc is a Software Technology development effort funded by the US Exascale Computing Project (ECP) PROTEAS-TUNE project to develop production OpenACC compiler support for Clang and the LLVM Compiler Infrastructure Project (LLVM). The Clacc project page notes, “OpenACC support in Clang and LLVM will facilitate the programming of GPUs and other accelerators in DOE applications, […]

Ungaro Departs: New HPC Leadership at HPE and Dell as Companies Vie for Server Top Spot

HPE and Dell, engaged in a neck-and-neck struggle for superiority in HPC servers, also are engaged in a reshuffling of their HPC leadership teams that may reflect similar visions of HPC’s evolving position across IT. Last week, we reported on Dell’s newly installed HPC management group following the departure in early February of Thierry Pellegrino, […]