Building a Capable Computing Ecosystem for Exascale

With ECP, working together was a prerequisite for participation. “From the beginning, the teams had this so-called ‘shared fate,’” says Siegel. When incorporating new capabilities, applications teams had to consider relevant software tools developed by others that could help meet their performance targets, and if they didn’t choose to use them….

Sept. 8 Registration Deadline for October ALCF Hands-on HPC Workshop

Aug. 31, 2023 — The Argonne Leadership Computing Facility will hold a Hands-on HPC Workshop on October 10-12, 2023, at the TCS Conference Center at Argonne National Laboratory. The deadline for registration is Friday, Sept. 8. Registration information can be found here. The workshop will provide hands-on time on Polaris and the AI Testbeds, focusing on porting applications […]

LLNL: 9,000 Exascale Nodes for Power Grid Optimization

Ensuring that the nation’s electrical power grid can function with limited disruptions in the event of a natural disaster, catastrophic weather or a man-made attack is a key national security challenge. Compounding the challenge of grid management is the growing number of renewable energy sources, such as solar and wind, that are continually added to the […]

Exascale: Pursuing Clean Energy Catalysts with Aurora

Argonne National Laboratory has announced that researchers are developing exascale software tools to enable the design of new chemicals and chemical processes for clean energy production. Argonne is building one of the nation’s first exascale systems, Aurora. To prepare codes for the architecture and scale of the new supercomputer, 15 research teams are taking part […]

El Capitan Supercomputer Installation at Livermore Has Begun

With pictures to prove it, Lawrence Livermore National Laboratory announced today that it has begun receiving and installing components for El Capitan, which is expected to be the third exascale-class supercomputer in the U.S. The laboratory shared photos on social media today. Projected to exceed 2 exaflops, El Capitan will likely be the most powerful supercomputer […]

Intel Announces Installation of Aurora Blades Is Complete, Expects System to be First to Achieve 2 ExaFLOPS

Intel today announced that the Aurora exascale-class supercomputer at Argonne National Laboratory is now fully equipped with 10,624 compute blades. Putting a stake in the ground, Intel said in its announcement that “later this year, Aurora is expected to be the world’s first supercomputer to achieve a theoretical peak performance of more than 2 exaflops” […]

Jules Verne Consortium in France Will Host 2nd EuroHPC Exascale Supercomputer

June 20, 2023 — The European High Performance Computing Joint Undertaking (EuroHPC JU) has selected the Jules Verne Consortium to host and operate, in France, the second EuroHPC exascale supercomputer, a system exceeding the threshold of 1 billion billion calculations per second. This new exascale supercomputer will be managed by GENCI (as hosting entity), the French […]

ALCF Developer Session May 24: Preparing XGC and HACC to Run on the Aurora Exascale Supercomputer

May 1, 2023 — An Argonne Leadership Computing Facility (ALCF) Developer Session will be held from 11 a.m. to noon CT on Wednesday, May 24, 2023, on porting strategies for two applications targeting ALCF’s upcoming Aurora exascale-class supercomputer: the XGC gyrokinetic plasma physics code and the HACC cosmology code. Registration is here. Speakers will be Esteban Rangel, assistant […]

HPE DoD Webinar April 27: Bringing Exascale HPC to the Masses

There will be an HPE DoD webinar on Thursday, April 27, from 2-3 pm ET entitled “Bringing Exascale to the Masses.” The speaker will be Steve Heibein, public sector AI chief technologist at HPE. For registration information, go here. The webinar will present the latest hardware platforms bringing exascale features to rack-scale, server-scale and the tactical […]

insideHPC-Hyperion Research Interview: Argonne’s Rick Stevens on the Future of Everything – U.S. Post-Exascale Strategy, AI for Science, HPC in 2040 and an Aurora Install Update

In this interview, conducted on behalf of HPC analyst firm Hyperion Research, we spoke with Argonne National Laboratory’s Rick Stevens about the present and future of HPC. The starting point for the conversation is a presentation Stevens gave at a Hyperion event in Washington related to implementation of the CHIPS and Science Act, including his insights on the post-exascale build-out of an integrated network of U.S. supercomputing capacity (the Integrated Research Infrastructure, or IRI). We then look at AI for science and the use of data-driven modeling and simulation, which show the potential to deliver major performance gains for researchers….