

Video: What Does It Take to Reach 2 Exaflops?

In this video, Addison Snell from Intersect360 Research moderates a panel discussion on the El Capitan supercomputer. With a peak performance of over 2 Exaflops, El Capitan will be roughly 10x faster than today’s fastest supercomputer and more powerful than the current Top 200 systems — combined! “Watch this webcast to learn from our panel of experts about the National Nuclear Security Administration’s requirements and how the Exascale Computing Project helped drive the hardware, software, and collaboration needed to achieve this milestone.”

Podcast: A Look inside the El Capitan Supercomputer coming to LLNL

In this podcast, the Radio Free HPC team looks at some of the more interesting configuration aspects of the pending El Capitan exascale supercomputer coming to LLNL in 2023. “Dan talks about the briefing he received on the new Lawrence Livermore El Capitan system to be built by HPE/Cray. This new $600 million system will be fueled by the AMD Genoa processor coupled with AMD’s Instinct GPUs. Performance should come in at TWO 64-bit exaflops peak, which is very, very sporty.”

UKRI Awards ARCHER2 Supercomputer Services Contract

UKRI has awarded contracts to run elements of the next national supercomputer, ARCHER2, which will represent a significant step forward in capability for the UK’s science community. ARCHER2 is provided by UKRI, EPCC, Cray (an HPE company) and the University of Edinburgh. “ARCHER2 will be a Cray Shasta system with an estimated peak performance of 28 PFLOP/s. The machine will have 5,848 compute nodes, each with dual AMD EPYC Zen2 (Rome) 64 core CPUs at 2.2GHz, giving 748,544 cores in total and 1.57 PBytes of total system memory.”
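The node and core counts quoted above are internally consistent, and the peak-performance estimate can be roughly reproduced with back-of-the-envelope arithmetic. The sketch below assumes 16 double-precision FLOPs per cycle per Zen2 core (two 256-bit FMA units), a figure not stated in the article:

```python
# Sanity check of the ARCHER2 figures quoted above.
nodes = 5848
cpus_per_node = 2
cores_per_cpu = 64
clock_ghz = 2.2

cores = nodes * cpus_per_node * cores_per_cpu
print(cores)  # 748544, matching the quoted total

# Assumption: Zen2 cores retire up to 16 double-precision FLOPs
# per cycle (2 x 256-bit FMA). Not from the article.
flops_per_cycle = 16
peak_pflops = cores * clock_ghz * 1e9 * flops_per_cycle / 1e15
print(round(peak_pflops, 1))  # ~26.3, in line with the ~28 PFLOP/s estimate
```

The small gap between ~26.3 and the quoted 28 PFLOP/s likely reflects rounding in the published estimate or a different per-core throughput assumption.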

Video: The Cray Shasta Architecture for the Exascale Era

Steve Scott from HPE gave this talk at the Rice Oil & Gas Conference. “With the announcement of multiple exascale systems, we’re now entering the Exascale Era, marked by several important trends. This talk provides an overview of the Cray Shasta system architecture, which was motivated by these trends, and designed for this new heterogeneous, data-driven world.”

Podcast: Slingshotting to Exascale

In this podcast, the Radio Free HPC team looks at the Cray Slingshot interconnect that will power all three of the first Exascale supercomputers in the US. “At the heart of this new interconnect is their innovative 64-port switch that provides a maximum of 200 Gb/s per port and can support Cray’s enhanced Ethernet alongside standard Ethernet message passing. It also has advanced congestion control and quality-of-service modes that ensure each job gets the right amount of bandwidth.”
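The per-switch numbers in that quote imply a straightforward aggregate figure; a minimal sketch (the result is per-direction, which is my assumption about how the per-port rate is quoted):

```python
# Aggregate bandwidth of one 64-port Slingshot switch
# at 200 Gb/s per port, as quoted above.
ports = 64
gbps_per_port = 200

aggregate_tbps = ports * gbps_per_port / 1000
print(aggregate_tbps)  # 12.8 Tb/s across all ports, one direction
```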

AMD to Power 2 Exaflop El Capitan Supercomputer from HPE

Today HPE announced that it will deliver the world’s fastest exascale-class supercomputer for NNSA at a record-breaking speed of 2 exaflops – 10X faster than today’s most powerful supercomputer. “El Capitan is expected to be delivered in early 2023 and will be managed and hosted by LLNL for use by the three NNSA national laboratories: LLNL, Sandia, and Los Alamos. The system will enable advanced simulation and modeling to support the U.S. nuclear stockpile and ensure its reliability and security.”

How the Titan Supercomputer was Recycled

In this special guest feature, Coury Turczyn from ORNL tells the untold story of what happens to high end supercomputers like Titan after they have been decommissioned. “Thankfully, it did not include a trip to the landfill. Instead, Titan was carefully removed, trucked across the country to one of the largest IT asset conversion companies in the world, and disassembled for recycling in compliance with the international Responsible Recycling (R2) Standard. This huge undertaking required diligent planning and execution by ORNL, Cray (a Hewlett Packard Enterprise company), and Regency Technologies.”

MLPerf-HPC Working Group seeks participation

In this special guest feature, Murali Emani from Argonne writes that a team of scientists from DoE labs have formed a working group called MLPerf-HPC to focus on benchmarking machine learning workloads for high performance computing. “As machine learning (ML) is becoming a critical component to help run applications faster, improve throughput and understand the insights from the data generated from simulations, benchmarking ML methods with scientific workloads at scale will be important as we progress towards next generation supercomputers.”

AMD to Power Cray Shasta Supercomputer at Navy DSRC

The Department of Defense High Performance Computing Modernization Program (HPCMP) is upgrading its supercomputing capabilities with a new Cray Shasta system powered by AMD EPYC processors. The system, the HPCMP’s first with more than 10 PetaFLOPS of peak computational performance, will be installed at the Navy DSRC facility at Stennis Space Center, Mississippi, and will serve users from all of the services and agencies of the Department of Defense.

Bright Computing adds more than 100 new customers in 2019

Commercial enterprises, research universities and government agencies are turning to Bright Cluster Manager to reduce complexity and increase flexibility of their high-performance clusters. Along these lines, the company just announced the addition of more than 100 organizations to its client list in 2019, including AMD, Caterpillar, GlaxoSmithKline, Saab, Northrop Grumman, Trek Bicycles, Samsung, General Dynamics, Lockheed Martin and BAE, as well as 19 government agencies and 28 leading universities.