UKRI Awards ARCHER2 Supercomputer Services Contract

UKRI has awarded contracts to run elements of the next national supercomputer, ARCHER2, which will represent a significant step forward in capability for the UK’s science community. ARCHER2 is provided by UKRI, EPCC, Cray (an HPE company) and the University of Edinburgh. “ARCHER2 will be a Cray Shasta system with an estimated peak performance of 28 PFLOP/s. The machine will have 5,848 compute nodes, each with dual AMD EPYC Zen2 (Rome) 64-core CPUs at 2.2 GHz, giving 748,544 cores in total and 1.57 PBytes of total system memory.”
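
The headline figures in that quote hang together on simple arithmetic. Here is a minimal back-of-the-envelope check in Python; the node count, cores per CPU and total memory are taken from the quote above, and nothing here is an official ARCHER2 specification:

    # Quick sanity check of the quoted ARCHER2 figures (values from the quote above)
    nodes = 5848
    cores_per_node = 2 * 64                  # dual 64-core AMD EPYC Rome CPUs per node
    print(nodes * cores_per_node)            # 748544 -- matches the quoted 748,544 cores
    total_memory_gb = 1.57e6                 # 1.57 PBytes expressed in GB
    print(round(total_memory_gb / nodes))    # ~268 GB of memory per node on average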

Video: The Cray Shasta Architecture for the Exascale Era

Steve Scott from HPE gave this talk at the Rice Oil & Gas Conference. “With the announcement of multiple exascale systems, we’re now entering the Exascale Era, marked by several important trends. This talk provides an overview of the Cray Shasta system architecture, which was motivated by these trends, and designed for this new heterogeneous, data-driven world.”

Podcast: Slingshotting to Exascale

In this podcast, the Radio Free HPC team looks at the Cray Slingshot interconnect that will power all three of the first Exascale supercomputers in the US. “At the heart of this new interconnect is their innovative 64 port switch that provides a maximum of 200 Gb/s per port and can support Cray’s enhanced Ethernet along with standard Ethernet message passing. It also has advanced congestion control and quality of service modes that ensure that each job gets the right amount of bandwidth.”
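
Those per-port numbers imply a straightforward aggregate figure for the switch. A rough calculation using only the port count and per-port rate quoted above; this is simple multiplication, not an HPE specification:

    # Aggregate bandwidth implied by the quoted Slingshot switch figures
    ports = 64
    gbps_per_port = 200
    print(ports * gbps_per_port)   # 12800 Gb/s, i.e. 12.8 Tb/s of total port bandwidth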

AMD to Power 2 Exaflop El Capitan Supercomputer from HPE

Today HPE announced that it will deliver the world’s fastest exascale-class supercomputer for NNSA at a record-breaking speed of 2 exaflops – 10X faster than today’s most powerful supercomputer. “El Capitan is expected to be delivered in early 2023 and will be managed and hosted by LLNL for use by the three NNSA national laboratories: LLNL, Sandia, and Los Alamos. The system will enable advanced simulation and modeling to support the U.S. nuclear stockpile and ensure its reliability and security.”
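
A quick consistency check on the “10X” claim, assuming the comparison point is Summit’s roughly 200 petaflops of peak performance (that assumption is mine, not stated in the announcement):

    # 2 exaflops vs. an assumed ~200 PF peak for today's fastest system (Summit, assumed)
    el_capitan_pf = 2000
    assumed_current_top_pf = 200
    print(el_capitan_pf / assumed_current_top_pf)   # 10.0 -- consistent with the quoted 10X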

How the Titan Supercomputer was Recycled

In this special guest feature, Coury Turczyn from ORNL tells the untold story of what happens to high-end supercomputers like Titan after they have been decommissioned. “Thankfully, it did not include a trip to the landfill. Instead, Titan was carefully removed, trucked across the country to one of the largest IT asset conversion companies in the world, and disassembled for recycling in compliance with the international Responsible Recycling (R2) Standard. This huge undertaking required diligent planning and execution by ORNL, Cray (a Hewlett Packard Enterprise company), and Regency Technologies.”

MLPerf-HPC Working Group seeks participation

In this special guest feature, Murali Emani from Argonne writes that a team of scientists from DoE labs has formed a working group called MLPerf-HPC to focus on benchmarking machine learning workloads for high performance computing. “As machine learning (ML) is becoming a critical component to help run applications faster, improve throughput and understand the insights from the data generated from simulations, benchmarking ML methods with scientific workloads at scale will be important as we progress towards next generation supercomputers.”

AMD to Power Cray Shasta Supercomputer at Navy DSRC

The Department of Defense High Performance Computing Modernization Program (HPCMP) is upgrading its supercomputing capabilities with a new Cray Shasta system powered by AMD EPYC processors. The system, the HPCMP’s first with more than 10 PetaFLOPS of peak computational performance, will be installed at the Navy DSRC facility at Stennis Space Center, Mississippi, and will serve users from all of the services and agencies of the Department of Defense.

Bright Computing adds more than 100 new customers in 2019

Commercial enterprises, research universities and government agencies are turning to Bright Cluster Manager to reduce the complexity and increase the flexibility of their high-performance clusters. Along these lines, the company just announced the addition of more than 100 organizations to its client list in 2019, including AMD, Caterpillar, GlaxoSmithKline, Saab, Northrop Grumman, Trek Bicycles, Samsung, General Dynamics, Lockheed Martin and BAE, as well as 19 government agencies and 28 leading universities.

NOAA to triple weather and climate supercomputing capacity

The United States is reclaiming a global top spot in high performance computing to support weather and climate forecasts. NOAA, part of the Department of Commerce, today announced a significant upgrade to compute capacity, storage space, and interconnect speed of its Weather and Climate Operational Supercomputing System. This upgrade keeps the agency’s supercomputing capacity on par with other leading weather forecast centers around the world.

Isambard 2 at UK Met Office to be largest Arm supercomputer in Europe

The UK Met Office has been awarded £4.1m by EPSRC to create Isambard 2, the largest Arm-based supercomputer in Europe. The powerful new £6.5m facility, to be hosted by the Met Office in Exeter and utilized by the universities of Bath, Bristol, Cardiff and Exeter, will double the size of GW4 Isambard, to 21,504 high-performance cores and 336 nodes. “Isambard 2 will incorporate the latest novel technologies from HPE and new partner Fujitsu, including next-generation Arm CPUs in one of the world’s first A64FX machines from Cray.”
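
Those figures imply 64 cores per node. A quick check in Python; the per-node CPU configuration in the comment is my assumption, based on the original GW4 Isambard’s dual 32-core Marvell ThunderX2 nodes, and is not stated above:

    # Cores per node implied by the quoted Isambard 2 figures
    total_cores = 21504
    nodes = 336
    print(total_cores // nodes)   # 64 -- consistent with dual-socket 32-core Arm CPUs per node (assumed)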