IBM Announces New AI Hardware Research, Red Hat Collaborations

At the IEEE CAS/EDS AI Compute Symposium, IBM Research introduced new technology and partnerships designed to dynamically run massive AI workloads in hybrid clouds. The company said it is developing analog AI, which combines compute and memory in a single device to alleviate “the von Neumann bottleneck,” a limitation resulting from traditional hardware architectures in […]

Transform Your Business with the Next Generation of Accelerated Computing

In this white paper, you’ll find a compelling discussion of how Supermicro servers optimized for NVIDIA A100 GPUs are solving the world’s greatest HPC and AI challenges. As the expansion of HPC and AI poses mounting challenges for IT environments, Supermicro and NVIDIA are equipping organizations for success with world-class solutions to empower business transformation. The Supermicro team continually tests and validates advanced hardware and optimized software components to support a rising number of use cases.

HPE Cray EX with AMD CPUs and GPUs to Deliver 552 PFLOPS for Finland’s CSC

The HPE-AMD supercomputing tandem has had a bang-up week for systems wins and installations – and it’s only Wednesday. On Monday, Australia’s Pawsey Supercomputing Centre announced HPE has been awarded an AUD $48 million systems contract. Yesterday, Los Alamos National Lab said it has stood up “Chicoma,” based on AMD processors and the HPE Cray […]

Practical Hardware Design Strategies for Modern HPC Workloads – Part 2

This special research report sponsored by Tyan discusses practical hardware design strategies for modern HPC workloads. As hardware has continued to develop, technologies like multi-core CPUs, GPUs, and NVMe storage have made new application areas possible, including accelerator-assisted HPC, GPU-based deep learning, and big data analytics systems. Unfortunately, a single general-purpose, balanced system cannot serve all of these applications well. To achieve the best price-to-performance in each of these application verticals, careful attention to hardware features and design is essential.

Los Alamos Stands up HPE Cray EX for COVID-19 Fight

Los Alamos National Laboratory reported it has completed the installation of “Chicoma,” based on AMD EPYC CPUs and the HPE Cray EX supercomputer architecture. The HPC platform is aimed at enhancing the lab’s R&D efforts in support of COVID-19 research. Chicoma is an early deployment of HPE Cray EX, which offers a large-scale system architecture […]

HPE to Build Australia’s No. 1 Supercomputer at Pawsey Supercomputing Centre

Hewlett Packard Enterprise (HPE) today announced it was awarded an AUD $48 million contract to build a new supercomputer for Pawsey Supercomputing Centre, one of Australia’s leading national supercomputing centers, located in Western Australia. The new supercomputer is part of the Pawsey Capital Refresh Program, an AUD $70 million program funded by the Australian government to […]

Getting to Exascale: Nothing Is Easy

In the weeks leading up to today’s Exascale Day observance, we set ourselves the task of asking supercomputing experts about the unique challenges, the particularly vexing problems, of building a computer capable of 10,000,000,000,000,000,000 calculations per second. Readers of this publication might guess, given Intel’s trouble producing the 7nm “Ponte Vecchio” GPU for its delayed Aurora system at Argonne National Laboratory, that compute is the toughest exascale nut to crack. But according to the people we interviewed, the difficulties of engineering exascale-class supercomputing run the gamut of the entire system. As we listened to exascale’s daunting litany of technology difficulties….
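For scale, that figure is 10^18 floating-point operations per second, i.e., one exaFLOPS. A minimal back-of-the-envelope sketch in Python, assuming a round world population of 8 billion (an illustrative assumption, not a figure from the article):

# Illustrative arithmetic: how long would every person on Earth,
# each doing one calculation per second, need to match just one
# second of work by an exascale machine?
EXAFLOPS = 1e18          # calculations per second at exascale
POPULATION = 8e9         # assumed round world population
RATE_PER_PERSON = 1.0    # assumed: one calculation per person per second

seconds = EXAFLOPS / (POPULATION * RATE_PER_PERSON)
years = seconds / (365.25 * 24 * 3600)
print(f"{seconds:.2e} seconds, roughly {years:.1f} years")
# ~1.25e8 seconds, on the order of 4 years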

Practical Hardware Design Strategies for Modern HPC Workloads

This special research report sponsored by Tyan discusses practical hardware design strategies for modern HPC workloads. As hardware has continued to develop, technologies like multi-core CPUs, GPUs, and NVMe storage have made new application areas possible, including accelerator-assisted HPC, GPU-based deep learning, and big data analytics systems. Unfortunately, a single general-purpose, balanced system cannot serve all of these applications well. To achieve the best price-to-performance in each of these application verticals, careful attention to hardware features and design is essential.

EuroHPC: 4 Nvidia-based AI Supercomputers Coming from Atos, HPE; 4 More on the Way

Four new supercomputers backed by a pan-European initiative will use Nvidia data center accelerators, networks and software for AI and high-performance computing. They include a system dubbed Leonardo, unveiled today at Italy’s CINECA research center, using Nvidia technologies to deliver what the company said is the world’s most powerful AI system. The four systems are […]

Where Have You Gone, IBM?

The company that built the world’s No. 2 and No. 3 most powerful supercomputers is to all appearances backing away from the supercomputer systems business. IBM, whose Summit and Sierra CORAL-1 systems set the global standard for pre-exascale supercomputing, failed to win any of the three exascale contracts, and it has since seemingly withdrawn from the HPC systems field. This has been widely discussed within the HPC community for at least the last 18 months. In fact, an industry analyst told us that at the annual ISC Conference in Frankfurt four years ago, he was shocked when IBM told him the company was no longer interested in the HPC business per se….