Practical Hardware Design Strategies for Modern HPC Workloads

This special research report sponsored by Tyan discusses practical hardware design strategies for modern HPC workloads. As hardware has continued to develop, technologies such as multi-core processors, GPUs, and NVMe storage have made new application areas possible, including accelerator-assisted HPC, GPU-based deep learning, and big data analytics systems. Unfortunately, a single general-purpose, balanced system design cannot serve all of these applications equally well. To achieve the best price-to-performance in each of these application verticals, careful attention to hardware features and design is essential.

Where Have You Gone, IBM?

The company that built the world’s No. 2 and No. 3 most powerful supercomputers is, to all appearances, backing away from the supercomputer systems business. IBM, whose Summit and Sierra CORAL-1 systems set the global standard for pre-exascale supercomputing, failed to win any of the three U.S. exascale contracts, and since then it has seemingly withdrawn from the HPC systems field. This has been widely discussed within the HPC community for at least the last 18 months. In fact, an industry analyst told us that at the annual ISC Conference in Frankfurt four years ago, he was shocked when IBM told him the company was no longer interested in the HPC business per se….

Lenovo Offers Optimal Storage Platform for Intel DAOS

In this sponsored post, our friends over at Lenovo and Intel highlight how Lenovo is doing some exciting stuff with Intel’s DAOS software. DAOS, or Distributed Asynchronous Object Storage, is a scale-out HPC storage stack that uses the object storage paradigm to bypass some of the limitations of traditional parallel file system architectures.

Why HPC and AI Workloads are Moving to the Cloud

This sponsored post from our friends over at Dell Technologies discusses a study by Hyperion Research which finds that approximately 20 percent of HPC workloads are now running in the public cloud. There are many good reasons for this trend.

DOE Under Secretary for Science Dabbar’s Exascale Update: Frontier to Be First, Aurora to Be Monitored

As Exascale Day (October 18) approaches, U.S. Department of Energy Under Secretary for Science Paul Dabbar has commented on the hottest exascale question of the day: which of the country’s first three systems will be stood up first? In a recent, far-reaching interview with us, Dabbar confirmed what has been expected for more than two months: the first U.S. exascale system will not, as planned, be the Intel-powered Aurora system at Argonne National Laboratory. It will instead be HPE-Cray’s Frontier, powered by AMD CPUs and GPUs and designated for Oak Ridge National Laboratory.

Taking Virtualization to a Higher Level at the University of Pisa

In this sponsored post, our friends over at Dell Technologies highlight a compelling case study: the University of Pisa gains greater flexibility and value from its IT infrastructure with widespread virtualization of resources, including high performance computing systems.

Composable Supercomputing Optimizes Hardware for AI-driven Data Calculation

In this sponsored post, our friend John Spiers, Chief Strategy Officer at Liqid, discusses how composable disaggregated infrastructure (CDI) is emerging as an answer to the roadblocks that stand in the way of advancing the mission of high-performance computing. CDI orchestration software dynamically composes GPUs, NVMe SSDs, FPGAs, networking, and storage-class memory into software-defined bare metal servers on demand. This enables unparalleled resource utilization and delivers previously impossible performance for AI-driven data analytics.

Accelerate Your Applications with ROCm

In this sponsored post, our friends over at AMD discuss how the ROCm platform is designed so that a wide range of developers can build accelerated applications. An entire ecosystem has been created, allowing developers to focus on developing their leading-edge applications.
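For readers who want a concrete sense of what an accelerated application on ROCm can look like, here is a minimal HIP vector-add sketch. It is not taken from the AMD post itself; it assumes a machine with ROCm and the hipcc compiler installed, and the file name, kernel name, and build line are illustrative.

```cpp
// Minimal HIP vector-add sketch (assumes ROCm and hipcc are installed).
// Illustrative build line: hipcc vector_add.cpp -o vector_add
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

// Each GPU thread adds one pair of elements.
__global__ void vector_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n, 0.0f);

    // Allocate device buffers and copy the inputs over.
    float *da, *db, *dc;
    hipMalloc((void**)&da, n * sizeof(float));
    hipMalloc((void**)&db, n * sizeof(float));
    hipMalloc((void**)&dc, n * sizeof(float));
    hipMemcpy(da, ha.data(), n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(db, hb.data(), n * sizeof(float), hipMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    hipLaunchKernelGGL(vector_add, dim3(blocks), dim3(threads), 0, 0, da, db, dc, n);

    // Copy the result back and spot-check it.
    hipMemcpy(hc.data(), dc, n * sizeof(float), hipMemcpyDeviceToHost);
    printf("c[0] = %f (expected 3.0)\n", hc[0]);

    hipFree(da); hipFree(db); hipFree(dc);
    return 0;
}
```

Because HIP source is portable, hipcc can also compile the same code for NVIDIA GPUs, which is part of what makes the ecosystem attractive to a wide range of developers.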

Get Your HPC Cluster Productive Faster

In this sponsored post from our friends over at Quanta Cloud Technology (QCT), we see that by shortening the deployment process from weeks or longer to just days and by providing pre-built software packages, organizations can become productive much sooner. Resources can then go toward more valuable services that enable more research, rather than toward bringing up an HPC cluster. By using the services that QCT offers, HPC systems can achieve a better return on investment (ROI).

Fujitsu to Ship 649 TFLOPS System with Fugaku HPC Technology to Canon for ‘No-prototype’ Product Development

Technology from the world’s No. 1 supercomputer, Fugaku, located at the RIKEN Center for Computational Science in Japan, is making its way into the commercial sphere. Fujitsu Ltd. today announced that Canon, Inc. has ordered a Fujitsu PRIMEHPC FX1000 unit expected to achieve a theoretical computational performance of 648.8 teraflops (TFLOPS). Intended to support Canon’s “no-prototype” […]