

Exascale Exasperation: Why DOE Gave Intel a 2nd Chance; Can Nvidia GPUs Ride to Aurora’s Rescue?

The most talked-about topic in HPC these days – another Intel chip delay and therefore delay of the U.S.’s flagship Aurora exascale system – is something no one directly involved wants to talk about. Not Argonne National Laboratory, where Intel was to install Aurora in 2021; not the Department of Energy’s Exascale Computing Project, guiding […]

ORNL Offers Virtual Tour of Summit’s Supercomputer Center

Oak Ridge National Lab has released a virtual tour of the facility that houses Summit, the world’s second most powerful supercomputer. The tour is offered by the National Center for Computational Sciences (NCCS) and the Oak Ridge Leadership Computing Facility (OLCF). They’re giving access to Building 5600 on the Oak Ridge campus, where Summit resides, […]

Inspur NF5488A5 Breaks AI Server Performance Record in Latest MLPerf Benchmarks

San Jose, Aug. 5 – In MLPerf AI benchmark results released last week, the Inspur NF5488A5 server set a new AI performance record in the ResNet-50 training task, topping the list for single-server performance. MLPerf (results here) is the most influential industry benchmarking organization in the field of AI worldwide. Established […]

Another Intel 7nm Chip Delay – What Does it Mean for Aurora Exascale?

The saga of Intel’s inability to deliver a 7nm process chip and a supercomputer called Aurora to Argonne National Laboratory opened a new chapter yesterday with Intel CEO Bob Swan’s statement that the company’s 7nm “Ponte Vecchio” GPU, integral to the Aurora exascale system scheduled for delivery next year, will be delayed at least six months. […]

University of Florida, Nvidia Plan Fastest AI Supercomputer in Academia

The University of Florida and Nvidia have unveiled a plan to build what they say will be the world’s fastest AI supercomputer in academia, delivering 700 petaflops of AI performance and infusing AI throughout UF’s curriculum. The $70 million project will fund construction of an AI-centric supercomputing and data center and is intended to make […]

Video: VMware Talks GPU Virtualization, Sharing for AI/ML

In this video, Mike Adams, VMware’s Senior Director CPBU, AI/ML Market Development, talks with us about a new, integrated VMware vSphere 7 feature enabling “elastic infrastructure” on-demand for AI and machine learning. VMware vSphere Bitfusion is a result of VMware’s 2019 acquisition of Bitfusion, developer of virtualized hardware accelerators, including GPUs, used to improve performance of AI/ML workloads.

Dell Technologies HPC Community Interview: Bob Wisniewski, Intel’s Chief HPC Architect, Talks Aurora and Getting to Exascale

We’re recognizing that HPC is expanding to include AI. But it’s not just AI, it is big data and edge, too. Many of the large scientific instruments are turning out huge amounts of data that need to be analyzed in real time. And big data is no longer limited to the scientific instruments – it’s all the weather stations and all the smart city sensors generating massive amounts of data. As a result, HPC is facing a broader challenge and Intel realizes that a single hardware solution is not going to be right for everybody.

Lenovo Standing Up Liquid-cooled Neptune System at Max Planck Society

Lenovo is installing a Neptune liquid-cooled supercomputer at the Max Planck Society, a delivery that began two months ago and is scheduled to be completed early next year. The €20 million project includes a 100,000-core Neptune comprised of Lenovo ThinkSystem servers with Intel CPUs (unspecified) and Nvidia Tesla A100 GPUs, software and operational support, […]

Google Unveils 1st Public Cloud VMs using Nvidia Ampere A100 Tensor GPUs

Google today introduced the Accelerator-Optimized VM (A2) instance family on Google Compute Engine based on the NVIDIA Ampere A100 Tensor Core GPU, launched in mid-May. Available in alpha and with up to 16 GPUs, A2 VMs are the first A100-based offering in a public cloud, according to Google. At its launch, Nvidia said the A100, built on the company’s new Ampere architecture, delivers “the greatest generational leap ever,” enhancing training and inference computing performance by 20x over its predecessors.

Reports: TSMC May Commercialize Production of Cerebras-style AI Supercomputing Chips

Published reports state that TSMC (Taiwan Semiconductor Manufacturing Company) may begin commercial production within two years of specialized supercomputer AI chips, an outgrowth of the company’s customized fabrication of the Wafer Scale Engine (WSE) developed by AI start-up Cerebras Systems. Last August, Cerebras unveiled the WSE (price: US$2 million), which it said is the largest […]