2020 OpenFabrics Alliance Workshop – Video Gallery

Welcome to the 2020 OpenFabrics Workshop video gallery. The OpenFabrics Alliance (OFA) is focused on accelerating development of high performance fabrics. The annual OFA Workshop, held in virtual format this year, is a premier means of fostering collaboration among those who develop fabrics, deploy fabrics, and create applications that rely on fabrics. It is the […]

Flatiron Institute Expands Beyond 1,000 Nodes with Bright Computing Cluster Management Software

The Flatiron Institute, New York, a community of scientists using modern computational tools to advance the basic sciences, is deploying a 320-node addition to its research cluster that will be managed by cluster management software from Bright Computing, maker of a Linux cluster automation and management platform for HPC and machine learning. The implementation at Flatiron, […]

Dell Technologies HPC Community Interview: Bob Wisniewski, Intel’s Chief HPC Architect, Talks Aurora and Getting to Exascale

We’re recognizing that HPC is expanding to include AI. But it’s not just AI; it’s big data and edge, too. Many of the large scientific instruments are turning out huge amounts of data that need to be analyzed in real time. And big data is no longer limited to the scientific instruments – it’s all the weather stations and all the smart city sensors generating massive amounts of data. As a result, HPC is facing a broader challenge, and Intel realizes that a single hardware solution is not going to be right for everybody.

Google Unveils First Public Cloud VMs Using Nvidia Ampere A100 Tensor Core GPUs

Google today introduced the Accelerator-Optimized VM (A2) instance family on Google Compute Engine, based on the Nvidia Ampere A100 Tensor Core GPU launched in mid-May. Available in alpha with up to 16 GPUs, A2 VMs are the first A100-based offering in a public cloud, according to Google. At its launch, Nvidia said the A100, built on the company’s new Ampere architecture, delivers “the greatest generational leap ever,” boosting training and inference computing performance by 20x over its predecessors.

The Hyperion-insideHPC Interviews: Rich Brueckner and Doug Ball Talk CFD, Autonomous Mobility and Driving Down HPC Package Sizing

Doug Ball is a leading expert in computational fluid dynamics and aerodynamic engineering, disciplines he became involved with more than 40 years ago. In this interview with the late Rich Brueckner of insideHPC, Ball discusses the increased scale and model complexity that HPC technology has come to handle and, looking to the future, his anticipation […]

SeRC Turns to oneAPI Multi-Chip Programming Model for Accelerated Research

At ISC 2020 Digital, the Swedish e-Science Research Center (SeRC) in Stockholm announced plans for researchers running massive CPU- and GPU-powered simulations to use Intel’s oneAPI unified programming model. The center said it chose oneAPI, designed to span CPUs, GPUs, FPGAs and other architectures and silicon, to accelerate compute for research using GROMACS (GROningen MAchine for Chemical Simulations), the molecular dynamics software developed by SeRC and first released in 1991.
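For context, a oneAPI/DPC++ program expresses a kernel once in standard C++ and lets the runtime bind it to whichever device is present. The sketch below is a minimal, illustrative SYCL vector addition, assuming a SYCL 2020 compiler such as Intel’s DPC++; it is a toy example, not code from GROMACS or SeRC.

```cpp
// Minimal, illustrative SYCL/DPC++ vector addition (a toy sketch, not
// GROMACS or SeRC code). One C++ kernel, submitted to a queue, runs on
// whichever device (CPU, GPU, ...) the runtime selects.
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
  constexpr size_t N = 1024;
  std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);

  sycl::queue q;  // default selector: picks the best available device
  std::cout << "Running on: "
            << q.get_device().get_info<sycl::info::device::name>() << "\n";

  {
    // Buffers manage host<->device data movement automatically.
    sycl::buffer<float> ab(a.data(), sycl::range<1>(N));
    sycl::buffer<float> bb(b.data(), sycl::range<1>(N));
    sycl::buffer<float> cb(c.data(), sycl::range<1>(N));

    q.submit([&](sycl::handler& h) {
      sycl::accessor A(ab, h, sycl::read_only);
      sycl::accessor B(bb, h, sycl::read_only);
      sycl::accessor C(cb, h, sycl::write_only, sycl::no_init);
      h.parallel_for(sycl::range<1>(N), [=](sycl::id<1> i) {
        C[i] = A[i] + B[i];
      });
    });
  }  // buffer destruction copies results back into c

  std::cout << "c[0] = " << c[0] << "\n";
  return 0;
}
```

Compiled with a SYCL-aware compiler (for example, Intel’s icpx with -fsycl), the same source can target a CPU today and a GPU later, which is the portability SeRC cites.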

‘Rocky Year’ – Hyperion’s HPC Market Update: COVID-19 Hits Q1 Revenues, Cloud HPC Boom, Shift in Server Vendor Standings

Instead of its usual mid-year HPC market update presented at the ISC conference in Frankfurt, industry analyst firm Hyperion Research has released its latest findings virtually – including estimates of COVID-19’s impact on the industry, the growth of HPC in public clouds, and a significant shift in the competitive standings of the leading HPC server vendors. Taking 2019 in total, Hyperion sized the HPC server market at a record $13.7 billion in revenue.

NetApp Deploys Iguazio’s Data Science Platform for Optimized Storage Management

NetApp said the service, previously built on Hadoop, also needed its infrastructure modernized “to reduce the complexities of deploying new AI services and the costs of running large-scale analytics. In addition, the shift was needed to enable real-time predictive AI, and to abstract deployment, allowing the technology to run on multi-cloud or on premises seamlessly.”

Podcast: A Shift to Modern C++ Programming Models

In this Code Together podcast, Alice Chan from Intel and Hal Finkel from Argonne National Lab discuss how the industry is uniting to address the need for programming portability and performance across diverse architectures, particularly important with the rise of data-intensive workloads like artificial intelligence and machine learning. “We discuss the important shift to modern C++ programming models, and how the cross-industry oneAPI initiative, and DPC++, bring much-needed portable performance to today’s developers.”
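As a small, hypothetical illustration of what “modern C++” brings even before stepping up to DPC++, the sketch below uses only C++17 standard parallel algorithms, the style of API that libraries such as oneDPL extend to accelerators; it is an illustrative example, not code from the podcast.

```cpp
// Toy example of C++17 standard parallel algorithms (not code from the
// podcast): the execution policy lets the standard library, or a vendor
// backend such as oneDPL/oneTBB, parallelize the loops.
#include <algorithm>
#include <execution>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
  std::vector<double> x(1'000'000);
  std::iota(x.begin(), x.end(), 0.0);

  // Square every element in parallel.
  std::transform(std::execution::par_unseq, x.begin(), x.end(), x.begin(),
                 [](double v) { return v * v; });

  // Parallel reduction of the squared values.
  double sum = std::reduce(std::execution::par_unseq, x.begin(), x.end());
  std::cout << "sum of squares = " << sum << "\n";
  return 0;
}
```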

How to Achieve High-Performance, Scalable and Distributed DNN Training on Modern HPC Systems

DK Panda from Ohio State University gave this talk at the Stanford HPC Conference. “This talk will focus on a range of solutions being carried out in my group to address these challenges. The solutions will include: 1) MPI-driven Deep Learning, 2) Co-designing Deep Learning Stacks with High-Performance MPI, 3) Out-of-core DNN training, and 4) Hybrid (Data and Model) parallelism. Case studies to accelerate DNN training with popular frameworks like TensorFlow, PyTorch, MXNet and Caffe on modern HPC systems will be presented.”
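To make the “MPI-driven Deep Learning” idea concrete, the sketch below shows the core communication step of data-parallel training: each rank computes a local gradient, and an MPI_Allreduce averages it across ranks. It is a simplified, hypothetical example, not code from the talk or from the frameworks it names.

```cpp
// Simplified sketch of the communication step behind data-parallel DNN
// training (a hypothetical example, not code from the talk): each rank
// holds a local gradient; MPI_Allreduce sums them, then every rank
// divides by the number of ranks to get the averaged gradient.
#include <mpi.h>
#include <iostream>
#include <vector>

int main(int argc, char** argv) {
  MPI_Init(&argc, &argv);

  int rank = 0, world = 1;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &world);

  constexpr int kParams = 4;                            // toy "model" size
  std::vector<double> local_grad(kParams, rank + 1.0);  // stand-in gradients
  std::vector<double> avg_grad(kParams, 0.0);

  MPI_Allreduce(local_grad.data(), avg_grad.data(), kParams,
                MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
  for (double& g : avg_grad) g /= world;  // sum -> average

  if (rank == 0)
    std::cout << "averaged gradient[0] = " << avg_grad[0] << "\n";

  MPI_Finalize();
  return 0;
}
```

Launched with, for example, mpirun -np 4 ./allreduce_demo, every rank ends the step with the same averaged gradient to apply to its local copy of the model; production stacks layer GPU-aware, overlapped collectives on top of this same pattern.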