

For the first time, all TOP500 Systems are Petaflop Machines

The latest TOP500 list of the world’s fastest supercomputers is out today, marking a major milestone in the 26-year history of the list. For the first time, all 500 systems deliver a petaflop or more on the Linpack benchmark. “Frontera at TACC is the only new supercomputer in the top 10, which attained its number five ranking by delivering 23.5 petaflops on HPL. The Dell C6420 system is powered by Intel Xeon Platinum 8280 processors.”
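To put the list's milestone in perspective, a petaflop is 10^15 double-precision floating-point operations per second. A quick back-of-envelope sketch in Python, where the laptop figure is an illustrative assumption rather than a benchmark result:

```python
PFLOP = 1e15  # one petaflop = 10**15 floating-point operations per second

frontera_rmax = 23.5 * PFLOP  # Frontera's HPL result, per the TOP500 list

# Hypothetical comparison point: assume a laptop sustains ~100 gigaflops.
laptop_flops = 100e9

speedup = frontera_rmax / laptop_flops
print(f"Frontera is roughly {speedup:,.0f}x the assumed laptop")

# A full day of the assumed laptop's compute, replayed on Frontera:
seconds = 24 * 3600 * laptop_flops / frontera_rmax
print(f"One laptop-day of work takes about {seconds:.2f} s on Frontera")
```

The same arithmetic scales to the bottom of the list: even the number 500 system now sustains at least 10^4 times the assumed laptop rate.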

The Convergence of HPC and AI Workloads Requires Flexibility and Performance

Finding the best solution to meet the requirements for intertwined HPC and AI workloads requires us to look at the overall platform benefits versus the benefits of individual technologies. With exascale on the horizon, the blending of HPC and AI algorithms, and ever-increasing data sets, having an overall robust platform is more important than ever. Intel makes the case for HPC and AI to share a common platform.

The Pending Age of Exascale

In this special guest feature from Scientific Computing World, Robert Roe looks at advances in exascale computing and the impact of AI on HPC development. “There is a lot of co-development; AI and HPC are not mutually exclusive. They both need high-speed interconnects and very fast storage. It just so happens that AI functions better on GPUs. HPC has GPUs in abundance, so they mix very well.”

Software-Defined Visualization with Intel Rendering Framework – No Special Hardware Needed

This sponsored post from Intel explores how the Intel Rendering Framework, which brings together a number of optimized, open source rendering libraries, can deliver better performance at a higher degree of fidelity — without having to invest in extra hardware. By letting the CPU do the work, visualization applications can run anywhere without specialized hardware, and users are seeing better performance than they could get from dedicated graphics hardware and limited memory. 

AMD to Power Exascale Cray System at ORNL

Today AMD announced a new exascale-class supercomputer to be delivered to ORNL in 2021. Built by Cray, the “Frontier” system is expected to deliver more than 1.5 exaFLOPS of processing performance on AMD CPU and GPU processors to accelerate advanced research programs addressing the most complex compute problems. “The combination of a flexible compute infrastructure, scalable HPC and AI software, and the intelligent Slingshot system interconnect will enable Cray customers to undertake a new age of science, discovery and innovation at any scale.”

‘AI on the Fly’: Moving AI Compute and Storage to the Data Source

The impact of AI is just starting to be realized across a broad spectrum of industries. Tim Miller, Vice President Strategic Development at One Stop Systems (OSS), highlights a new approach — ‘AI on the Fly’ — where specialized high-performance accelerated computing resources for deep learning training move to the field near the data source. Moving AI computation to the data is another important step in realizing the full potential of AI.

Intel Addresses the Convergence of AI, Analytic, and Traditional HPC Workloads

HPC is no longer just HPC, but rather a mix of workloads that instantiate the convergence of AI, traditional HPC modeling and simulation, and HPDA (High Performance Data Analytics). Exit the traditional HPC center that just runs modeling and simulation and enter the world that must support the convergence of HPC-AI-HPDA computing, and sometimes with specialized hardware. In this sponsored post, Intel explores how HPC is becoming “more than just HPC.”

Making HPC Cloud a Reality in the Federal Space

Martin Rieger from Penguin Computing gave this talk at the HPC User Forum. “Built on a secure, high-performance bare-metal server platform with supercomputing-grade, non-blocking InfiniBand interconnect infrastructure, Penguin on Demand can handle the most challenging simulation and analytics. But because access is via the cloud (from either a traditional Linux command line interface (CLI) or a secure web portal), you get both instant access and extreme scalability — without having to invest in on-premise infrastructure or the associated operational costs.”

Exascale Computing Project Software Activities

Mike Heroux from Sandia National Labs gave this talk at the HPC User Forum. “The Exascale Computing Project is accelerating delivery of a capable exascale computing ecosystem for breakthroughs in scientific discovery, energy assurance, economic competitiveness, and national security. The goal of the ECP Software Technology focus area is to develop a comprehensive and coherent software stack that will enable application developers to productively write highly parallel applications that can portably target diverse exascale architectures.”

Video: Prepare for Production AI with the HPE AI Data Node

In this video from GTC 2019 in San Jose, Harvey Skinner, Distinguished Technologist, discusses the advent of production AI and how the HPE AI Data Node offers a building block for AI storage. “The HPE AI Data Node is an HPE reference configuration that offers a storage solution providing both the capacity for data and a performance tier that meets the throughput requirements of GPU servers. The HPE Apollo 4200 Gen10 density optimized data server provides the hardware platform for the WekaIO Matrix flash-optimized parallel file system, as well as the Scality RING object store.”
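The throughput requirement the quote alludes to can be sketched with simple arithmetic. The per-GPU ingest rate and server counts below are illustrative assumptions for the sketch, not HPE or WekaIO specifications:

```python
# Rough sizing of the storage bandwidth needed to keep GPU servers fed
# during training. All figures are illustrative assumptions.
gpus_per_server = 8
servers = 4
gb_per_sec_per_gpu = 3.0  # assumed sustained ingest rate per GPU

required_throughput = gpus_per_server * servers * gb_per_sec_per_gpu
print(f"Aggregate read bandwidth needed: {required_throughput:.0f} GB/s")
# The flash-optimized parallel file system tier must sustain this rate,
# while the object store behind it holds bulk capacity at lower bandwidth.
```

This is the two-tier split the reference configuration describes: a fast parallel file system sized by aggregate GPU throughput, backed by an object store sized by total data capacity.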