
The Pending Age of Exascale

In this special guest feature from Scientific Computing World, Robert Roe looks at advances in exascale computing and the impact of AI on HPC development. “There is a lot of co-development; AI and HPC are not mutually exclusive. They both need high-speed interconnects and very fast storage. It just so happens that AI functions better on GPUs. HPC has GPUs in abundance, so they mix very well.”

Software-Defined Visualization with Intel Rendering Framework – No Special Hardware Needed

This sponsored post from Intel explores how the Intel Rendering Framework, which brings together a number of optimized, open source rendering libraries, can deliver better performance at a higher degree of fidelity — without requiring an investment in extra hardware. By letting the CPU do the work, visualization applications can run anywhere without specialized hardware, and users often see better performance than dedicated graphics hardware with limited memory can provide.

AMD to Power Exascale Cray System at ORNL

Today AMD announced a new exascale-class supercomputer to be delivered to ORNL in 2021. Built by Cray, the “Frontier” system is expected to deliver more than 1.5 exaFLOPS of processing performance from AMD CPUs and GPUs to accelerate advanced research programs addressing the most complex compute problems. “The combination of a flexible compute infrastructure, scalable HPC and AI software, and the intelligent Slingshot system interconnect will enable Cray customers to undertake a new age of science, discovery and innovation at any scale.”

‘AI on the Fly’: Moving AI Compute and Storage to the Data Source

The impact of AI is just starting to be realized across a broad spectrum of industries. Tim Miller, Vice President Strategic Development at One Stop Systems (OSS), highlights a new approach — ‘AI on the Fly’ — where specialized high-performance accelerated computing resources for deep learning training move to the field near the data source. Moving AI computation to the data is another important step in realizing the full potential of AI.

Intel Addresses the Convergence of AI, Analytic, and Traditional HPC Workloads

HPC is no longer just HPC, but rather a mix of workloads that instantiate the convergence of AI, traditional HPC modeling and simulation, and HPDA (High Performance Data Analytics). Exit the traditional HPC center that just runs modeling and simulation and enter the world that must support the convergence of HPC-AI-HPDA computing, and sometimes with specialized hardware. In this sponsored post, Intel explores how HPC is becoming “more than just HPC.”

Making HPC Cloud a Reality in the Federal Space

Martin Rieger from Penguin Computing gave this talk at the HPC User Forum. “Built on a secure, high-performance bare-metal server platform with supercomputing-grade, non-blocking InfiniBand interconnect infrastructure, Penguin on Demand can handle the most challenging simulation and analytics. But, because of access via the cloud — from either a traditional Linux command line interface (CLI) or a secure web portal — you get both instant access and extreme scalability, without having to invest in on-premises infrastructure or the associated operational costs.”

Exascale Computing Project Software Activities

Mike Heroux from Sandia National Labs gave this talk at the HPC User Forum. “The Exascale Computing Project is accelerating delivery of a capable exascale computing ecosystem for breakthroughs in scientific discovery, energy assurance, economic competitiveness, and national security. The goal of the ECP Software Technology focus area is to develop a comprehensive and coherent software stack that will enable application developers to productively write highly parallel applications that can portably target diverse exascale architectures.”

Video: Prepare for Production AI with the HPE AI Data Node

In this video from GTC 2019 in San Jose, Harvey Skinner, Distinguished Technologist, discusses the advent of production AI and how the HPE AI Data Node offers a building block for AI storage. “The HPE AI Data Node is an HPE reference configuration which offers a storage solution that provides both the capacity for data and a performance tier that meets the throughput requirements of GPU servers. The HPE Apollo 4200 Gen10 density optimized data server provides the hardware platform for the WekaIO Matrix flash-optimized parallel file system, as well as the Scality RING object store.”

2019: The Year of PCI Express 4.0

Computer systems are about to get a whole lot faster. This year, starting at the high end of the market, a transition will begin toward systems based on PCI Express 4.0. The interconnect speed will double to 64 GB/s of bidirectional bandwidth on a 16-lane connection. Tim Miller, Vice President Strategic Development for One Stop Systems, explores the expected speed and innovation stemming from the introduction of PCI Express 4.0.
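The doubling follows directly from the per-lane signaling rate. A back-of-the-envelope sketch (the per-lane rates and 128b/130b encoding are standard PCIe figures, not taken from the article):

```python
# Estimate usable PCIe link bandwidth from the per-lane signaling rate.
# PCIe 3.0 and 4.0 both use 128b/130b encoding, so usable bytes/s is
# (GT/s per lane) * lanes * (128/130) / 8 bits per byte.

def pcie_bandwidth_gb_s(gt_per_s: float, lanes: int) -> float:
    """Approximate per-direction usable bandwidth in GB/s."""
    encoding_efficiency = 128 / 130  # 128b/130b line code overhead
    return gt_per_s * lanes * encoding_efficiency / 8

gen3_x16 = pcie_bandwidth_gb_s(8.0, 16)    # PCIe 3.0: 8 GT/s per lane
gen4_x16 = pcie_bandwidth_gb_s(16.0, 16)   # PCIe 4.0: 16 GT/s per lane

print(f"PCIe 3.0 x16: {gen3_x16:.1f} GB/s per direction")
print(f"PCIe 4.0 x16: {gen4_x16:.1f} GB/s per direction, "
      f"about {2 * gen4_x16:.0f} GB/s bidirectional")
```

With encoding overhead included, a PCIe 4.0 x16 link delivers roughly 31.5 GB/s in each direction, i.e. about 63 GB/s bidirectional; the commonly quoted 64 GB/s figure is the raw rate before the small 128b/130b overhead.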

NVIDIA to Purchase Mellanox for $6.9 Billion

Today NVIDIA announced plans to acquire Mellanox for approximately $6.9 billion. The acquisition will unite two of the world’s leading companies in HPC. Together, NVIDIA’s computing platform and Mellanox’s interconnects power over 250 of the world’s TOP500 supercomputers and have as customers every major cloud service provider and computer maker.