

Video: The Human Side of AI

In this video from the GPU Technology Conference, Dan Olds from OrionX discusses the human impact of AI with Greg Schmidt from HPE. The industry buzz about artificial intelligence and deep learning typically focuses on hardware, software, frameworks, performance, and the lofty business plans that will be enabled by this new technology. What we don’t […]

Video: Prepare for Production AI with the HPE AI Data Node

In this video from GTC 2019 in San Jose, Harvey Skinner, Distinguished Technologist, discusses the advent of production AI and how the HPE AI Data Node offers a building block for AI storage. “The HPE AI Data Node is an HPE reference configuration which offers a storage solution that provides both the capacity for data and a performance tier that meets the throughput requirements of GPU servers. The HPE Apollo 4200 Gen10 density optimized data server provides the hardware platform for the WekaIO Matrix flash-optimized parallel file system, as well as the Scality RING object store.”
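As a rough illustration of the sizing reasoning behind a two-tier design like this, here is a minimal back-of-envelope sketch in Python. The GPU-server count, per-server throughput, and capacity figures are hypothetical placeholders for illustration, not published HPE or WekaIO specifications.

```python
# Hypothetical back-of-envelope sizing for an AI storage building block:
# a flash performance tier that must keep GPU servers fed, backed by a
# larger object-store capacity tier. All numbers are illustrative assumptions.

GPU_SERVERS = 8                  # assumed number of GPU servers to feed
GB_PER_SEC_PER_SERVER = 3.0      # assumed sustained read rate each server needs
CAPACITY_TIER_TB = 500           # assumed size of the object-store capacity tier

required_throughput_gbps = GPU_SERVERS * GB_PER_SEC_PER_SERVER

print(f"Performance (flash) tier must sustain ~{required_throughput_gbps:.1f} GB/s")
print(f"Capacity (object-store) tier holds ~{CAPACITY_TIER_TB} TB of training data")
```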

AI Critical Measures: Time to Value and Insights

AI is a game changer for industries today, but achieving AI success hinges on two critical factors: time to value and time to insight. Time to value is the metric that measures how long it takes to realize the value of a product, solution or offering. Time to insight is a key measure of how long it takes to gain actionable insights from its use.
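To make the two metrics concrete, here is a minimal sketch in Python that treats each one as a simple elapsed-time calculation between project milestones; the milestone dates are invented purely for illustration.

```python
from datetime import date

# Hypothetical project milestones (illustrative dates only).
solution_acquired = date(2019, 1, 15)         # solution purchased/deployed start
solution_delivering_value = date(2019, 3, 1)  # in production, delivering value
first_actionable_insight = date(2019, 3, 20)  # first insight acted on by the business

time_to_value = (solution_delivering_value - solution_acquired).days
time_to_insight = (first_actionable_insight - solution_acquired).days

print(f"Time to value:   {time_to_value} days")
print(f"Time to insight: {time_to_insight} days")
```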

Video: Arm in HPC

Brent Gorda from Arm gave this talk at the Rice Oil & Gas Conference. “With the recent Astra system at Sandia Lab (#203 on the Top500) and HPE Catalyst project in the UK, Arm-based architectures are arriving in HPC environments. Several partners have announced or will soon announce new silicon and projects, each of which offers something different and compelling for our community. Brent will describe the driving factors and how these solutions are changing the landscape for HPC.”

Big Compute Podcast Looks at New Architectures for HPC

In this Big Compute podcast, host Gabriel Broner from Rescale interviews Mike Woodacre, HPE Fellow, to discuss the shift from CPUs to an emerging diversity of architectures. They discuss the evolution of CPUs, the advent of GPUs with increasing data parallelism, memory-driven computing, and the potential benefits of a cloud environment with access to multiple architectures.
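As a loose illustration of the data parallelism referred to above, the sketch below contrasts an element-by-element loop with the same operation expressed over a whole array at once; it is a conceptual analogy in Python/NumPy, not anything taken from the podcast or from HPE.

```python
import numpy as np

data = np.random.rand(1_000_000)

# Serial-style view: apply the operation one element at a time.
squared_loop = [x * x for x in data]

# Data-parallel view: express the operation over the whole array, which
# vectorized CPUs and GPU-class accelerators can apply to many elements at once.
squared_vectorized = data * data

# Both formulations produce the same result.
assert np.allclose(squared_loop, squared_vectorized)
```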

ASTRA: A Large Scale ARM64 HPC Deployment

Michael Aguilar from Sandia National Laboratories gave this talk at the Stanford HPC Conference. “This talk will discuss the Sandia National Laboratories Astra HPC system as a mechanism for developing and evaluating large-scale deployments of alternative and advanced computational architectures. As part of the Vanguard program, the new Arm-based system will be used by the National Nuclear Security Administration (NNSA) to run advanced modeling and simulation workloads for addressing areas such as national security, energy and science.”

Podcast: What is an AI Supercomputer?

In this podcast, the Radio Free HPC team asks whether a supercomputer can or cannot be an “AI Supercomputer.” The question came up after HPE announced a new AI system called Jean Zay that will double the capacity of French supercomputing. “So what are the differences between a traditional super and an AI super? According to Dan, it mostly comes down to how many GPUs the system is configured with, while Shahin and Henry think it has something to do with the datasets.”

NVIDIA CEO Jensen Huang to Keynote World’s Premier AI Conference

NVIDIA founder and CEO Jensen Huang will deliver the opening keynote address at the 10th annual GPU Technology Conference, being held March 17-21, in San Jose, Calif. “If you’re interested in AI, there’s no better place in the world to connect to a broad spectrum of developers and decision makers than GTC Silicon Valley,” said Greg Estes, vice president of developer programs at NVIDIA. “This event has grown tenfold in 10 years for a reason — it’s where experts from academia, Fortune 500 enterprises and the public sector share their latest work furthering AI and other advanced technologies.”

HPE Scalable Storage for Lustre: The Middle Way

Lustre is a widely used parallel file system in the High Performance Computing (HPC) market. It offers the performance required for HPC workloads thanks to its parallel design, flexibility, and scalability. This sponsored post explores HPE scalable storage and the Lustre parallel file system, and outlines a ‘middle ground’ available via solutions that combine Community Lustre with a qualified hardware solution.
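For readers unfamiliar with how Lustre’s parallel design is exposed to applications, the hedged sketch below sets a stripe layout with the standard `lfs` utility, driven from Python. The mount point, stripe count, and stripe size are example values only, and the commands assume a system with a Lustre client installed.

```python
import subprocess

# Example only: the path and stripe settings below are placeholders.
target_dir = "/mnt/lustre/training_data"

# Stripe new files in this directory across 4 storage targets (OSTs) in
# 1 MiB chunks, so large reads and writes are served by several targets
# in parallel -- the source of Lustre's parallel-I/O performance.
subprocess.run(["lfs", "setstripe", "-c", "4", "-S", "1M", target_dir], check=True)

# Inspect the resulting striping layout.
subprocess.run(["lfs", "getstripe", target_dir], check=True)
```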

Choice Comes to HPC: A Year in Processor Development

In this special guest feature, Robert Roe from Scientific Computing World writes that a whole new set of processor choices could shake up high performance computing. “While Intel is undoubtedly the king of the hill when it comes to HPC processors – with more than 90 per cent of the Top500 using Intel-based technologies – the advances made by other companies, such as AMD, the re-introduction of IBM and the maturing Arm ecosystem are all factors that mean that Intel faces stiffer competition than it has for a decade.”