Video: A Fast, Scalable HPC Engine for Data Ingest

David Wade from Integral Engineering gave this talk at the Stanford HPC Conference. “In this talk, a design is sketched for an engine to massively ingest data from the IoT into a cluster for analysis, storage and transformation, using COTS High Performance Computing techniques in hardware and software.”

Scalable Machine Learning: The Role of Stratified Data Sharding

Srinivasan Parthasarathy from Ohio State University gave this talk at the Stanford HPC Conference. “With the increasing popularity of structured data stores, social networks and Web 2.0 and 3.0 applications, complex data formats, such as trees and graphs, are becoming ubiquitous. I will discuss a critical element at the heart of this challenge: the sharding, placement, storage and access of such tera- and peta-scale data.”
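For readers new to the topic, the baseline idea behind sharding is to split a large dataset across nodes using a stable function of each record’s key. The short Python sketch below shows hash-based shard assignment for graph edges; it illustrates only that baseline, not the stratified, structure-aware placement the talk covers, and the key names and shard count are illustrative assumptions.

import hashlib

NUM_SHARDS = 8

def shard_for(key: str, num_shards: int = NUM_SHARDS) -> int:
    """Map a record key to a shard id with a stable hash."""
    digest = hashlib.sha1(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % num_shards

# Place graph edges by their source vertex so a vertex's neighborhood
# stays on a single, predictable shard.
edges = [("u1", "u2"), ("u1", "u3"), ("u4", "u2")]
placement = {}
for src, dst in edges:
    placement.setdefault(shard_for(src), []).append((src, dst))
print(placement)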

Accelerating Machine Learning on VMware vSphere with NVIDIA GPUs

Mohan Potheri from VMware gave this talk at the Stanford HPC Conference. “This session introduces machine learning on vSphere to the attendee and explains when and why GPUs are important for them. Basic machine learning with Apache Spark is demonstrated. GPUs can be effectively shared in vSphere environments, and the various methods of sharing are addressed here.”
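For context on what “basic machine learning with Apache Spark” looks like in practice, here is a minimal PySpark sketch that trains a logistic regression model on a toy DataFrame. It is illustrative only, not the demo from the session, and the column names and sample values are assumptions.

from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("basic-ml-sketch").getOrCreate()

# Toy training data: two numeric features and a binary label.
df = spark.createDataFrame(
    [(0.0, 1.1, 0.0), (1.0, 0.2, 1.0), (0.5, 0.9, 0.0), (1.3, 0.1, 1.0)],
    ["f1", "f2", "label"],
)

# Assemble the feature columns into a single vector and fit the model.
features = VectorAssembler(inputCols=["f1", "f2"], outputCol="features").transform(df)
model = LogisticRegression(featuresCol="features", labelCol="label").fit(features)
print(model.coefficients)

spark.stop()

Whether a job like this benefits from GPUs depends on the workload, which is the “when and why” question the session addresses.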

How to Design Scalable HPC, Deep Learning and Cloud Middleware for Exascale Systems

DK Panda from Ohio State University gave this talk at the Stanford HPC Conference. “This talk will focus on challenges in designing HPC, Deep Learning, and HPC Cloud middleware for Exascale systems with millions of processors and accelerators. For the HPC domain, we will discuss the challenges in designing runtime environments for MPI+X (PGAS – OpenSHMEM/UPC/CAF/UPC++, OpenMP, and CUDA) programming models, taking into account support for multi-core systems (Xeon, OpenPower, and ARM), high-performance networks, GPGPUs (including GPUDirect RDMA), and energy-awareness.”
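As a concrete, heavily simplified illustration of the MPI+X model referred to above, the sketch below uses mpi4py for the MPI layer and NumPy for on-node data parallelism: each rank computes a partial sum over its local slice, and an MPI allreduce combines the results. It is a generic example, not the runtime designs discussed in the talk, and the array sizes are arbitrary.

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# "X" here is on-node data parallelism via NumPy's vectorized kernels;
# in practice it could be OpenMP threads or CUDA kernels instead.
n_local = 1_000_000
local = np.full(n_local, float(rank), dtype=np.float64)
local_sum = local.sum()

# MPI handles the inter-node reduction.
total = comm.allreduce(local_sum, op=MPI.SUM)
if rank == 0:
    print(f"global sum across {size} ranks: {total}")

Run under an MPI launcher such as mpirun, the same script scales from a laptop to a cluster; the talk’s focus is on making the underlying runtime handle this efficiently at exascale.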

Spack – A Package Manager for HPC

Todd Gamblin from LLNL gave this talk at the Stanford HPC Conference. “Spack is a package manager for cluster users, developers and administrators, and it is rapidly gaining popularity in the HPC community. Like other HPC package managers, Spack was designed to build packages from source. This talk will introduce some of the open infrastructure for distributing packages, the challenges of providing binaries for a large package ecosystem, and what we’re doing to address those problems.”

Pioneering and Democratizing Scalable HPC+AI at the Pittsburgh Supercomputing Center

Nick Nystrom from the Pittsburgh Supercomputing Center gave this talk at the Stanford HPC Conference. “To address the demand for scalable AI, PSC recently introduced Bridges-AI, which adds transformative new AI capability. In this presentation, we share our vision in designing HPC+AI systems at PSC and highlight some of the exciting research breakthroughs they are enabling.”

Video: Sierra – Science Unleashed

Rob Neely from LLNL gave this talk at the Stanford HPC Conference. “This talk will give an overview of the Sierra supercomputer and some of the early science results it has enabled. Sierra, an IBM system recently deployed at Lawrence Livermore National Laboratory, harnesses the power of over 17,000 NVIDIA Volta GPUs and is currently ranked as the #2 system on the Top500. Before being turned over for use in the classified mission, Sierra spent months in an “open science campaign” where we got an early glimpse of some of the truly game-changing science this system will unleash – selected results of which will be presented.”

Innovative Use of HPC in the Cloud for AI, CFD, & LifeScience

Ebru Taylak from the UberCloud gave this talk at the Stanford HPC Conference. “A scalable platform such as the cloud provides more accurate results while reducing solution times. In this presentation, we will demonstrate recent examples of innovative use cases of HPC in the Cloud, such as “Personalized Non-invasive Clinical Treatment of Schizophrenia and Parkinson’s” and “Deep Learning for Steady State Fluid Flow Prediction”. We will explore the challenges of the specific problems, demonstrate how HPC in the Cloud helped overcome these challenges, look at the benefits, and share the lessons learned.”

Architecting the Right System for Your AI Application—without the Vendor Fluff

Brett Newman from Microway gave this talk at the Stanford HPC Conference. “Figuring out how to map your dataset or algorithm to the optimal hardware design is one of the hardest tasks in HPC. We’ll review what helps steer the selection of one system architecture from another for AI applications. Plus the right questions to ask of your collaborators—and a hardware vendor. Honest technical advice, no fluff.”

ASTRA: A Large Scale ARM64 HPC Deployment

Michael Aguilar from Sandia National Laboratories gave this talk at the Stanford HPC Conference. “This talk will discuss the Sandia National Laboratories Astra HPC system as a mechanism for developing and evaluating large-scale deployments of alternative and advanced computational architectures. As part of the Vanguard program, the new Arm-based system will be used by the National Nuclear Security Administration (NNSA) to run advanced modeling and simulation workloads for addressing areas such as national security, energy and science.”