Registration Opens for Stanford HPC Conference Virtual Event

Registration is now open for the Stanford HPC Conference. The two-day ‘condensed’ agenda combines thought leadership and practical insights on HPC, AI, Data Science and much more. The virtual event takes place April 21-22. “The Stanford High Performance Computing Center, in collaboration with the HPC-AI Advisory Council, invites you to join the annual Stanford Conference as an entirely virtual experience.”

Stanford Student Program Gives Supercomputers a Second Life

A novel program at Stanford is finding a second life for used HPC clusters, providing much-needed computational resources for research while giving undergraduate students a chance to learn valuable career skills. To learn more, we caught up with Dellarontay Readus from the Stanford High Performance Computing Center (HPCC).

Video: A Fast, Scalable HPC Engine for Data Ingest

David Wade from Integral Engineering gave this talk at the Stanford HPC Conference. “In this talk, a design is sketched for an engine to ingest data from the IoT at massive scale into a cluster for analysis, storage, and transformation, using COTS hardware and software methods drawn from High Performance Computing.”
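
For illustration, here is a minimal Python sketch of the batching pattern such an ingest engine typically rests on: producers push readings onto a shared queue, and a worker drains it in fixed-size batches to keep per-message overhead away from the storage layer. The queue size, the store_batch sink, and the sentinel shutdown are assumptions made for the sketch, not details of the design presented in the talk.

```python
import queue
import threading

ingest_queue = queue.Queue(maxsize=100_000)
STOP = object()  # sentinel used to shut the worker down cleanly

def store_batch(batch):
    # Stand-in for the real transform/storage step (cluster FS, DB, ...).
    print(f"stored {len(batch)} records")

def worker(batch_size=500):
    """Drain the queue, flushing fixed-size batches to the sink."""
    batch = []
    while True:
        item = ingest_queue.get()
        if item is STOP:
            break
        batch.append(item)
        if len(batch) >= batch_size:
            store_batch(batch)
            batch = []
    if batch:              # flush whatever is left on shutdown
        store_batch(batch)

t = threading.Thread(target=worker)
t.start()
for i in range(2_000):     # simulate IoT devices posting readings
    ingest_queue.put({"device": i % 16, "reading": i * 0.1})
ingest_queue.put(STOP)
t.join()
```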

Scalable Machine Learning: The Role of Stratified Data Sharding

Srinivasan Parthasarathy from Ohio State University gave this talk at the Stanford HPC Conference. “With the increasing popularity of structured data stores, social networks and Web 2.0 and 3.0 applications, complex data formats, such as trees and graphs, are becoming ubiquitous. I will discuss a critical element at the heart of this challenge: the sharding, placement, storage and access of such tera- and peta-scale data.”
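
As a toy illustration of sharding graph data, the Python sketch below hash-partitions edges by source vertex so that each vertex’s out-edges land on the same shard; a stable checksum is used so placement is reproducible across runs. This is not the stratified scheme from the talk, which additionally weighs locality and access patterns that plain hashing ignores.

```python
import zlib
from collections import defaultdict

def shard_edges(edges, num_shards):
    """Assign each (src, dst) edge to a shard by hashing its source vertex."""
    shards = defaultdict(list)
    for src, dst in edges:
        # zlib.crc32 is deterministic across runs, unlike Python's str hash.
        shard_id = zlib.crc32(str(src).encode()) % num_shards
        shards[shard_id].append((src, dst))
    return shards

edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d")]
for shard_id, part in sorted(shard_edges(edges, num_shards=2).items()):
    print(f"shard {shard_id}: {part}")
```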

Accelerating Machine Learning on VMware vSphere with NVIDIA GPUs

Mohan Potheri from VMware gave this talk at the Stanford HPC Conference. “This session introduces machine learning on vSphere to the attendee and explains when and why GPUs are important for these workloads. Basic machine learning with Apache Spark is demonstrated. GPUs can be effectively shared in vSphere environments, and the various methods of sharing are addressed here.”
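
To give a rough idea of what “basic machine learning with Apache Spark” looks like, here is a minimal PySpark sketch that trains a logistic regression on a tiny in-memory dataset. The session’s actual demo is not reproduced here; the app name and the data are invented for the example.

```python
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.linalg import Vectors
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("vsphere-ml-demo").getOrCreate()

# Tiny labeled dataset: (label, feature vector) rows.
train = spark.createDataFrame(
    [(0.0, Vectors.dense(0.0, 1.1)),
     (1.0, Vectors.dense(2.0, 1.0)),
     (0.0, Vectors.dense(0.1, 1.3)),
     (1.0, Vectors.dense(2.2, 0.8))],
    ["label", "features"],
)

# Fit a logistic regression and score the training rows.
model = LogisticRegression(maxIter=10).fit(train)
model.transform(train).select("label", "prediction").show()
spark.stop()
```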

How to Design Scalable HPC, Deep Learning and Cloud Middleware for Exascale Systems

DK Panda from Ohio State University gave this talk at the Stanford HPC Conference. “This talk will focus on challenges in designing HPC, Deep Learning, and HPC Cloud middleware for Exascale systems with millions of processors and accelerators. For the HPC domain, we will discuss the challenges in designing runtime environments for MPI+X (PGAS – OpenSHMEM/UPC/CAF/UPC++, OpenMP, and CUDA) programming models, taking into account support for multi-core systems (Xeon, OpenPower, and ARM), high-performance networks, GPGPUs (including GPUDirect RDMA), and energy-awareness.”
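
For a concrete taste of the “MPI” half of MPI+X, the sketch below performs a global sum with mpi4py, the kind of collective operation a runtime such as MVAPICH2 (from Panda’s group) has to make fast across enormous numbers of endpoints. The script name in the run command is hypothetical.

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

local = rank + 1                            # each rank contributes a value
total = comm.allreduce(local, op=MPI.SUM)   # collective sum across all ranks
print(f"rank {rank}: global sum = {total}")

# Run with, e.g.:  mpiexec -n 4 python allreduce_demo.py
```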

Spack – A Package Manager for HPC

Todd Gamblin from LLNL gave this talk at the Stanford HPC Conference. “Spack is a package manager for cluster users, developers, and administrators. Like other HPC package managers, Spack was designed to build packages from source, and it is rapidly gaining popularity in the HPC community. This talk will introduce some of the open infrastructure for distributing packages, the challenges of providing binaries for a large package ecosystem, and what we’re doing to address these problems.”
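
To make the build-from-source model concrete, below is a hypothetical Spack package recipe (a package.py file). Recipes are Python classes that declare versions, checksums, variants, and dependencies, and Spack drives the actual build. The package name, URL, and checksum here are placeholders, and the import line has varied across Spack releases.

```python
# Hypothetical Spack recipe; names, URL, and checksum are placeholders.
from spack.package import *  # older Spack releases used: from spack import *

class Mylib(AutotoolsPackage):
    """Example library packaged for Spack (illustrative only)."""

    homepage = "https://example.com/mylib"
    url = "https://example.com/mylib-1.0.0.tar.gz"

    # Placeholder checksum; Spack verifies downloaded sources against it.
    version("1.0.0", sha256="0" * 64)

    variant("mpi", default=True, description="Build with MPI support")
    depends_on("mpi", when="+mpi")

    def configure_args(self):
        # Translate the variant into ./configure flags for the build.
        return ["--enable-mpi"] if "+mpi" in self.spec else ["--disable-mpi"]
```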

Pioneering and Democratizing Scalable HPC+AI at the Pittsburgh Supercomputing Center

Nick Nystrom from the Pittsburgh Supercomputing Center gave this talk at the Stanford HPC Conference. “To address the demand for scalable AI, PSC recently introduced Bridges-AI, which adds transformative new AI capability. In this presentation, we share our vision in designing HPC+AI systems at PSC and highlight some of the exciting research breakthroughs they are enabling.”

Video: Sierra – Science Unleashed

Rob Neely from LLNL gave this talk at the Stanford HPC Conference. “This talk will give an overview of the Sierra supercomputer and some of the early science results it has enabled. Sierra is an IBM system recently deployed at Lawrence Livermore National Laboratory that harnesses the power of over 17,000 NVIDIA Volta GPUs and is currently ranked as the #2 system on the Top500. Before being turned over for use in the classified mission, Sierra spent months in an ‘open science campaign’ where we got an early glimpse at some of the truly game-changing science this system will unleash – selected results of which will be presented.”

Innovative Use of HPC in the Cloud for AI, CFD, and Life Science

Ebru Taylak from the UberCloud gave this talk at the Stanford HPC Conference. “A scalable platform such as the cloud provides more accurate results while reducing solution times. In this presentation, we will demonstrate recent examples of innovative use cases of HPC in the Cloud, such as ‘Personalized Non-invasive Clinical Treatment of Schizophrenia and Parkinson’s’ and ‘Deep Learning for Steady State Fluid Flow Prediction’. We will explore the challenges of these specific problems, demonstrate how HPC in the Cloud helped overcome them, look at the benefits, and share the lessons learned.”