Architecting the Right System for Your AI Application—without the Vendor Fluff

Brett Newman from Microway gave this talk at the Stanford HPC Conference. “Figuring out how to map your dataset or algorithm to the optimal hardware design is one of the hardest tasks in HPC. We’ll review what helps steer the selection of one system architecture over another for AI applications. Plus the right questions to ask of your collaborators—and a hardware vendor. Honest technical advice, no fluff.”

ASTRA: A Large Scale ARM64 HPC Deployment

Michael Aguilar from Sandia National Laboratories gave this talk at the Stanford HPC Conference. “This talk will discuss the Sandia National Laboratories Astra HPC system as a mechanism for developing and evaluating large-scale deployments of alternative and advanced computational architectures. As part of the Vanguard program, the new Arm-based system will be used by the National Nuclear Security Administration (NNSA) to run advanced modeling and simulation workloads for addressing areas such as national security, energy and science.”

The New HPC

Addison Snell from Intersect360 Research gave this talk at the Stanford HPC Conference. “Intersect360 Research returns with an annual deep dive into the trends, technologies and usage models that will be propelling the HPC community through 2017 and beyond. Emerging areas of focus and opportunities to expand will be explored, along with insightful observations needed to support measurably positive decision making within your operations.”

Singularity: Container Workflows for Compute

Greg Kurtzer from Sylabs gave this talk at the Stanford HPC Conference. “Singularity is a widely adopted container technology specifically designed for compute-based workflows, making application and environment reproducibility, portability and security a reality for HPC and AI researchers and resources. Here we will present a high-level overview of Singularity and demonstrate how to integrate Singularity containers into existing application and resource workflows, as well as describe some new trending models that we have been seeing.”

Video: Container Mythbusters

Michael Jennings from LANL gave this talk at the Stanford HPC Conference. “As containers initially grew to prominence within the greater Linux community, particularly in the hyperscale/cloud and web application space, there was very little information out there about using Linux containers for HPC at all. In this session, we’ll confront this problem head-on by clearing up some common misconceptions about containers, busting some myths born out of misunderstanding and marketing hype alike, and learning how to safely (and securely!) navigate the Linux container landscape with an eye toward what the future holds for containers in HPC and how we can all get there together!”

Video: Introduction to Intel Optane Data Center Persistent Memory

In this video from the 2019 Stanford HPC Conference, Usha Upadhyayula & Tom Krueger from Intel present: Introduction to Intel Optane Data Center Persistent Memory. For decades, developers had to balance data in memory for performance with data in storage for persistence. The emergence of data-intensive applications in various market segments is stretching the existing […]
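
The memory-versus-storage tradeoff described above is what persistent memory is meant to collapse: data becomes durable while remaining byte-addressable through ordinary loads and stores. As a rough, hedged illustration of that programming model, the sketch below uses a plain POSIX memory-mapped file (with a hypothetical path) as a stand-in; real Optane App Direct usage would typically go through a persistent-memory-aware library such as PMDK rather than mmap/msync.

```cpp
// Minimal sketch: byte-addressable persistence via a memory-mapped file.
// This only approximates persistent memory; production App Direct code would
// use a PMEM-aware library (e.g. PMDK) instead of plain mmap/msync.
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstring>
#include <cstdio>

int main() {
    const size_t len = 4096;
    int fd = open("/tmp/pmem_demo.bin", O_CREAT | O_RDWR, 0644);  // hypothetical path
    if (fd < 0 || ftruncate(fd, len) != 0) { perror("open/ftruncate"); return 1; }

    char* buf = static_cast<char*>(
        mmap(nullptr, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    // Data is updated with ordinary stores rather than read()/write() I/O calls...
    std::strcpy(buf, "persisted via loads and stores");
    // ...and made durable with an explicit flush (analogous to a pmem persist).
    msync(buf, len, MS_SYNC);

    std::printf("%s\n", buf);
    munmap(buf, len);
    close(fd);
    return 0;
}
```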

Call for Participation: Stanford HPC Conference in February

Today the HPC-AI Advisory Council announced its Call for Sponsors and Session Proposals for the annual Stanford Conference. The event takes place February 14-15, 2019. “Breakthrough discoveries, research, new technologies and innovation all rely on each other,” said Gilad Shainer, HPC-AI Advisory Council chairman. “AI and HPC domains are dominating continuous change, and the Stanford Conference is one of the few forums where attendees can rise above the din of hype and learn about all of the above, all in one place, and openly share best practices that are key to furthering their efforts.”

Porting Scientific Research Codes to GPUs with CUDA Fortran

Josh Romero from NVIDIA gave this talk at the Stanford HPC Conference. “In this session, we intend to provide guidance and techniques for porting scientific research codes to NVIDIA GPUs using CUDA Fortran. The GPU porting effort of an incompressible fluid dynamics solver using the immersed boundary method will be described. Several examples from this program will be used to illustrate available features in CUDA Fortran, from simple directive-based programming using CUF kernels to lower level programming using CUDA kernels.”
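
The talk’s examples are written in CUDA Fortran; as a rough analog of the explicit-kernel style it contrasts with directive-based CUF kernels, the sketch below shows the same pattern in CUDA C++. The saxpy kernel is purely illustrative and is not taken from the fluid dynamics solver described in the talk.

```cpp
// Rough CUDA C++ analog of the explicit-kernel porting style discussed for CUDA Fortran.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];   // each thread updates one element
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    // Managed (unified) memory keeps host/device data movement implicit.
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);   // one thread per element
    cudaDeviceSynchronize();

    std::printf("y[0] = %f (expect 4.0)\n", y[0]);
    cudaFree(x); cudaFree(y);
    return 0;
}
```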

Accelerating HPC Applications on NVIDIA GPUs with OpenACC

Doug Miles from NVIDIA gave this talk at the Stanford HPC Conference. “This talk will include an introduction to the OpenACC programming model, provide examples of its use in a number of production applications, explain how OpenACC and CUDA Unified Memory working together can dramatically simplify GPU programming, and close with a few thoughts on OpenACC future directions.”
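
As a concrete illustration of the directive style the talk introduces, here is a minimal OpenACC loop in C++; it is a generic vector update, not one of the production applications mentioned, and it assumes an OpenACC-capable compiler such as nvc++ with -acc. The explicit data clauses show the default behavior; when CUDA Unified Memory is enabled (for example with -gpu=managed), they can typically be dropped, which is the simplification the talk alludes to.

```cpp
// Minimal OpenACC sketch: the pragma asks the compiler to offload the loop.
// With CUDA Unified Memory (e.g. nvc++ -acc -gpu=managed) the copyin/copy
// data clauses below could be omitted entirely.
#include <vector>
#include <cstdio>

int main() {
    const int n = 1 << 20;
    std::vector<float> x(n, 1.0f), y(n, 2.0f);
    float* xp = x.data();
    float* yp = y.data();

    #pragma acc parallel loop copyin(xp[0:n]) copy(yp[0:n])
    for (int i = 0; i < n; ++i)
        yp[i] = 2.0f * xp[i] + yp[i];   // simple saxpy-style update

    std::printf("y[0] = %f (expect 4.0)\n", y[0]);
    return 0;
}
```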

Deploy Serverless TensorFlow Models using Kubernetes, OpenFaaS, GPUs and PipelineAI

Chris Fregly from PipelineAI gave this talk at the Stanford HPC Conference. “Applying my Netflix experience to a real-world problem in the ML and AI world, I will demonstrate a full-featured, open-source, end-to-end TensorFlow Model Training and Deployment System using the latest advancements with TensorFlow, Kubernetes, OpenFaaS, GPUs, and PipelineAI.”