First Experiences with Parallel Application Development in Fortran 2018

Damian Rouson from the Sourcery Institute gave this talk at the Stanford HPC Conference. “This talk will present performance and scalability results of the mini-app running on several platforms using up to 98,000 cores. A second application involves the use of teams of images (processes) that execute independently for ensembles of computational hydrology simulations using WRF-Hydro, the hydrological component of the Weather Research and Forecasting model also developed at NCAR. Early experiences with portability and programmability of Fortran 2018 will also be discussed.”

Linac Coherent Light Source (LCLS-II): Data Transfer Requirements

Les Cottrell from SLAC gave this talk at the Stanford HPC Conference. “Scientists use LCLS to take crisp pictures of atomic motions, watch chemical reactions unfold, probe the properties of materials and explore fundamental processes in living things. The talk will introduce LCLS and LCLS-II with a short video, discuss its data reduction, collection, data transfer needs and current progress in meeting these needs.”

Outlook on Hot Technologies

Shahin Khan from OrionX gave this talk at the Stanford HPC Conference. “We will review OrionX’s predictions for 2018, the technologies that are changing the world (IoT, Blockchain, Quantum Computing, AI …) and how HPC will be the engine that drives them.”

Living Heart Project: Using HPC in the Cloud to Save Lives

Burak Yenier and Francisco Sahli gave this talk at the Stanford HPC Conference. “Cardiac arrhythmia can be a potentially lethal side effect of medications. Before a new drug reaches the market, pharmaceutical companies need to check for the risk of inducing arrhythmias. Currently, this process takes years and involves costly animal and human studies. In this project, the Living Matter Laboratory of Stanford University developed a new software tool enabling drug developers to quickly assess the viability of a new compound. During this session we will look at how High Performance Computing in the Cloud is being used to prevent severe side effects and save lives.”

Sharing High-Performance Interconnects Across Multiple Virtual Machines

Mohan Potheri from VMware gave this talk at the Stanford HPC Conference. “Virtualized devices offer maximum flexibility. This session introduces SR-IOV, explains how it is enabled in VMware vSphere, and provides details of specific use cases that are important for machine learning and high-performance computing. It includes performance comparisons that demonstrate the benefits of SR-IOV and information on how to configure and tune these configurations.”
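For context on what SR-IOV looks like from the operating system's side: on Linux, each SR-IOV-capable physical function exposes a `sriov_numvfs` file in sysfs reporting how many virtual functions it currently provides. The sketch below (illustrative only; vSphere configures SR-IOV through its own UI and APIs, not this interface) scans for such devices:

```python
import glob
import os

def sriov_devices():
    """Map each SR-IOV-capable PCI device address to the number of
    virtual functions (VFs) it currently exposes, by reading the
    standard Linux sysfs attribute. Returns an empty dict on hosts
    without SR-IOV hardware."""
    devices = {}
    for path in glob.glob("/sys/bus/pci/devices/*/sriov_numvfs"):
        addr = os.path.basename(os.path.dirname(path))
        try:
            with open(path) as f:
                devices[addr] = int(f.read().strip())
        except (OSError, ValueError):
            continue  # device vanished or attribute unreadable
    return devices

print(sriov_devices())
```

Writing a count into the same `sriov_numvfs` file (as root) is how an administrator enables VFs on bare-metal Linux; a hypervisor then passes those VFs through to individual virtual machines, which is what lets multiple VMs share one high-performance interconnect.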

High Availability HPC: Microservice Architectures for Supercomputing

Ryan Quick from Providentia Worldwide gave this talk at the Stanford HPC Conference. “Microservices power cloud-native applications to scale thousands of times larger than single deployments. We introduce the notion of microservices for traditional HPC workloads. We will describe microservices generally, highlighting some of the more popular and large-scale applications. Then we examine similarities between large-scale cloud configurations and HPC environments. Finally we propose a microservice application for solving a traditional HPC problem, illustrating improved time-to-market and workload resiliency.”

SpaRC: Scalable Sequence Clustering using Apache Spark

Zhong Wang from the Genome Institute at LBNL gave this talk at the Stanford HPC Conference. “Whole genome shotgun based next generation transcriptomics and metagenomics studies often generate 100 to 1000 gigabytes (GB) sequence data derived from tens of thousands of different genes or microbial species. Here we describe an Apache Spark-based scalable sequence clustering application, SparkReadClust (SpaRC) that partitions reads based on their molecule of origin to enable downstream assembly optimization.”
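SpaRC itself is a distributed Apache Spark application, but the core idea of partitioning reads by shared sequence content can be illustrated on a single node. The hedged sketch below (my own toy stand-in, not SpaRC's actual algorithm) clusters reads that share at least one k-mer using union-find:

```python
from collections import defaultdict

def kmers(seq, k=5):
    """Return the set of all length-k substrings of a read."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def cluster_reads(reads, k=5):
    """Partition reads into clusters whose members share at least one
    k-mer, via union-find over a k-mer -> read-index inverted index.
    A single-node toy; SpaRC does the analogous grouping as a
    distributed Spark pipeline."""
    parent = list(range(len(reads)))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    # Build the inverted index: k-mer -> reads containing it.
    index = defaultdict(list)
    for i, read in enumerate(reads):
        for km in kmers(read, k):
            index[km].append(i)

    # Reads sharing a k-mer belong to the same cluster.
    for members in index.values():
        for j in members[1:]:
            union(members[0], j)

    clusters = defaultdict(list)
    for i in range(len(reads)):
        clusters[find(i)].append(i)
    return list(clusters.values())
```

For example, `cluster_reads(["ACGTACGTAC", "GTACGTACGG", "TTTTTTTTTT"])` groups the first two overlapping reads together and leaves the third on its own. The same connected-components structure is what lets downstream assembly run independently per cluster.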

Video: HPC Computing Trends

Chris Willard from Intersect360 Research gave this talk at the Stanford HPC Conference. “Intersect360 Research returns with an annual deep dive into the trends, technologies and usage models that will be propelling the HPC community through 2018 and beyond. Emerging areas of focus and opportunities to expand will be explored along with insightful observations needed to support measurably positive decision making within your operations.”

State of Linux Containers

Christian Kniep from Docker Inc. gave this talk at the Stanford HPC Conference. “This talk will recap the history of and what constitutes Linux Containers, before laying out how the technology is employed by various engines and what problems these engines have to solve. Afterward, Christian will elaborate on why the advent of standards for images and runtimes moved the discussion from building and distributing containers to orchestrating containerized applications at scale.”

Highest Performance and Scalability for HPC and AI

Scot Schultz from Mellanox gave this talk at the Stanford HPC Conference. “Today, many agree that the next wave of disruptive technology blurring the lines between the digital, physical and even the biological, will be the fourth industrial revolution of AI. The fusion of state-of-the-art computational capabilities, extensive automation and extreme connectivity is already affecting nearly every aspect of society, driving global economics and extending into every aspect of our daily life.”