Porting Scientific Research Codes to GPUs with CUDA Fortran

Josh Romero from NVIDIA gave this talk at the Stanford HPC Conference. “In this session, we intend to provide guidance and techniques for porting scientific research codes to NVIDIA GPUs using CUDA Fortran. The GPU porting effort of an incompressible fluid dynamics solver using the immersed boundary method will be described. Several examples from this program will be used to illustrate available features in CUDA Fortran, from simple directive-based programming using CUF kernels to lower level programming using CUDA kernels.”
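
The talk's examples are written in CUDA Fortran and are not reproduced here. Purely as a rough analogue of the "lower level" style it mentions, the sketch below writes an explicit GPU kernel in Python using Numba's CUDA support; the kernel, array names, and sizes are illustrative stand-ins, not code from the talk.

```python
# Hypothetical analogue of a hand-written GPU kernel (the talk's examples are
# CUDA Fortran; this uses Python + Numba purely for illustration).
import numpy as np
from numba import cuda

@cuda.jit
def saxpy(a, x, y, out):
    i = cuda.grid(1)              # global thread index
    if i < x.size:                # guard threads past the end of the array
        out[i] = a * x[i] + y[i]

n = 1 << 20
x = np.ones(n, dtype=np.float32)
y = np.full(n, 2.0, dtype=np.float32)
out = np.empty_like(x)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
saxpy[blocks, threads_per_block](2.0, x, y, out)   # explicit launch configuration, as in CUDA
```

A CUF kernel sits at the other end of the spectrum the talk describes: a directive placed above an ordinary Fortran loop lets the compiler generate the equivalent kernel and launch automatically, with no explicit kernel code like the above.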

Accelerating HPC Applications on NVIDIA GPUs with OpenACC

Doug Miles from NVIDIA gave this talk at the Stanford HPC Conference. “This talk will include an introduction to the OpenACC programming model, provide examples of its use in a number of production applications, explain how OpenACC and CUDA Unified Memory working together can dramatically simplify GPU programming, and close with a few thoughts on OpenACC future directions.”

Deploy Serverless TensorFlow Models using Kubernetes, OpenFaaS, GPUs and PipelineAI

Chris Fregly from PipelineAI gave this talk at the Stanford HPC Conference. “Applying my Netflix experience to a real-world problem in the ML and AI world, I will demonstrate a full-featured, open-source, end-to-end TensorFlow Model Training and Deployment System using the latest advancements with TensorFlow, Kubernetes, OpenFaaS, GPUs, and PipelineAI.”
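
The full PipelineAI/OpenFaaS/Kubernetes stack is beyond the scope of this post. As a minimal, hedged sketch of just the model-packaging step such a pipeline relies on, the snippet below trains a toy Keras model and exports it in TensorFlow's SavedModel format, which containerized, GPU-backed serving endpoints typically load; the data, shapes, and export path are illustrative only, and the TensorFlow 2.x Keras API is assumed.

```python
import numpy as np
import tensorflow as tf

# Toy stand-in for a real training job; data and shapes are illustrative only.
x = np.random.rand(256, 4).astype("float32")
y = np.random.rand(256, 1).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=2, verbose=0)

# Export in SavedModel format, which serving stacks (e.g. TensorFlow Serving in a
# container behind Kubernetes or OpenFaaS) can load. Works with tf.keras in TF 2.x;
# newer Keras releases offer model.export() for the same purpose.
tf.saved_model.save(model, "export/my_model/1")
```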

First Experiences with Parallel Application Development in Fortran 2018

Damian Rouson from the Sourcery Institute gave this talk at the Stanford HPC Conference. “This talk will present performance and scalability results of the mini-app running on several platforms using up to 98,000 cores. A second application involves the use of teams of images (processes) that execute independently for ensembles of computational hydrology simulations using WRF-Hydro, the hydrological component of the Weather Research and Forecasting model, also developed at NCAR. Early experiences with portability and programmability of Fortran 2018 will also be discussed.”
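
Fortran 2018 teams split the images (processes) of a program into groups that proceed independently, much like MPI sub-communicators. The Fortran itself is not shown in this post; purely as a rough analogue, the sketch below forms "teams" with mpi4py, with the team count and the per-member work being illustrative placeholders.

```python
# Rough analogue in Python/MPI (not Fortran 2018): split the world communicator
# into independent "teams", one per ensemble member.
# Run with e.g.:  mpiexec -n 8 python teams_sketch.py
from mpi4py import MPI

world = MPI.COMM_WORLD
n_teams = 4                                        # hypothetical ensemble size
team_id = world.Get_rank() % n_teams
team = world.Split(color=team_id, key=world.Get_rank())

# Stand-in for one ensemble member's work; each team reduces only among itself.
local_value = float(world.Get_rank())
team_total = team.allreduce(local_value, op=MPI.SUM)
print(f"world rank {world.Get_rank()}: team {team_id}, team total {team_total}")
```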

Linac Coherent Light Source (LCLS-II): Data Transfer Requirements

Les Cottrell from SLAC gave this talk at the Stanford HPC Conference. “Scientists use LCLS to take crisp pictures of atomic motions, watch chemical reactions unfold, probe the properties of materials and explore fundamental processes in living things. The talk will introduce LCLS and LCLS-II with a short video, discuss its data reduction, collection, and transfer needs, and current progress in meeting these needs.”

Outlook on Hot Technologies

Shahin Khan from OrionX gave this talk at the Stanford HPC Conference. “We will review OrionX’s predictions for 2018, the technologies that are changing the world (IoT, Blockchain, Quantum Computing, AI …) and how HPC will be the engine that drives it.”

Living Heart Project: Using HPC in the Cloud to Save Lives

Burak Yenier and Francisco Sahli gave this talk at the Stanford HPC Conference. “Cardiac arrhythmia can be a potentially lethal side effect of medications. Before a new drug reaches the market, pharmaceutical companies need to check for the risk of inducing arrhythmias. Currently, this process takes years and involves costly animal and human studies. In this project, the Living Matter Laboratory of Stanford University developed a new software tool enabling drug developers to quickly assess the viability of a new compound. During this session we will look at how High Performance Computing in the Cloud is being used to prevent severe side effects and save lives.”

Sharing High-Performance Interconnects Across Multiple Virtual Machines

Mohan Potheri from VMware gave this talk at the Stanford HPC Conference. “Virtualized devices offer maximum flexibility. This session introduces SR-IOV, explains how it is enabled in VMware vSphere, and provides details of specific use cases that are important for machine learning and high-performance computing. It includes performance comparisons that demonstrate the benefits of SR-IOV and information on how to configure and tune these configurations.”

High Availability HPC: Microservice Architectures for Supercomputing

Ryan Quick from Providentia Worldwide gave this talk at the Stanford HPC Conference. “Microservices power cloud-native applications to scale thousands of times larger than single deployments. We introduce the notion of microservices for traditional HPC workloads. We will describe microservices generally, highlighting some of the more popular and large-scale applications. Then we examine similarities between large-scale cloud configurations and HPC environments. Finally we propose a microservice application for solving a traditional HPC problem, illustrating improved time-to-market and workload resiliency.”
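
The talk does not publish code; purely as an illustrative sketch of the idea, the snippet below wraps a batch-scheduler submission behind a tiny HTTP microservice, so a traditional HPC workload can be invoked like any other cloud service. The endpoint, the job-script name, and the assumption of a Slurm "sbatch" front end are all hypothetical.

```python
# Illustrative only: a minimal microservice fronting a (hypothetical) Slurm cluster.
import subprocess
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/jobs", methods=["POST"])
def submit_job():
    payload = request.get_json(silent=True) or {}
    script = payload.get("script", "solve.sbatch")     # hypothetical job script
    result = subprocess.run(["sbatch", script],        # assumes a Slurm scheduler
                            capture_output=True, text=True)
    return jsonify({"stdout": result.stdout.strip(),
                    "returncode": result.returncode})

if __name__ == "__main__":
    app.run(port=8080)
```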

SpaRC: Scalable Sequence Clustering using Apache Spark

Zhong Wang from the Joint Genome Institute at LBNL gave this talk at the Stanford HPC Conference. “Whole genome shotgun based next generation transcriptomics and metagenomics studies often generate 100 to 1000 gigabytes (GB) of sequence data derived from tens of thousands of different genes or microbial species. Here we describe an Apache Spark-based scalable sequence clustering application, SparkReadClust (SpaRC), that partitions reads based on their molecule of origin to enable downstream assembly optimization.”
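
SpaRC itself runs on Apache Spark and its code is not reproduced here. As a minimal, hedged sketch of the general idea of linking reads that share k-mers (a common first step toward clustering reads by molecule of origin), the PySpark snippet below uses toy reads and an arbitrary k, chosen only for illustration.

```python
# Minimal PySpark sketch of grouping reads by shared k-mers; not the SpaRC code.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("read-clustering-sketch").getOrCreate()
sc = spark.sparkContext

k = 5
reads = sc.parallelize([
    ("r1", "ACGTACGTAC"),
    ("r2", "CGTACGTACG"),
    ("r3", "TTTTGGGGCC"),
])

def kmers(record):
    read_id, seq = record
    return [(seq[i:i + k], read_id) for i in range(len(seq) - k + 1)]

# Map each k-mer to the set of reads containing it; k-mers shared by more than
# one read act as edges linking those reads into the same cluster.
kmer_to_reads = reads.flatMap(kmers).groupByKey().mapValues(set)
shared = kmer_to_reads.filter(lambda kv: len(kv[1]) > 1)
print(shared.take(5))
```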