

Deep Learning & HPC: New Challenges for Large Scale Computing

“In recent years, major breakthroughs have been achieved in many different fields using deep learning. From image segmentation and speech recognition to self-driving cars, deep learning is everywhere. Performance in image classification, segmentation, and localization has reached levels not seen before thanks to GPUs and large-scale GPU-based deployments, making deep learning a first-class HPC workload.”

Video: Computing of the Future

Jeffrey Welser from IBM Research Almaden presented this talk at the Stanford HPC Conference. “Whether exploring new technical capabilities, collaborating on ethical practices or applying Watson technology to cancer research, financial decision-making, oil exploration or educational toys, IBM Research is shaping the future of AI.”

Huawei: A Fresh Look at High Performance Computing

Francis Lam from Huawei presented this talk at the Stanford HPC Conference. “High performance computing is rapidly finding new uses in many applications and businesses, enabling the creation of disruptive products and services. Huawei, a global leader in information and communication technologies, brings a broad spectrum of innovative solutions to HPC. This talk examines Huawei’s world-class HPC solutions and explores creative new ways to solve HPC problems.”

Designing HPC & Deep Learning Middleware for Exascale Systems

DK Panda from Ohio State University presented this deck at the 2017 HPC Advisory Council Stanford Conference. “This talk will focus on challenges in designing runtime environments for exascale systems with millions of processors and accelerators to support various programming models. We will focus on MPI, PGAS (OpenSHMEM, CAF, UPC and UPC++) and Hybrid MPI+PGAS programming models by taking into account support for multi-core, high-performance networks, accelerators (GPGPUs and Intel MIC), virtualization technologies (KVM, Docker, and Singularity), and energy-awareness. Features and sample performance numbers from the MVAPICH2 libraries will be presented.”

Video: State of Linux Containers

“Linux Containers are gaining more and more momentum across all IT ecosystems. This talk provides an overview of what happened in the container landscape (in particular Docker) over the course of the last year and how it impacts datacenter operations, HPC, and High-Performance Big Data. Furthermore, Christian will update and extend the ‘things to explore’ list he presented at the last Lugano workshop, applying what he learned and came across during 2016.”

Shahin Khan Presents: Hot Technology Topics in 2017

Shahin Khan from OrionX presented this talk at the Stanford HPC Conference. “From BitCoins and AltCoins to Design Thinking, autonomous tech and the changing nature of jobs, IoT and cyber risk, and the impact of application architecture on cloud computing, we’ll touch on some of the hottest technologies of 2017 that are changing the world, and on how HPC will be the engine that drives them.”

Best Practices – Large Scale Multiphysics

Frank Ham from Cascade Technologies presented this talk at the Stanford HPC Conference. “A spin-off of the Center for Turbulence Research at Stanford University, Cascade Technologies grew out of a need to bridge fundamental research from institutions like Stanford University and its application in industry. In a continual push to improve the operability and performance of combustion devices, high-fidelity simulation methods for turbulent combustion are emerging as critical elements in the design process. Multiphysics-based methodologies can accurately predict mixing, study flame structure and stability, and even predict product and pollutant concentrations at design and off-design conditions.”

Tutorial: Towards Exascale Computing with Fortran 2015

“This tutorial will present several features that the draft Fortran 2015 standard introduces to meet challenges that are expected to dominate massively parallel programming in the coming exascale era. The expected exascale challenges include higher hardware- and software-failure rates, increasing hardware heterogeneity, a proliferation of execution units, and deeper memory hierarchies.”

Video: Trish Damkroger on her New Mission at Intel

In this video from KAUST Live, Patricia Damkroger discusses her new role as Vice President, Data Center Group and General Manager, Technical Computing Initiative, Enterprise and Government at Intel. “As the former Associate Director for Computation at Lawrence Livermore National Laboratory (LLNL), Trish Damkroger led the 1,000-employee workforce behind the Laboratory’s high performance computing efforts. She is a longtime committee member and one-time general chair of the SC conference. Most recently, Damkroger was the SC16 Diverse HPC Workforce Chair.”

Panel Discussion: The Exascale Endeavor

Gilad Shainer moderated this panel discussion on Exascale Computing at the Stanford HPC Conference. “The creation of a capable exascale ecosystem will have profound effects on the lives of Americans, improving our nation’s national security, economic competitiveness, and scientific capabilities. The exponential increase of computation power enabled with exascale will fuel a vast range of breakthroughs and accelerate discoveries in national security, medicine, earth sciences and many other fields.”