

Quantum Computing at NIST

Carl Williams from NIST gave this talk at the HPC User Forum in Tucson. “Quantum information science research at NIST explores ways to employ phenomena exclusive to the quantum world to measure, encode and process information for useful purposes, from powerful data encryption to computers that could solve problems intractable with classical computers.”

Containers Using Singularity on HPC

Abhinav Thota from Indiana University gave this talk at the 2018 Swiss HPC Conference. “Container use is becoming more widespread in the HPC field. There are various reasons for this, including the broadening of the user base and applications of HPC. One of the popular container tools on HPC is Singularity, an open source project coming out of the Berkeley Lab. In this talk, we will introduce Singularity, discuss how users at Indiana University are using it and share our experience supporting it. This talk will include a brief demonstration as well.”
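
For readers new to Singularity, the sketch below shows the typical pull-and-exec workflow, driven from Python via subprocess. It assumes the singularity CLI is installed on the system; the image and command are illustrative examples, not taken from the talk.

```python
# Minimal sketch of a typical Singularity workflow, assuming the
# "singularity" CLI is on PATH. Image and command are illustrative only.
import subprocess

# Pull a Docker Hub image and convert it to a Singularity image file (SIF).
subprocess.run(
    ["singularity", "pull", "lolcow.sif", "docker://godlovedc/lolcow"],
    check=True,
)

# Run a command inside the container; by default the user's home directory
# is mounted, which is part of what makes Singularity convenient on HPC systems.
subprocess.run(
    ["singularity", "exec", "lolcow.sif", "cowsay", "hello from HPC"],
    check=True,
)
```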

Using the Titan Supercomputer to Develop 50,000 Years of Flood Risk Scenarios

Dag Lohmann from KatRisk gave this talk at the HPC User Forum in Tucson. “In 2012, a small Berkeley, California, startup called KatRisk set out to improve the quality of worldwide flood risk maps. The team wanted to create large-scale, high-resolution maps to help insurance companies evaluate flood risk on the scale of city blocks and buildings, something that had never been done. Through the OLCF’s industrial partnership program, KatRisk received 5 million processor hours on Titan.”

Shifter – Docker Containers for HPC

Alberto Madonna gave this talk at the Swiss HPC Conference. “In this work we present an extension to the container runtime of Shifter that provides containerized applications with a mechanism to access GPU accelerators and specialized networking from the host system, effectively enabling performance portability of containers across HPC resources. The presented extension makes it possible to rapidly deploy high-performance software on supercomputers from containerized applications that have been developed, built, and tested on non-HPC commodity hardware, e.g. the laptop or workstation of a researcher.”

Iceland’s Verne Global Steps Up to Run HPC & AI Workloads in the Cloud

In this video from the GPU Technology Conference, Bob Fletcher from Verne Global discusses why more and more HPC & AI workloads are moving to the company’s datacenters in Iceland. “Today’s computational environments are changing rapidly as more companies are looking to utilize HPC and intensive applications across an increasingly wide variety of industries. At Verne Global we have fully optimized our campus to meet the specific requirements of the international HPC community.”

Video: Demystifying Parallel and Distributed Deep Learning

Torsten Hoefler from ETH Zürich gave this talk at the 2018 Swiss HPC Conference. “Deep Neural Networks (DNNs) are becoming an important tool in modern computing applications. Accelerating their training is a major challenge and techniques range from distributed algorithms to low-level circuit design. In this talk, we describe the problem from a theoretical perspective, followed by approaches for its parallelization.”
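
As a concrete illustration of one of the approaches such talks cover, here is a minimal sketch of synchronous data-parallel training with mpi4py: each rank computes gradients on its own shard of the mini-batch, the gradients are summed with an allreduce, and every rank applies the averaged update. The toy model and random gradients are placeholders, not code from the talk.

```python
# Minimal sketch of synchronous data-parallel SGD using MPI allreduce.
# The "model" and gradient computation are stand-ins for real backprop.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

weights = np.zeros(1000)   # toy model parameters, replicated on every rank
lr = 0.01

for step in range(100):
    # Each rank would compute gradients on its own shard of the mini-batch.
    local_grad = np.random.randn(1000)

    # Sum gradients across all ranks, then average to get the global gradient.
    global_grad = np.empty_like(local_grad)
    comm.Allreduce(local_grad, global_grad, op=MPI.SUM)
    weights -= lr * (global_grad / size)
```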

Inside the new NVIDIA DGX-2 Supercomputer with NVSwitch

In this video from the GPU Technology Conference, Marc Hamilton from NVIDIA describes the new DGX-2 supercomputer with the NVSwitch interconnect. “NVIDIA NVSwitch is the first on-node switch architecture to support 16 fully-connected GPUs in a single server node and drive simultaneous communication between all eight GPU pairs at an incredible 300 GB/s each. These 16 GPUs can be used as a single large-scale accelerator with 0.5 Terabytes of unified memory space and 2 petaFLOPS of deep learning compute power. With NVSwitch, we have 2.4 terabytes a second bisection bandwidth, 24 times what you would have with two DGX-1s.”
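
The quoted figures line up if you assume the DGX-2’s 16 Tesla V100 32 GB GPUs and the stated 300 GB/s per communicating GPU pair; the quick back-of-the-envelope check below is ours, not NVIDIA’s.

```python
# Sanity-check the DGX-2 numbers quoted above, assuming 16 Tesla V100 32 GB GPUs
# and 300 GB/s of NVSwitch bandwidth per communicating GPU pair.
gpus = 16
hbm_per_gpu_gb = 32        # GB of HBM2 per V100
per_pair_gb_s = 300        # GB/s per GPU pair through NVSwitch

unified_memory_tb = gpus * hbm_per_gpu_gb / 1000        # 16 x 32 GB = 0.512 TB (~0.5 TB)
bisection_tb_s = (gpus // 2) * per_pair_gb_s / 1000     # 8 pairs x 300 GB/s = 2.4 TB/s

print(f"Unified memory: ~{unified_memory_tb:.1f} TB")
print(f"Bisection bandwidth: {bisection_tb_s:.1f} TB/s")
```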

RAID No More: GPUs Power NSULATE for Extreme HPC Data Protection

In this video from GTC 2018, Alexander St. John from Nyriad demonstrates how the company’s NSULATE software running on Advanced HPC gear provides extreme data protection for HPC data. As we watch, he removes a dozen SSDs from a live filesystem, and it keeps on running.

New HP Z8 is “World’s Most Powerful Workstation for Machine Learning Development”

HP Z Workstations, with new NVIDIA technology, are ideal for local processing at the edge of the network – giving developers more control, better performance and added security over cloud-based solutions. “Products like the HP Z8, the most powerful workstation for ML development, coupled with the new NVIDIA Quadro GV100, the HP ML Developer Portal and our expanded services offerings will undoubtedly fast-track the adoption of machine learning.”

Video: VMware Powers HPC Virtualization at NVIDIA GPU Technology Conference

In this video from the 2018 GPU Technology Conference, Ziv Kalmanovich from VMware and Fred Devoir from NVIDIA describe how they are working together to bring the benefits of virtualization to GPU workloads. “For cloud environments based on vSphere, you can deploy a machine learning workload yourself using GPUs via the VMware DirectPath I/O or vGPU technology.”