
Slidecast: Running HPC Simulation Workflows in Microsoft Azure

In this video from the Microsoft Ignite Conference, Tejas Karmarkar describes how to run your HPC simulations on Microsoft Azure with UberCloud container technology. “High performance computing applications are some of the most challenging to run in the cloud due to requirements that can include fast processors, low-latency networking, parallel file systems, GPUs, and Linux. We show you how to run these engineering, research and scientific workloads in Microsoft Azure with performance equivalent to on-premises. We use customer case studies to illustrate the basic architecture and alternatives to help you get started with HPC in Azure.”
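For readers who want to experiment, a minimal sketch of provisioning an RDMA-capable H-series compute node with the Azure CLI might look like the following; the resource group, VM name, and image are placeholders, and available sizes vary by region.

    # Create a resource group to hold the HPC resources (names are hypothetical)
    az group create --name hpc-demo --location eastus

    # Provision a compute node on an RDMA-capable H-series size
    az vm create \
        --resource-group hpc-demo \
        --name compute0 \
        --image OpenLogic:CentOS:7.5:latest \
        --size Standard_H16r

UberCloud's approach, as described in the talk, layers ready-to-run application containers on top of nodes like these, so the engineer never deals with the VM layer directly.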

Streamlining HPC Workloads with Containers

“While we often talk about the density advantages of containers, it’s the opposite approach that we use in the High Performance Computing world! Here, we use exactly 1 system container per node, giving it unlimited access to all of the host’s CPU, Memory, Disk, IO, and Network. And yet we can still leverage the management characteristics of containers — security, snapshots, live migration, and instant deployment to recycle each node in between jobs. In this talk, we’ll examine a reference architecture and some best practices around containers in HPC environments.”
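As a rough illustration of the one-container-per-node pattern, a container can be granted the host's network, PID, and IPC namespaces and left without CPU or memory limits, so the job inside sees the whole machine. This is a generic Docker sketch, not the exact tooling from the talk, and the image name is hypothetical.

    # One system container per node: share the host's namespaces and set
    # no resource limits, so the job gets full access to CPU, memory,
    # disk, I/O, and network
    docker run --rm -it \
        --net=host --pid=host --ipc=host \
        --privileged \
        -v /scratch:/scratch \
        hpc-node-image /bin/bash

Recycling a node between jobs then amounts to destroying the container and starting a fresh one from the same image, which is where the snapshot and instant-deployment benefits come in.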

Nimbix Cloud Adds Docker Integration to JARVICE

“PushToCompute is the easiest and most advanced DevOps pipeline for high performance applications available today”, said Nimbix CTO Leo Reiter. “It seamlessly enables serverless computing of even the most complex workflows, greatly simplifying application deployment at scale, and eliminating the need for any platform orchestration or user interface work. Developers simply focus on their specific functionality, rather than on building cloud capabilities into their applications.”
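Since PushToCompute is built around Docker integration, the developer-side flow is presumably the familiar build-and-push; the image and account names below are placeholders, and the JARVICE-side configuration is omitted.

    # Build the application image locally (names are hypothetical)
    docker build -t myuser/cfd-solver:1.0 .

    # Push it to a Docker registry; a PushToCompute-style pipeline can
    # then pull the image and run it as a scalable HPC workflow
    docker push myuser/cfd-solver:1.0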

Putting HPC into the Hands of Every Engineer and Scientist

In this special guest feature from Scientific Computing World, Wolfgang Gentzsch explains the role of HPC container technology in providing ubiquitous access to HPC. “The advent of lightweight, pervasive, packageable, portable, scalable, interactive, easy to access and use HPC application containers based on Docker technology, running seamlessly on workstations, servers, and clouds, is bringing us ever closer to the democratization of HPC.”

Video: User Managed Virtual Clusters in Comet

Rick Wagner from SDSC presented this talk at the 4th Annual MVAPICH User Group. “At SDSC, we have created a novel framework and infrastructure by providing virtual HPC clusters to projects using the NSF sponsored Comet supercomputer. Managing virtual clusters on Comet is similar to managing a bare-metal cluster in terms of processes and tools that are employed. This is beneficial because such processes and tools are familiar to cluster administrators.”

Podcast: Using Docker for Science at TACC

In this TACC podcast, Joe Stubbs from the Texas Advanced Computing Center describes the potential benefits of the open container platform Docker for scientists, from supporting reproducibility to powering the NSF-funded Agave API. “As more scientists share not only their results but their data and code, Docker is helping them reproduce the computational analysis behind the results. What’s more, Docker is one of the main tools used in the Agave API platform, a platform-as-a-service solution for hybrid cloud computing developed at TACC and funded in part by the National Science Foundation.”
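The reproducibility benefit comes from pinning the exact software environment next to the code; a minimal sketch with a hypothetical image and script:

    # Run an analysis inside a versioned container image so that
    # collaborators get identical software versions; pinning by digest
    # instead of tag makes the environment fully immutable
    docker run --rm \
        -v "$PWD/data:/data" \
        myuser/analysis:1.0 \
        python /opt/scripts/analyze.py /data/input.csv

Publishing the image alongside the results lets anyone rerun the same computation without reassembling the dependency stack by hand.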

Manage Reproducibility of Computational Workflows with Docker Containers and Nextflow

“Research computational workflows consist of several pieces of third party software and, because of their experimental nature, frequent changes and updates are commonly necessary, thus raising serious deployment and reproducibility issues. Docker containers are emerging as a possible solution for many of these problems, as they allow the packaging of pipelines in an isolated and self-contained manner. This presentation will introduce our experience deploying genomic pipelines with Docker containers at the Center for Genomic Regulation (CRG). I will discuss how we implemented it, the main issues we faced, and the pros and cons of using Docker in an HPC environment, including a benchmark of the impact of container technology on the performance of the executed applications.”
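Nextflow makes the container the unit of deployment for each pipeline step; a minimal sketch of running a pipeline with all processes executed inside a Docker image (the pipeline script and image name are hypothetical):

    # Execute every process of the pipeline inside the given Docker
    # image; -with-docker is a standard Nextflow option
    nextflow run rnaseq.nf -with-docker myuser/rnaseq-tools:1.0

Because the image fixes every third-party tool version, the pipeline behaves the same on a laptop, an HPC cluster, or the cloud.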

Reducing the Time to Science with Efficient Clouds

In this special guest feature from Scientific Computing World, Dr Bruno Silva from The Francis Crick Institute in London writes that new cloud technologies will make the cloud even more important to scientific computing. “The emergence of public cloud and the ability to cloud-burst is actually the real game-changer. Because of its ‘infinite’ amount of resources (effectively always under-utilized), it allows for a clear decoupling of time-to-science from efficiency. One can be somewhat less efficient in a controlled fashion (higher cost, slightly more waste) to minimize time-to-science when required (in burst, so to speak) by effectively growing the computing estate available beyond the fixed footprint of local infrastructure – this is often referred to as the hybrid cloud model. You get both the benefit of efficient infrastructure use, and the ability to go beyond that when strictly required.”

Video: Shifter – Containers in HPC environments

“Containers wrap up software with all its dependencies in packages that can be executed anywhere. This can be especially useful in HPC environments where, often, getting the right combination of software tools to build applications is a daunting task. However, typical container solutions such as Docker are not a perfect fit for HPC environments. Instead, Shifter is a better fit, as it has been built from the ground up with HPC in mind. In this talk, we show you what Shifter is and how to leverage the current Docker environment to run your applications with Shifter.”
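On systems where Shifter is deployed, the typical workflow is to pull a Docker image into Shifter's image gateway and then run it under the batch scheduler; a sketch assuming a SLURM site with Shifter's integration installed, using a generic image:

    # One-time step: pull a Docker Hub image into Shifter's image gateway
    shifterimg pull docker:ubuntu:16.04

    # Inside a SLURM batch script, request the image and run a command
    # within it on every allocated node
    #SBATCH --image=docker:ubuntu:16.04
    srun shifter cat /etc/os-release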

Video: The State of Linux Containers

“With Docker v1.9 a new networking system was introduced, which allows multi-host networking to work out-of-the-box in any Docker environment. This talk provides an introduction to what Docker networking provides, followed by a demo that spins up a full SLURM cluster across multiple machines. The demo is based on QNIBTerminal, a Consul-backed set of Docker images to spin up a broad set of software stacks.”
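The multi-host networking introduced in Docker 1.9 is exposed through the docker network commands; a minimal sketch, assuming the Docker daemons are already configured with a shared key-value store such as Consul (network, container, and image names are hypothetical):

    # Create an overlay network visible to every Docker host that shares
    # the same key-value store
    docker network create -d overlay slurm-net

    # Attach containers to the overlay; containers on different machines
    # can now resolve and reach each other by name
    docker run -d --net=slurm-net --name=slurmctld myorg/slurm-controller

This is the mechanism that lets the demo's SLURM daemons on separate machines discover one another without any manual wiring.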