
Containers: Shifter and Singularity on Blue Waters

In this video from the Blue Waters 2018 Symposium, Maxim Belkin presents a tutorial on Containers: Shifter and Singularity on Blue Waters. “Container solutions are a great way to seamlessly execute code on a variety of platforms. Not only are they used to abstract away from the software stack of the underlying operating system, they also enable reproducible computational research. In this mini-tutorial, I will review the process of working with Shifter and Singularity on Blue Waters.”

NVIDIA Simplifies Building Containers for HPC Applications

In this video, CJ Newburn from NVIDIA describes how users can benefit from running their workloads in the NVIDIA GPU Cloud. “A container essentially creates a self-contained environment. Your application lives in that container along with everything the application depends on, so the whole bundle is self-contained. NVIDIA is now offering a script as part of an open source project called HPC Container Maker, or HPCCM, that makes it easy for developers to select the ingredients they want to go into a container and to provide those ingredients in an optimized way using best-known recipes.”
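For a sense of what such a recipe looks like, here is a minimal sketch. HPCCM recipes are ordinary Python; the base image, building blocks, and versions below are illustrative choices, not taken from the video.

# recipe.py -- minimal HPC Container Maker recipe (illustrative sketch)
# Stage0 is provided by the hpccm tool when it evaluates the recipe.
Stage0 += baseimage(image='nvidia/cuda:9.0-devel-ubuntu16.04')  # CUDA development base image
Stage0 += gnu()                                                 # GNU compiler toolchain
Stage0 += mlnx_ofed()                                           # Mellanox OFED user-space libraries
Stage0 += openmpi(version='3.0.0')                              # Open MPI built against the above

Running "hpccm --recipe recipe.py --format docker" (or "--format singularity") emits a Dockerfile or a Singularity definition file from the same recipe, which is the point of the tool: one recipe, multiple container formats.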

Containers Using Singularity on HPC

Abhinav Thota from Indiana University gave this talk at the 2018 Swiss HPC Conference. “Container use is becoming more widespread in the HPC field. There are various reasons for this, including the broadening of the user base and applications of HPC. One of the popular container tools in HPC is Singularity, an open source project that came out of Berkeley Lab. In this talk, we will introduce Singularity, discuss how users at Indiana University are using it, and share our experience supporting it. This talk will include a brief demonstration as well.”

Why UIUC Built HPC Application Containers for NVIDIA GPU Cloud

In this video from the GPU Technology Conference, John Stone from the University of Illinois describes how container technology in the NVIDIA GPU Cloud helps the University distribute accelerated applications for science and engineering. “Containers are a way of packaging up an application and all of its dependencies in such a way that you can install them collectively on a cloud instance, a workstation, or a compute node. And it doesn’t require the typical amount of system administration skill and involvement to put one of these containers on a machine.”

NVIDIA Makes GPU Computing Easier in the Cloud

Setting up an environment for High Performance Computing (HPC), especially one that uses GPUs, can be daunting: there can be multiple dependencies, a number of required supporting libraries, and complex installation instructions. NVIDIA has made this easier with the announcement and release of HPC Application Containers with the NVIDIA GPU Cloud.

Video: State of Containers and the Convergence of HPC and BigData

Christian Kniep from Docker Inc gave this talk at the 2018 Swiss HPC Conference. “This talk will recap the history of Linux Containers and what constitutes them, before laying out how the technology is employed by various engines and what problems these engines have to solve. Afterward, Christian will elaborate on why the advent of standards for images and runtimes moved the discussion from building and distributing containers to orchestrating containerized applications at scale. In conclusion, attendees will get an update on how containers foster the convergence of Big Data and HPC workloads, and on the state of native HPC containers.”

New Mellanox Onyx Ethernet Network Operating System Boosts DevOps

Today Mellanox announced the release of Mellanox Onyx – the industry-leading open and flexible Ethernet Network Operating System for Mellanox Spectrum Open Ethernet switches. “Mellanox Onyx offers a mature Layer-3 feature-set, with integrated support for standard DevOps tools, allowing customers to run third-party containerized applications with complete SDK access. By utilizing Mellanox Onyx’s leading capabilities, our customers can enjoy the benefits of an industry-standard Layer-2 and Layer-3 feature-set along with the ability to customize and optimize the network to their specific needs.”

Video: The Marriage of Cloud, HPC and Containers

Adam Huffman from the Francis Crick Institute gave this talk at FOSDEM’17. “We will present experiences of supporting HPC/HTC workloads on private cloud resources, with ideas for how to do this better and a description of trends in non-traditional HPC resource provision. I will discuss my work as part of the Operations Team for the eMedLab private cloud, which is a large-scale (6,000-core, 5 PB) biomedical research cloud using HPC hardware, aiming to support HPC workloads.”

Building Containers for Intel Omni-Path Fabrics using Docker and Singularity

The Intel® OPA technology is designed to leverage the existing Linux* RDMA kernel and networking stack interfaces. As such, many HPC applications designed to run on RDMA networks can run unmodified on compute nodes with Intel® OPA network technology installed, benefitting from improved network performance. When these HPC applications are run in containers, using techniques described in this application note, these same Linux* RDMA kernel device and networking stack interfaces can be selectively exposed to the containerized applications, enabling them to take advantage of the improved network performance of the Intel® OPA technology.
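As a rough illustration of that selective exposure, the sketch below uses the Docker SDK for Python to pass RDMA device files through to a container. The image name, command, device paths, and limits are assumptions for illustration, not the application note's exact procedure.

# Sketch: run a (hypothetical) MPI application image with the host's RDMA
# device files exposed inside the container. Paths and names are illustrative.
import docker
from docker.types import Ulimit

client = docker.from_env()
client.containers.run(
    "my-mpi-app:latest",
    command="mpirun -n 4 ./app",
    devices=[
        "/dev/infiniband:/dev/infiniband:rwm",  # RDMA user-space devices (uverbs, rdma_cm)
        "/dev/hfi1_0:/dev/hfi1_0:rwm",          # Omni-Path host fabric interface
    ],
    cap_add=["IPC_LOCK"],                                # allow locking memory for RDMA
    ulimits=[Ulimit(name="memlock", soft=-1, hard=-1)],  # remove the locked-memory limit
    network_mode="host",                                 # share the host networking stack
)

The exact set of devices to expose depends on the fabric and driver stack in use; the point is that only the needed kernel interfaces are mapped into the container, so the application gets native network performance without giving up isolation elsewhere.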

RCE Podcast Looks at Shifter Containers for HPC

In this RCE Podcast, Brock Palen and Jeff Squyres speak with Shane Canon and Doug Jacobsen from NERSC, the authors of Shifter. “Shifter is a prototype implementation that NERSC is developing and experimenting with as a scalable way of deploying containers in an HPC environment. It works by converting user- or staff-generated images in Docker, Virtual Machines, or CHOS (another method for delivering flexible environments) to a common format.”
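A minimal sketch of the resulting user workflow, driven here from Python for illustration; the commands follow NERSC's public Shifter documentation, and the image name is only an example.

# Pull (and convert) a Docker image, then run a command inside it.
import subprocess

image = "docker:ubuntu:16.04"

# The Shifter image gateway pulls the Docker image and converts it to
# Shifter's flattened, shared-filesystem-friendly format.
subprocess.run(["shifterimg", "pull", image], check=True)

# Execute a command inside the converted image on a compute node.
subprocess.run(["shifter", f"--image={image}", "cat", "/etc/os-release"], check=True)

In batch jobs the image is typically requested through the workload manager's Shifter integration rather than pulled interactively, but the convert-then-run flow is the same.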