New InfiniBand Architecture Specifications Extend Virtualization Support

“As performance demands continue to evolve in both HPC and enterprise cloud applications, the IBTA saw an increasing need for new enhancements to InfiniBand’s network capabilities, support features and overall interoperability,” said Bill Magro, co-chair of the IBTA Technical Working Group. “Our two new InfiniBand Architecture Specification updates satisfy these demands by delivering interoperability and testing upgrades for EDR and FDR, flexible management capabilities for optimal low-latency and low-power functionality, and virtualization support for better network scalability.”

Speakers Announced for Dell HPC Community Meeting at SC16

The Dell HPC Community at SC16 has posted its meeting agenda. “Blair Bethwaite from Monash University will present OpenStack for HPC at Monash. After that, Josh Simons from VMware will describe the latest technologies in HPC virtualization.” The event takes place Saturday, Nov. 12 at the Radisson Hotel in Salt Lake City.

Slidecast: Running HPC Simulation Workflows in Microsoft Azure

In this video from the Microsoft Ignite Conference, Tejas Karmarkar describes how to run your HPC simulations on Microsoft Azure using UberCloud container technology. “High performance computing applications are some of the most challenging to run in the cloud due to requirements that can include fast processors, low-latency networking, parallel file systems, GPUs, and Linux. We show you how to run these engineering, research and scientific workloads in Microsoft Azure with performance equivalent to on-premises. We use customer case studies to illustrate the basic architecture and alternatives to help you get started with HPC in Azure.”
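As a rough illustration of the container side of such a workflow, the sketch below uses the Docker SDK for Python to launch a containerized solver and collect its output. It is not UberCloud's or Azure's actual tooling; the image name and solver command are hypothetical placeholders.

```python
# Minimal illustrative sketch, NOT UberCloud's or Azure's actual tooling.
# The image name and solver command below are hypothetical placeholders.
import docker

client = docker.from_env()

# Run a hypothetical containerized solver, mounting a local case directory
# so input files and results live outside the container.
logs = client.containers.run(
    image="example/cfd-solver:latest",      # hypothetical image
    command="solve --case /data/case1",     # hypothetical solver CLI
    volumes={"/home/user/case1": {"bind": "/data/case1", "mode": "rw"}},
    remove=True,                            # clean up when the run finishes
)
print(logs.decode())
```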

Streamlining HPC Workloads with Containers

“While we often talk about the density advantages of containers, it’s the opposite approach that we use in the High Performance Computing world! Here, we use exactly 1 system container per node, giving it unlimited access to all of the host’s CPU, Memory, Disk, IO, and Network. And yet we can still leverage the management characteristics of containers — security, snapshots, live migration, and instant deployment to recycle each node in between jobs. In this talk, we’ll examine a reference architecture and some best practices around containers in HPC environments.”
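For illustration only, here is a rough sketch of that recycle-the-node pattern. The talk concerns system containers, so this approximation (written with the Docker SDK for Python purely for brevity) is an assumption on my part; the image and job command are placeholders.

```python
# Rough sketch of one-container-per-node recycling; not the speaker's exact
# setup. Uses the Docker SDK for Python; image and job command are placeholders.
import docker

client = docker.from_env()

def run_job_on_fresh_node(image, job_command):
    """Give one job a fresh container with full access to the host's
    CPU, memory, disk, and network, then tear it down afterwards."""
    container = client.containers.run(
        image=image,
        command=job_command,
        network_mode="host",   # no network namespace isolation
        ipc_mode="host",       # share host IPC (useful for shared memory)
        detach=True,
    )
    status = container.wait()  # block until the job exits
    container.remove()         # recycle the node for the next job
    return status["StatusCode"]

# Hypothetical usage:
# exit_code = run_job_on_fresh_node("centos:7", "mpirun -np 16 ./solver")
```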

Nimbix Cloud Adds Docker Integration to JARVICE

“PushToCompute is the easiest and most advanced DevOps pipeline for high performance applications available today,” said Nimbix CTO Leo Reiter. “It seamlessly enables serverless computing of even the most complex workflows, greatly simplifying application deployment at scale, and eliminating the need for any platform orchestration or user interface work. Developers simply focus on their specific functionality, rather than on building cloud capabilities into their applications.”

Putting HPC into the Hands of Every Engineer and Scientist

In this special guest feature from Scientific Computing World, Wolfgang Gentzsch explains the role of HPC container technology in providing ubiquitous access to HPC. “The advent of lightweight, pervasive, packageable, portable, scalable, interactive, easy-to-access-and-use HPC application containers based on Docker technology, running seamlessly on workstations, servers, and clouds, is bringing us ever closer to the democratization of HPC.”

Video: User Managed Virtual Clusters in Comet

Rick Wagner from SDSC presented this talk at the 4th Annual MVAPICH User Group. “At SDSC, we have created a novel framework and infrastructure by providing virtual HPC clusters to projects using the NSF-sponsored Comet supercomputer. Managing virtual clusters on Comet is similar to managing a bare-metal cluster in terms of processes and tools that are employed. This is beneficial because such processes and tools are familiar to cluster administrators.”

Podcast: Using Docker for Science at TACC

In this TACC podcast, Joe Stubbs from the Texas Advanced Computing Center describes the potential benefits of the open container platform Docker to scientists, both in supporting reproducibility and in powering the NSF-funded Agave API. “As more scientists share not only their results but their data and code, Docker is helping them reproduce the computational analysis behind the results. What’s more, Docker is one of the main tools used in the Agave API platform, a platform-as-a-service solution for hybrid cloud computing developed at TACC and funded in part by the National Science Foundation.”
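A minimal sketch of the reproducibility idea, assuming a hypothetical image digest and analysis script (this is not the Agave API): pinning the container image to an exact digest lets collaborators rerun an analysis against a byte-identical software stack.

```python
# Illustrative sketch of the reproducibility idea only; not the Agave API.
# The image digest, mounted directory, and analysis script are hypothetical.
import subprocess

IMAGE = ("example/analysis@sha256:"
         "0000000000000000000000000000000000000000000000000000000000000000")

# Running the pinned image guarantees the same libraries and tool versions
# on every machine that reproduces this analysis.
subprocess.run(
    ["docker", "run", "--rm",
     "-v", "/home/user/study:/work", "-w", "/work",
     IMAGE, "python", "analyze.py", "--input", "results.csv"],
    check=True,
)
```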

Manage Reproducibility of Computational Workflows with Docker Containers and Nextflow

“Research computational workflows consist of several pieces of third-party software and, because of their experimental nature, frequent changes and updates are commonly necessary, thus raising serious deployment and reproducibility issues. Docker containers are emerging as a possible solution for many of these problems, as they allow the packaging of pipelines in an isolated and self-contained manner. This presentation will introduce our experience deploying genomic pipelines with Docker containers at the Center for Genomic Regulation (CRG). I will discuss how we implemented it, the main issues we faced, and the pros and cons of using Docker in an HPC environment, including a benchmark of the impact of container technology on the performance of the executed applications.”
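A minimal sketch of the underlying idea, assuming hypothetical image tags and commands (Nextflow itself expresses this declaratively in its pipeline configuration): each step runs inside its own pinned container image, so the software stack is identical on every execution.

```python
# Rough sketch of the idea, not Nextflow itself. Image tags and commands
# below are hypothetical placeholders.
import os
import subprocess

PIPELINE = [
    # (container image pinned to an exact version, command run inside it)
    ("example/bwa:0.7.15",     "bwa mem ref.fa reads.fq > aln.sam"),
    ("example/samtools:1.3.1", "samtools sort -o aln.bam aln.sam"),
]

def run_step(image, command):
    """Run one pipeline step in an isolated, self-contained container,
    mounting the working directory so intermediate files are shared."""
    subprocess.run(
        ["docker", "run", "--rm",
         "-v", f"{os.getcwd()}:/data", "-w", "/data",
         image, "sh", "-c", command],
        check=True,
    )

for image, command in PIPELINE:
    run_step(image, command)
```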

Reducing the Time to Science with Efficient Clouds

In this special guest feature from Scientific Computing World, Dr Bruno Silva from The Francis Crick Institute in London writes that new cloud technologies will make the cloud even more important to scientific computing. “The emergence of public cloud and the ability to cloud-burst is actually the real game-changer. Because of its ‘infinite’ amount of resources (effectively always under-utilized), it allows for a clear decoupling of time-to-science from efficiency. One can be somewhat less efficient in a controlled fashion (higher cost, slightly more waste) to minimize time-to-science when required (in burst, so to speak) by effectively growing the computing estate available beyond the fixed footprint of local infrastructure – this is often referred to as the hybrid cloud model. You get both the benefit of efficient infrastructure use, and the ability to go beyond that when strictly required.”
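A toy back-of-the-envelope calculation, with entirely made-up numbers, of the trade-off described above: bursting to public cloud raises the cost per unit of work but shortens time-to-science.

```python
# Toy illustration with invented figures; not from the article.
LOCAL_NODES = 100                 # fixed on-premises footprint
BACKLOG_NODE_HOURS = 1000         # queued work
LOCAL_COST_PER_NODE_HOUR = 0.05   # hypothetical amortized on-prem cost
CLOUD_COST_PER_NODE_HOUR = 0.15   # hypothetical on-demand cloud cost

def time_and_cost(burst_nodes):
    """Hours to clear the backlog and total cost when adding burst_nodes."""
    hours = BACKLOG_NODE_HOURS / (LOCAL_NODES + burst_nodes)
    cost = (LOCAL_NODES * hours * LOCAL_COST_PER_NODE_HOUR
            + burst_nodes * hours * CLOUD_COST_PER_NODE_HOUR)
    return hours, cost

for burst in (0, 100, 400):
    hours, cost = time_and_cost(burst)
    print(f"burst={burst:3d} nodes  time-to-science={hours:5.1f} h  cost=${cost:6.2f}")
```

With these invented figures, clearing the backlog drops from 10 hours to 2 hours while total cost rises from $50 to $130: a controlled loss of efficiency in exchange for a shorter time-to-science, which is exactly the hybrid cloud trade-off the article describes.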