Ensuring reproducible results for HPC workloads, no matter the platform used, is key to settling the battle between operations (update fast, due to compliance and security requirements) and end users (never touch a running system). By pushing his Docker-related HPC research further, Christian is showcasing his results on Immutable Application Containers.
In this slidecast, Christian Kniep presents: QNIBTerminal Plus InfiniBand – Containerized MPI Workloads. “QNIB Solutions (early on called ‘QNIB Inc’) derives from the first project Christian did during his B.Sc. report, an InfiniBand monitoring suite. For the sake of the report it was named ‘OpenIBPM: Open Source InfiniBand Performance Monitoring’. Afterwards Christian renamed it to match his last name (Kniep): ‘QNIB: Qualified Networking with InfiniBand’. Since then QNIB has become a pet project’s theme.”
“One important recent technological development might have the power to change the world of HPC cloud: UberCloud Containers. The UberCloud started in mid-2013 using an open platform called Docker, which can package an application and its dependencies in a virtual container that runs on any modern Linux server. The UberCloud enhanced Docker to suit technical computing applications in science and engineering.”
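As a rough illustration of what “packaging an application and its dependencies” looks like in practice, a minimal Dockerfile for an MPI program might read as follows. This is a hedged sketch, not the UberCloud or QNIB build recipe: the base image, file names, and package selection are assumptions for illustration only.

```dockerfile
# Hypothetical sketch: an MPI application and its runtime
# dependencies packaged into one container image.
FROM ubuntu:22.04

# Install an MPI implementation and a compiler toolchain
RUN apt-get update && apt-get install -y --no-install-recommends \
        openmpi-bin libopenmpi-dev build-essential && \
    rm -rf /var/lib/apt/lists/*

# Build the application inside the image, so the binary and all
# its libraries travel together to any modern Linux host
COPY hello_mpi.c /opt/app/
RUN mpicc /opt/app/hello_mpi.c -o /opt/app/hello_mpi

# Default command: launch the program under mpirun
CMD ["mpirun", "--allow-run-as-root", "-np", "4", "/opt/app/hello_mpi"]
```

The same image then runs identically on a laptop or a cluster node, e.g. `docker build -t hello-mpi . && docker run --rm hello-mpi`, which is precisely the reproducibility property the article describes.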
The software-defined data center is the underlying data center architecture that allows most IT infrastructure to be defined in software and to function as enterprise-wide resources. This approach enables IT-as-a-Service (ITaaS) to be delivered in a virtualized environment with greater agility, speed, and quality of service.
“The HPC community has had a long-standing interest in creating scale-out environments for running throughput-oriented and parallel distributed workloads. Both large-scale environments (for example, cloud computing facilities) and scale-out workloads (such as Big Data) are becoming more important in the enterprise. In fact, with the rise of Big Data, the advent of affordable, powerful clusters, and strategies that take advantage of commodity systems for scale-out applications, these days the enterprise computing environment is looking a lot like HPC.”
Virtualization allows workloads to be compartmentalized in their own VMs in order to take full advantage of the underlying parallelism of today’s multicore, heterogeneous HPC systems without compromising security. This approach is particularly beneficial for organizations centralizing multiple groups onto a shared cluster, or for teams with security requirements – for example, a life sciences environment where access to genomic data needs to be restricted to specific researchers.