“PushToCompute is the easiest and most advanced DevOps pipeline for high performance applications available today”, said Nimbix CTO Leo Reiter. “It seamlessly enables serverless computing of even the most complex workflows, greatly simplifying application deployment at scale, and eliminating the need for any platform orchestration or user interface work. Developers simply focus on their specific functionality, rather than on building cloud capabilities into their applications.”
In this special guest feature from Scientific Computing World, Wolfgang Gentzsch explains the role of HPC container technology in providing ubiquitous access to HPC. “The advent of lightweight, pervasive, packageable, portable, scalable, interactive, easy-to-access-and-use HPC application containers based on Docker technology, running seamlessly on workstations, servers, and clouds, is bringing us ever closer to the democratization of HPC.”
Today Bright Computing released Version 7.3 of Bright Cluster Manager and Bright OpenStack. The new release adds enhanced support for containers, tighter integration with Amazon Web Services (AWS), improvements to the interface with the Ceph distributed object store and file system, and a variety of other updates that make deployment and configuration easier and more intuitive.
Today the FlyElephant team announced the release of the FlyElephant 2.0 platform for High Performance Computing. Version 2.0 enhancements include: an internal expert community, collaboration on projects, public tasks, Docker and Jupyter support, a new file storage system, and support for working with HPC clusters.
Today Univa announced the general availability of its Grid Engine 8.4.0 product. Enterprises can now automatically dispatch and run jobs in Docker containers, from a user-specified Docker image, on a Univa Grid Engine cluster. This significant update simplifies running complex applications in a Grid Engine cluster and reduces configuration and OS issues. Grid Engine 8.4.0 isolates user applications into their own containers, avoiding conflicts with other jobs on the system, and enables legacy applications in Docker containers and non-container applications to run in the same cluster.
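A container-based submission along these lines might look like the sketch below. Note that the resource names (`docker`, `docker_images`) and the image pattern are assumptions made for illustration based on Univa's announcement, not verified syntax — consult the Univa Grid Engine 8.4.0 documentation for the actual resource requests:

```shell
# Hypothetical sketch: ask Univa Grid Engine to run a binary job inside
# a container built from a user-specified Docker image.
# The resource names "docker" and "docker_images" are ASSUMED here for
# illustration; check the Univa Grid Engine docs for the real syntax.
qsub -l docker,docker_images="*centos:7*" -b y /bin/hostname
```

The point of the mechanism is that the user's application runs unchanged; Grid Engine handles pulling the image and placing the job in its own container alongside non-container jobs in the same cluster.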
In this TACC podcast, Joe Stubbs from the Texas Advanced Computing Center describes the potential benefits to scientists of the open container platform Docker in supporting reproducibility and the NSF-funded Agave API. “As more scientists share not only their results but their data and code, Docker is helping them reproduce the computational analysis behind the results. What’s more, Docker is one of the main tools used in the Agave API platform, a platform-as-a-service solution for hybrid cloud computing developed at TACC and funded in part by the National Science Foundation.”
“Research computational workflows consist of several pieces of third-party software and, because of their experimental nature, frequent changes and updates are commonly necessary, raising serious deployment and reproducibility issues. Docker containers are emerging as a possible solution for many of these problems, as they allow the packaging of pipelines in an isolated and self-contained manner. This presentation will introduce our experience deploying genomic pipelines with Docker containers at the Center for Genomic Regulation (CRG). I will discuss how we implemented it, the main issues we faced, and the pros and cons of using Docker in an HPC environment, including a benchmark of the impact of container technology on the performance of the executed applications.”
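Packaging a pipeline step in this isolated, self-contained way can be sketched with a minimal Dockerfile. The tool chosen here (samtools) and the base image are illustrative assumptions, not the actual CRG pipelines:

```dockerfile
# Illustrative sketch only: package a single pipeline tool (samtools)
# in a self-contained image so the same analysis can be reproduced on
# a workstation, a cluster node, or a cloud VM.
FROM debian:stable-slim

# Install the tool and its runtime dependencies inside the image, so
# nothing is required from the host beyond Docker itself.
RUN apt-get update && \
    apt-get install -y --no-install-recommends samtools && \
    rm -rf /var/lib/apt/lists/*

# Run the tool against data mounted from the host, e.g.:
#   docker run --rm -v "$PWD":/data pipeline-step samtools flagstat /data/sample.bam
ENTRYPOINT ["samtools"]
```

Because the tool's exact version is frozen into the image, frequent changes to the host environment no longer affect the pipeline's results, which is the reproducibility argument the presentation makes.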
In this special guest feature from Scientific Computing World, Dr Bruno Silva from The Francis Crick Institute in London writes that new cloud technologies will make the cloud even more important to scientific computing. “The emergence of public cloud and the ability to cloud-burst is actually the real game-changer. Because of its ‘infinite’ amount of resources (effectively always under-utilized), it allows for a clear decoupling of time-to-science from efficiency. One can be somewhat less efficient in a controlled fashion (higher cost, slightly more waste) to minimize time-to-science when required (in burst, so to speak) by effectively growing the computing estate available beyond the fixed footprint of local infrastructure – this is often referred to as the hybrid cloud model. You get both the benefit of efficient infrastructure use, and the ability to go beyond that when strictly required.”
“With Docker v1.9 a new networking system was introduced, which allows multi-host networking to work out-of-the-box in any Docker environment. This talk provides an introduction on what Docker networking provides, followed by a demo that spins up a full SLURM cluster across multiple machines. The demo is based on QNIBTerminal, a Consul-backed set of Docker images to spin up a broad set of software stacks.”
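The multi-host setup the talk demonstrates can be sketched as follows. The network and image names are placeholders, and Docker 1.9 overlay networking assumes each engine is configured against a shared key-value store (such as the Consul instance QNIBTerminal uses):

```shell
# Sketch of Docker 1.9+ multi-host networking (names are placeholders).
# Assumes each Docker engine was started with a shared key-value store,
# e.g. --cluster-store=consul://consul-host:8500

# Create an overlay network that is visible to every engine in the cluster:
docker network create -d overlay slurm_net

# On one host, start the SLURM controller container on that network:
docker run -d --net=slurm_net --name slurmctld my/slurmctld-image

# On any other host, start compute-node containers; because they share
# the overlay network, they can reach "slurmctld" by name across machines:
docker run -d --net=slurm_net --name compute1 my/slurmd-image
```

The overlay driver gives containers on different machines a common network namespace with built-in name resolution, which is what lets a SLURM controller and its compute nodes discover each other without host-level configuration.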
“UberCloud specializes in running HPC workloads on a broad spectrum of infrastructures, anywhere from national centers to public cloud services. This session will be a review of the lessons learned from UberCloud Experiments performed by industry end users. The live demonstration will cover how to achieve peak simulation performance and usability in the cloud and national centers, using fast interconnects, new-generation CPUs, SSD drives, and UberCloud technology based on Linux containers.”