In this video from the Docker Workshop at ISC 2015, Christian Kniep from QNIB Solutions shows how he uses Docker to provide an HPC software stack in a box, encapsulating each layer of the HPC stack within a Linux container.
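The layer-per-container idea can be sketched with a small wrapper around the standard Docker CLI. This is a minimal illustration only, not Kniep's actual tooling, and the image name `qnib/openmpi` is a hypothetical placeholder for one stack layer (an MPI runtime):

```shell
#!/bin/sh
# Sketch: run one layer of an HPC stack (e.g. an MPI runtime) as a container.
# The image name "qnib/openmpi" is a hypothetical placeholder, not a
# confirmed published image.
run_layer() {
    image="$1"; shift
    if command -v docker >/dev/null 2>&1; then
        # Launch the layer, removing the container when the command exits.
        docker run --rm "$image" "$@" || echo "could not run $image"
    else
        # Fall back to a dry run on hosts without Docker installed.
        echo "dry run: docker run --rm $image $*"
    fi
}

# Each stack layer would get its own image and its own invocation.
run_layer qnib/openmpi mpirun --version
```

Stacking one such container per layer (resource manager, MPI, application) is what gives the "software stack in a box" its isolation between components.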
“Since Hurricane Katrina made landfall in 2005, storm prediction technology has seen dramatic forward movement, from improved software to better use of observations and increased computing power – all aimed at giving emergency decision makers more time and specifics to help protect lives and property. The expert panelists in this Congressional Briefing outline research advances that have led to better forecasting of hurricane and tropical storm weather and impacts. And they spotlight research directions that hold promise for future improvements.”
“When Professor Ross Walker explains what he does for a living, he says he’s on the cutting edge of drug discovery research using supercomputers. Today, he and his team build supercomputer molecular biology software thanks in part to a partnership with Intel’s Software Academic Program. The program provides tools and resources to help Walker’s molecular dynamics lab develop highly effective supercomputer simulations.”
“As a result of a new alliance with Intel, HP is offering its HPC Solutions Framework based on HP Apollo servers, which are specialized for HPC and now optimized to support industry-specific software applications from leading independent software vendors. These solutions will dramatically simplify the deployment of HPC for customers in industries such as oil and gas, life sciences and financial services. The HP Apollo product line integrates Intel’s technology innovation from its HPC scalable system framework, which helps to extend the resilience, reliability, power efficiency and price/performance of the HP Apollo solutions.”
In this video from the Velocity 2015 conference, Brendan Gregg from Netflix presents a 90-minute tutorial on Linux performance tools. “There are many performance tools nowadays for Linux, but how do they all fit together, and when do we use them? I’ve spoken on this topic before, but given a 90 minute time slot I was able to include more methodologies, tools, and live demonstrations, making it the most complete tour of the topic I’ve done.”
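The flavor of such a tutorial can be sketched with a couple of the standard procps utilities that any Linux performance triage starts from. This is an illustrative sample only, not Gregg's actual command sequence:

```shell
#!/bin/sh
# Two quick first-look checks on a Linux host, built on ubiquitous
# procps tools (uptime, free). Illustrative, not a full methodology.

# Extract the 1/5/15-minute load averages from uptime's output.
load_avg() { uptime | awk -F'load average: ' '{print $2}'; }

# Report free memory in megabytes from free's "Mem:" row.
mem_free_mb() { free -m | awk '/^Mem:/ {print $4}'; }

echo "load averages: $(load_avg)"
echo "free memory (MB): $(mem_free_mb)"
```

Each tool answers one narrow question (CPU saturation, memory headroom); the point of a methodology is knowing which question to ask next.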
“IBM Platform Data Manager for LSF takes control of data transfers to help organizations improve data throughput and lower costs by minimizing wasted compute cycles and conserving disk space. Platform Data Manager automates the transfer of data used by application workloads running on IBM Platform LSF clusters and the cloud, bringing frequently used data closer to compute resources by storing it in a smart, managed cache that can be shared among users and workloads.”
“The framework established between Seagate and Micron has led to the development and production of this next-generation, high-capacity SAS SSD platform. It is the first 12 gigabits-per-second (Gb/s) SAS device to optimize dual channel throughput with up to 1800 megabytes-per-second (MB/s) sequential reads and offer multiple endurance choices within a single hardware and firmware design. These advantages engineered for the 1200.2 SAS SSD ensure it will deliver ultra-fast performance to match the needs of specific enterprise applications and workloads.”