Performing Simulation-Based, Real-time Decision Making with Cloud HPC

Zach Smocha from Rescale presented this talk at the HPC User Forum. “Manor Racing is partnering with San Francisco-based Rescale as a key technology provider for its 2016 FIA Formula 1 World Championship challenge. Manor Racing will use Rescale’s cloud high performance computing (HPC) platform to enable trackside simulation on a whole new scale for the team. Working in tandem with Manor Racing’s existing race strategy simulation software, the Rescale cloud HPC platform will enable its engineers to evaluate thousands of simulations and strategies, placing the team at the cutting edge of innovative decision making during a Grand Prix weekend. The whole process is executed from a laptop web browser on Rescale’s massively scalable cloud infrastructure and compute environment.”

Video: HPC Trends from the Trenches at Bio-IT World

In this video, Chris Dagdigian from BioTeam delivers his annual assessment of the best, the worthwhile, and the most overhyped information technologies for life sciences at the 2016 Bio-IT World Conference & Expo in Boston. “The presentation tries to recap the prior year by discussing what has changed (or not) around infrastructure, storage, computing, and networks. This presentation will help scientists, leadership, and IT professionals understand the basic topics involved in supporting data-intensive science.”

Video: Intel’s Machine Learning Strategy

In this video from the HPC User Forum in Tucson, Gary Paek from Intel presents: Intel’s Machine Learning Strategy. “Earlier this week, Intel announced the inception of the Intel Data Analytics Acceleration Library (Intel DAAL) open source project. Intel DAAL helps to speed up big data analysis by providing highly optimized algorithmic building blocks for all stages of data analytics (preprocessing, transformation, analysis, modeling, validation, and decision making) in batch, online, and distributed processing modes of computation.”
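
Intel DAAL itself is exposed through C++, Java, and Python interfaces, so the following is only a hypothetical C sketch of the idea behind the “online” processing mode the announcement mentions: partial results are updated block by block and finalized at the end, rather than requiring the whole dataset in memory at once. All names here are illustrative, not DAAL’s actual API.

```c
/* Illustrative sketch only: this hypothetical C code shows the idea
 * behind an "online" processing mode, where partial results are
 * folded in block by block instead of over the whole dataset. */
#include <stdio.h>

typedef struct {
    long   n;    /* observations seen so far */
    double sum;  /* running sum, enough to finalize a mean */
} PartialResult;

/* Online step: fold one block of data into the partial result. */
void update(PartialResult *p, const double *block, long len) {
    for (long i = 0; i < len; i++) p->sum += block[i];
    p->n += len;
}

/* Finalization step: turn the partial result into the statistic. */
double finalize(const PartialResult *p) {
    return p->n ? p->sum / (double)p->n : 0.0;
}

int main(void) {
    PartialResult p = {0, 0.0};
    double block1[] = {1.0, 2.0, 3.0};
    double block2[] = {4.0, 5.0};
    update(&p, block1, 3);   /* blocks arrive one at a time... */
    update(&p, block2, 2);   /* ...without holding all data at once */
    printf("mean = %f\n", finalize(&p));  /* mean = 3.000000 */
    return 0;
}
```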

Video: Cloud for the “Missing Middle”

Leo Reiter from Nimbix presented this deck at the HPC User Forum. “Nimbix is a pure high performance computing cloud built for volume, speed and simplicity. We give people the tools and the processing power to solve their biggest, toughest problems. We give you the freedom to imagine new possibilities, to test the limits of reality, and to model the future. For most workloads, Nimbix is far less expensive than building, running and maintaining your own supercomputer. It’s also more efficient at spinning up, executing, completing the job and delivering your results — which saves you time and money. And our user-friendly platform means you invest less in development and infrastructure.”

Hewlett Packard Enterprise Packs 8 GPUs into Apollo 6500 Server

In this video from the 2016 GPU Technology Conference, Greg Schmidt from Hewlett Packard Enterprise describes the new Apollo 6500 server. “With up to eight high performance NVIDIA GPU cards designed for maximum transfer bandwidth, the HPE Apollo 6500 System is purpose-built for deep learning applications. Its high ratio of GPUs to CPUs, dense 4U form factor and efficient design enable organizations to run deep learning recommendation algorithms faster and more efficiently, significantly reducing model training time and accelerating the delivery of real-time results, all while controlling costs.”
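
For context on what such a dense GPU node looks like to software, here is a minimal sketch using the standard CUDA runtime API (nothing HPE-specific) to enumerate the GPUs visible to a process; on a fully populated Apollo 6500 it would report eight devices. Build details will vary with the toolkit install.

```c
/* Minimal sketch: enumerate the GPUs in a multi-GPU node using the
 * standard CUDA runtime API. Requires the CUDA toolkit; build with,
 * e.g., nvcc gpuinfo.cu -o gpuinfo */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void) {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess) {
        fprintf(stderr, "no CUDA devices visible\n");
        return 1;
    }
    printf("%d GPU(s) found\n", count);
    for (int i = 0; i < count; i++) {
        struct cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("GPU %d: %s, %.1f GiB, compute capability %d.%d\n",
               i, prop.name,
               prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0),
               prop.major, prop.minor);
    }
    return 0;
}
```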

How HPE Makes GPUs Easier to Program for Data Scientists

In this video from the 2016 GPU Technology Conference, Rich Friedrich from Hewlett Packard Enterprise describes how the company makes it easier for data scientists to program GPUs. “In April, HPE announced a public, open-source version of its cognitive computing platform, called the Cognitive Computing Toolkit. Instead of relying on the traditional CPUs that power most computers, the Toolkit runs on graphics processing units (GPUs), inexpensive chips designed for video game applications.”

Video: Lustre Community Release Update

Peter Jones from Intel presented this talk at LUG 2016 in Portland. “The OpenSFS Lustre Working Group (LWG) is the place where the participants of OpenSFS come together to coordinate their software development efforts for Lustre, the high-performance, open source, parallel filesystem. This includes planning and the roadmap for community releases of Lustre.”

Video: AMD ROC – Radeon Open Compute Platform

Gregory Stoner from AMD presented this talk at the HPC User Forum. “With the announcement of the Boltzmann Initiative and the recent releases of ROCK and ROCR, AMD has ushered in a new era of heterogeneous computing. The Boltzmann Initiative exposes cutting-edge compute capabilities and features on targeted AMD/ATI Radeon discrete GPUs through an open source software stack. The Boltzmann stack comprises several components based on open standards, but extended so important hardware capabilities are not hidden by the implementation.”
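
ROCR is AMD’s implementation of the standard HSA runtime API, so one way to get a feel for the stack is a minimal host-side program that initializes the runtime and lists the CPU and GPU “agents” it exposes. This is a sketch against the public HSA 1.0 API; header and library locations vary by installation.

```c
/* Minimal sketch: ROCR implements the standard HSA runtime API, so a
 * host program can initialize the runtime and enumerate the "agents"
 * (CPUs and GPUs) it exposes. Link against the HSA runtime library,
 * e.g. -lhsa-runtime64; paths vary by install. */
#include <stdio.h>
#include <hsa.h>

/* Callback invoked once per agent by hsa_iterate_agents(). */
static hsa_status_t print_agent(hsa_agent_t agent, void *data) {
    char name[64];
    hsa_device_type_t type;
    hsa_agent_get_info(agent, HSA_AGENT_INFO_NAME, name);
    hsa_agent_get_info(agent, HSA_AGENT_INFO_DEVICE, &type);
    printf("agent: %s (%s)\n", name,
           type == HSA_DEVICE_TYPE_GPU ? "GPU" : "CPU/other");
    (void)data;
    return HSA_STATUS_SUCCESS;
}

int main(void) {
    if (hsa_init() != HSA_STATUS_SUCCESS) {
        fprintf(stderr, "HSA runtime failed to initialize\n");
        return 1;
    }
    hsa_iterate_agents(print_agent, NULL);
    hsa_shut_down();
    return 0;
}
```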

Video: Europe’s Fastest Supercomputer and the World Around It

Michael Resch from HLRS gave this rousing talk at the HPC User Forum. “HLRS supports national and European researchers from science and industry by providing high-performance computing platforms and technologies, services and support. Supercomputer Hazel Hen, a Cray XC40 system, is at the heart of the HPC system infrastructure of HLRS. With a peak performance of 7.42 petaflops (quadrillion floating point operations per second), Hazel Hen is one of the most powerful HPC systems in the world (position 8 on the November 2015 TOP500 list) and is the fastest supercomputer in the European Union. The HLRS supercomputer, which entered operation in October 2015, is based on Intel Haswell processors and the Cray Aries network and is designed for sustained application performance and high scalability.”

Slidecast: Advantages of Offloading Architectures for HPC

In this slidecast, Gilad Shainer from Mellanox describes the advantages of InfiniBand and the company’s offloading network architecture for HPC. “The path to Exascale computing is clearly paved with Co-Design architecture. By using a Co-Design approach, the network infrastructure becomes more intelligent, which reduces the overhead on the CPU and streamlines the process of passing data throughout the network. A smart network is the only way that HPC data centers can deal with the massive demands to scale, to deliver constant performance improvements, and to handle exponential data growth.”
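
One concrete place offloading pays off is communication/computation overlap: when the network hardware progresses transfers on its own, the CPU can keep computing between posting a message and completing it. The MPI sketch below is a generic illustration of that pattern, not a Mellanox-specific API.

```c
/* Minimal sketch: communication/computation overlap, the pattern an
 * offloading interconnect accelerates. While the transfer progresses
 * on the network, the CPU stays busy between MPI_Isend and MPI_Wait.
 * Build and run, e.g.: mpicc overlap.c && mpirun -np 2 ./a.out */
#include <mpi.h>
#include <stdio.h>

#define N 1000000

int main(int argc, char **argv) {
    int rank;
    static double buf[N], work[N];  /* static arrays, zero-initialized */
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Post the transfer without blocking. */
    if (rank == 0) {
        MPI_Isend(buf, N, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, &req);
    } else if (rank == 1) {
        MPI_Irecv(buf, N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &req);
    }

    /* Useful computation that overlaps with the transfer: on an
     * offloading fabric, the network moves the data meanwhile. */
    for (long i = 0; i < N; i++) work[i] = (double)i * 0.5;

    if (rank <= 1) MPI_Wait(&req, MPI_STATUS_IGNORE);
    if (rank == 1) printf("rank 1: transfer complete\n");

    MPI_Finalize();
    return 0;
}
```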