Video: Baidu Releases Fast Allreduce Library for Deep Learning

In this video, Andrew Gibiansky from Baidu describes baidu-allreduce, a newly released C library that enables faster training of neural network models across many GPUs. The library demonstrates the allreduce algorithm, which you can embed into any MPI-enabled application.
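For readers new to the collective, here is a minimal sketch of what an allreduce computes, using the standard MPI_Allreduce call rather than the baidu-allreduce API itself: every rank contributes a local buffer (standing in for gradients computed on one GPU), and every rank receives the element-wise sum. The file name and buffer size are illustrative.

/* allreduce_demo.c - illustrative only; uses standard MPI, not the
 * baidu-allreduce API. Each rank contributes a small "gradient" buffer
 * and every rank receives the element-wise sum. */
#include <mpi.h>
#include <stdio.h>

#define N 4  /* size of the toy gradient buffer */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Fill the local buffer with rank-dependent values. */
    float local[N], summed[N];
    for (int i = 0; i < N; i++)
        local[i] = (float)(rank + i);

    /* Allreduce: element-wise sum across all ranks; every rank gets the result. */
    MPI_Allreduce(local, summed, N, MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);

    printf("rank %d of %d: %.1f %.1f %.1f %.1f\n",
           rank, size, summed[0], summed[1], summed[2], summed[3]);

    MPI_Finalize();
    return 0;
}

Compile and run with, for example, mpicc allreduce_demo.c -o allreduce_demo && mpirun -np 4 ./allreduce_demo. baidu-allreduce targets the same collective pattern, aimed at the large buffers exchanged when training neural networks across many GPUs.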

NeSI in New Zealand Installs Pair of Cray Supercomputers

The New Zealand eScience Infrastructure (NeSI) is commissioning a new HPC system that will be colocated at two facilities. “The new systems provide a step change in power to NeSI’s existing services, including a Cray XC50 Supercomputer and a Cray CS400 cluster High Performance Computer, both sharing the same high performance and offline storage systems.”

Video: Deep Learning on Azure with GPUs

In this video, you’ll learn how to start submitting deep neural network (DNN) training jobs in Azure by using Azure Batch to schedule the jobs to your GPU compute clusters. “Previously, few people had access to the computing power for these scenarios. With Azure Batch, that power is available to you when you need it.”

ORNL Readies Facility for 200 Petaflop Summit Supercomputer

Oak Ridge National Laboratory is moving equipment this month into a new high-performance computing center that is anticipated to become one of the world’s premier resources for open science computing. “There were a lot of considerations to be had when designing the facilities for Summit,” explained George Wellborn, Heery Project Architect. “We are essentially harnessing a small city’s worth of power into one room. We had to ensure the confined space was adaptable for the power and cooling that is needed to run this next-generation supercomputer.”

Purdue Adds New Resource for GPU-accelerated Research Computing

A new computing resource is available for Purdue researchers running applications that can take advantage of GPU accelerators. The system, known as Halstead-GPU, is a newly GPU-equipped portion of Halstead, Purdue’s newest community cluster research supercomputer. Each Halstead-GPU node has two 10-core Intel Xeon E5 CPUs, 256 GB of RAM, an EDR InfiniBand interconnect, and two NVIDIA Tesla P100 GPUs. The GPU nodes have the same high-speed scratch storage as the main Halstead cluster.

Dell EMC Supercomputer to Power OzGRav Studies of Black Holes

Today Dell EMC announced it will build a supercomputer to power Swinburne University of Technology’s groundbreaking research into astrophysics and gravitational waves. “We will be looking for gravitational waves that help us learn more about supernovas, the formation of stars, intergalactic gases and more,” said Professor Bailes. “It’s exciting to think that we as OzGRav could make the next landmark discovery in gravitational wave astrophysics – and the Dell EMC supercomputer will allow us to capture, visualise and process the data to make those discoveries.”

DeepSat: Monitoring the Earth’s Vitals with AI

In order to better keep a finger on the pulse of the Earth’s health, NASA developed DeepSat, a deep learning AI framework for satellite image classification and segmentation. DeepSat provides vital signs of changing landscapes at the highest possible resolution, enabling scientists to use the data for independent modeling efforts.

One Stop Systems Rolls Out 4U Value Expansion System

One Stop Systems has rolled out its 4U Value Expansion System (4UV). “The Value Expansion System is ideal for customers on a tight budget who need high-density PCIe expansion. Customers can utilize the 4UV for GPUs, flash or a combination of both, providing performance gains in many applications like deep learning, oil and gas exploration, financial calculations, and video rendering.”

HPC Analyst Crossfire at ISC 2017

In this video from ISC 2017, Addison Snell from Intersect360 Research fires back at industry leaders with hard-hitting questions about the state of the HPC industry. “Listen in as visionary leaders from the supercomputing community comment on forward-looking trends that will shape the industry this year and beyond.”

IBM Scales TensorFlow and Caffe to 256 GPUs

Over at IBM, Sumit Gupta writes that the company has enabled record-breaking image recognition capabilities that make Deep Learning much more practical at scale. “The bottom line is that the record IBM broke slashes Deep Learning training time from days to hours, which will enable customers to more easily address larger technical challenges significantly faster.”