Comet Supercomputer Doubles Down on Nvidia Tesla P100 GPUs

The San Diego Supercomputer Center has been granted a supplemental award from the National Science Foundation to double the number of GPUs on its petascale-level Comet supercomputer. “This expansion is reflective of a wider adoption of GPUs throughout the scientific community, which is being driven in large part by the availability of community-developed applications that have been ported to and optimized for GPUs,” said SDSC Director Michael Norman, who is also the principal investigator for the Comet program.

One Stop Systems Announces SkyScale HPC as a Service

Today One Stop Systems (OSS) announced the launch of SkyScale, a new company that provides HPC as a Service (HPCaaS). For years OSS has been designing and manufacturing the latest in high performance computing and storage systems; now customers can lease time on these same systems, saving time and money. OSS systems are the distinguishing factor in SkyScale’s HPCaaS offering: OSS was the first company to successfully produce a system that operates sixteen of the latest NVIDIA Tesla GPU accelerators connected to a single server. These systems are employed today in deep learning applications and in a variety of industries, including defense and oil and gas.

HPC Workflows Using Containers

“In this talk we will discuss a workflow for building and testing Docker containers and their deployment on an HPC system using Shifter. Docker is widely used by developers as a powerful tool for standardizing the packaging of applications across multiple environments, which greatly eases porting efforts. Shifter, on the other hand, provides a container runtime that has been built specifically to fit the needs of HPC. We will briefly introduce these tools while discussing the advantages of using these technologies to fulfill the needs of specific HPC workflows, e.g., security, high performance, portability and parallel scalability.”
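By way of illustration, here is a minimal sketch of the build-locally, run-on-HPC pattern the abstract describes. The image name (myuser/simulation:latest), the test entrypoint, and the Slurm job parameters are hypothetical placeholders not taken from the talk; docker, shifterimg, srun, and shifter are the standard tools, but registry access and job settings will differ from site to site.

#!/usr/bin/env python3
"""Illustrative sketch of a Docker-to-Shifter workflow (assumptions noted above)."""
import subprocess

IMAGE = "myuser/simulation:latest"   # hypothetical image name

def run(cmd):
    """Echo a command, then run it, raising an error if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Build and test the container locally with Docker.
run(["docker", "build", "-t", IMAGE, "."])
run(["docker", "run", "--rm", IMAGE, "./run_tests.sh"])   # hypothetical test script

# 2. Push the image to a registry the HPC center can reach.
run(["docker", "push", IMAGE])

# 3. On the HPC system's login node: pull the image into Shifter
#    and launch it as a Slurm job through the Shifter runtime.
run(["shifterimg", "pull", f"docker:{IMAGE}"])
run(["srun", "-N", "4", "--ntasks-per-node=12",
     "shifter", f"--image=docker:{IMAGE}", "./simulation", "input.dat"])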

Deep Learning on the SaturnV Cluster

“The basic idea of deep learning is to automatically learn to represent data in multiple layers of increasing abstraction, thus helping to discover intricate structure in large datasets. NVIDIA has invested in SaturnV, a large GPU-accelerated cluster (#28 on the November 2016 Top500 list), to support internal machine learning projects. After an introduction to deep learning on GPUs, we will address a selection of open questions programmers and users may face when using deep learning for their work on these clusters.”
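As a concrete illustration of “multiple layers of increasing abstraction,” the short sketch below stacks a few fully connected layers in PyTorch and runs one training step on a GPU when available. The layer widths and the random data are placeholders, not anything from the SaturnV talk.

# Minimal sketch (assumption: PyTorch is installed; data and sizes are placeholders).
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# Three stacked fully connected layers; each learns a more abstract representation.
model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),   # low-level features
    nn.Linear(256, 64), nn.ReLU(),    # intermediate abstractions
    nn.Linear(64, 10),                # task-level output (e.g. 10 classes)
).to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# One training step on random placeholder data, just to show the loop structure.
x = torch.randn(32, 784, device=device)
y = torch.randint(0, 10, (32,), device=device)

optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.4f}")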

Tesla P100 GPUs Speed Cloud-Based Deep Learning on the Nimbix Cloud

“Nimbix has tremendous experience in GPU cloud computing, going all the way back to NVIDIA’s Fermi architecture,” said Steve Hebert, CEO of Nimbix. “We are looking forward to accelerating deep learning and analytics applications for customers seeking the latest generation GPU technology available in a public cloud.”

NVIDIA Tesla P100 GPU Speeds AI Workloads in the IBM Cloud

Today IBM announced that it is the first major cloud provider to make the Nvidia Tesla P100 GPU accelerator available globally on the cloud. “As the AI era takes hold, demand continues to surge for our GPU-accelerated computing platform in the cloud,” said Ian Buck, general manager, Accelerated Computing, NVIDIA. “These new IBM Cloud offerings will provide users with near-instant access to the most powerful GPU technologies to date – enabling them to create applications to address complex problems that were once unsolvable.”

Dell Powers New Owens Cluster at Ohio State

Today the Ohio Supercomputer Center dedicated its newest, most powerful supercomputer: the Owens Cluster. The Dell cluster, named for the iconic Olympic champion Jesse Owens, delivers 1.5 petaflops of total peak performance. “OSC’s Owens Cluster represents one of the most significant HPC systems Dell has built,” said Tony Parkinson, Vice President for NA Enterprise Solutions and Alliances at Dell.

Radio Free HPC Looks at Azure’s Move to GPUs and OCP for Deep Learning

In this podcast, the Radio Free HPC team looks at a set of IT and Science stories. Microsoft Azure is making a big move to GPUs and the OCP Platform as part of their Project Olympus. Meanwhile, Huawei is gaining market share in the server market and IBM is bringing storage to the atomic level.

E4 Computer Engineering Showcases New Petascale OCP Platform

Today E4 Computer Engineering from Italy showcased a new petaflops-class Open Compute server with “remarkable energy efficiency” based on the IBM POWER architecture. “Finding new ways of making easily deployable and energy-efficient HPC solutions is often a complex task, which requires a lot of planning, testing and benchmarking,” said Cosimo Gianfreda, CTO and Co-Founder of E4 Computer Engineering. “We are very lucky to work with great partners like Wistron, as their timing and accuracy means we have all the right conditions for an effective time-to-market. I strongly believe that the performance of the node, coupled with the power monitoring technology, will gain wide acceptance from the HPC and enterprise community.”

Introduction to GPUs in HPC

“This video is from the opening session of the ‘Introduction to Programming Pascal (P100) with CUDA 8’ workshop at CSCS in Lugano, Switzerland. The three-day course is intended to offer an introduction to computing on the Pascal (P100) GPU architecture using CUDA 8.”