Intel FPGAs Accelerate Microsoft’s Project Brainwave

Today, Intel announced that its AI technology is being used by Microsoft to power its new accelerated deep learning platform, called Project Brainwave. “Project Brainwave achieves a major leap forward in both performance and flexibility for cloud-based serving of deep learning models. We designed the system for real-time AI, which means the system processes requests as fast as it receives them, with ultra-low latency. Real-time AI is becoming increasingly important as cloud infrastructures process live data streams, whether they be search queries, videos, sensor streams, or interactions with users.”

Video: Deep Learning on Azure with GPUs

In this video, you’ll learn how to start submitting deep neural network (DNN) training jobs in Azure by using Azure Batch to schedule the jobs to your GPU compute clusters. “Previously, few people had access to the computing power for these scenarios. With Azure Batch, that power is available to you when you need it.”

Microsoft Acquires Cycle Computing

Today Microsoft announced it has acquired Cycle Computing, a software company focused on making cloud computing resources more readily available for HPC workloads. “Now supporting InfiniBand and accelerated GPU computing, Microsoft Azure looks to be a perfect home for Cycle Computing, which started its journey with software for aggregating compute resources at AWS. The company later added similar capabilities for Azure and Google Cloud.”

How Intel FPGAs Power Azure Deep Learning

Microsoft Azure CTO Mark Russinovich recently disclosed major advances in Microsoft’s hyperscale deployment of Intel field programmable gate arrays (FPGAs). These advances have resulted in the industry’s fastest public cloud network, and new technology for acceleration of Deep Neural Networks (DNNs) that replicate “thinking” in a manner that’s conceptually similar to that of the human brain.

Rock Stars of HPC: Karan Batta

From software developer at a small start-up in New Zealand, to Senior Program Manager at one of the largest multinational technology companies in the US, Karan Batta has led a career touched by HPC – even if he didn’t always realize it at the time. As the driving force behind the GPU Infrastructure vision, roadmap and deployment in Microsoft Azure, Karan Batta is a Rock Star of HPC.

Rambus Collaborates with Microsoft on Cryogenic Memory

“With the increasing challenges in conventional approaches to improving memory capacity and power efficiency, our early research indicates that a significant change in the operating temperature of DRAM using cryogenic techniques may become essential in future memory systems,” said Dr. Gary Bronner, vice president of Rambus Labs. “Our strategic partnership with Microsoft has enabled us to identify new architectural models as we strive to develop systems utilizing cryogenic memory. The expansion of this collaboration will lead to new applications in high-performance supercomputers and quantum computers.”

Overview of the HGX-1 AI Accelerator Chassis

“The Project Olympus hyperscale GPU accelerator chassis for AI, also referred to as HGX-1, is designed to support eight of the latest ‘Pascal’ generation NVIDIA GPUs and NVIDIA’s NVLink high speed multi-GPU interconnect technology, and provides high bandwidth interconnectivity for up to 32 GPUs by connecting four HGX-1 together. The HGX-1 AI accelerator provides extreme performance scalability to meet the demanding requirements of fast growing machine learning workloads, and its unique design allows it to be easily adopted into existing datacenters around the world.”

Radio Free HPC Looks at Azure’s Move to GPUs and OCP for Deep Learning

In this podcast, the Radio Free HPC team looks at a set of IT and Science stories. Microsoft Azure is making a big move to GPUs and the OCP Platform as part of their Project Olympus. Meanwhile, Huawei is gaining market share in the server market and IBM is bringing storage to the atomic level.

Nvidia Brings AI to the Cloud with the HGX-1 Hyperscale GPU Accelerator

Today, Microsoft, NVIDIA, and Ingrasys announced a new industry standard design to accelerate Artificial Intelligence in the next generation cloud. “Powered by eight NVIDIA Tesla P100 GPUs in each chassis, HGX-1 features an innovative switching design based on NVIDIA NVLink interconnect technology and the PCIe standard, enabling a CPU to dynamically connect to any number of GPUs. This allows cloud service providers that standardize on the HGX-1 infrastructure to offer customers a range of CPU and GPU machine instance configurations.”

Microsoft Releases Batch Shipyard on GitHub for Docker on Azure

“Available on GitHub as Open Source, the Batch Shipyard toolkit enables easy deployment of batch-style Dockerized workloads to Azure Batch compute pools. Azure Batch enables you to run parallel jobs in the cloud without having to manage the infrastructure. It’s ideal for parametric sweeps, Deep Learning training with NVIDIA GPUs, and simulations using MPI and InfiniBand.”
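For a sense of what a Batch Shipyard workload definition looks like, here is a minimal jobs-configuration sketch. The job id, Docker image, and command are illustrative placeholders, not from the announcement; check the Batch Shipyard repository on GitHub for the current configuration schema before use.

```yaml
# Hypothetical Batch Shipyard jobs configuration sketch (illustrative values only)
job_specifications:
- id: dnn-training-job            # placeholder job id
  tasks:
  - docker_image: alpine:latest   # placeholder image; a real job would use a training container
    command: /bin/sh -c "echo hello from Azure Batch"
```

A configuration like this, together with the toolkit's credentials and pool definitions, lets Batch Shipyard submit the containerized task to an Azure Batch compute pool without the user managing the underlying infrastructure.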