Cray Supercomputing Comes to Microsoft Azure

Today Cray announced a partnership with Microsoft to offer dedicated Cray supercomputing systems in Microsoft Azure. Under the agreement, the two companies will jointly engage with customers to deploy Cray systems in Azure datacenters, enabling customers to run AI, advanced analytics, and modeling and simulation workloads at unprecedented scale, seamlessly connected to the Azure cloud.

Intel Joins Open Neural Network Exchange

Jason Knight from Intel writes that the company has joined Microsoft, Facebook, and others to participate in the Open Neural Network Exchange (ONNX) project. “By joining the project, we plan to further expand the choices developers have on top of frameworks powered by the Intel Nervana Graph library and deployment through our Deep Learning Deployment Toolkit. Developers should have the freedom to choose the best software and hardware to build their artificial intelligence model and not be locked into one solution based on a framework. Deep learning is better when developers can move models from framework to framework and use the best hardware platform for the job.”
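
To make the framework-to-framework portability concrete, here is a minimal sketch of exporting a trained model to the ONNX format with PyTorch, one of the frameworks participating in the exchange. The model choice and output file name are illustrative, not anything prescribed by the announcement:

import torch
import torchvision

# Any torch.nn.Module can be exported the same way; ResNet-18 is
# used here purely as an example model.
model = torchvision.models.resnet18(pretrained=True)
model.eval()

# The exporter traces the model with a dummy input of the expected
# shape and writes a framework-neutral .onnx graph to disk.
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "resnet18.onnx",
                  input_names=["input"], output_names=["output"])

The resulting .onnx file can then be loaded by any runtime or framework that understands the exchange format, independent of the framework that produced it.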

Intel FPGAs Accelerate Microsoft’s Project Brainwave

Today, Intel announced that its AI technology is being used by Microsoft to power its new accelerated deep learning platform, called Project Brainwave. “Project Brainwave achieves a major leap forward in both performance and flexibility for cloud-based serving of deep learning models. We designed the system for real-time AI, which means the system processes requests as fast as it receives them, with ultra-low latency. Real-time AI is becoming increasingly important as cloud infrastructures process live data streams, whether they be search queries, videos, sensor streams, or interactions with users.”

Video: Deep Learning on Azure with GPUs

In this video, you’ll learn how to start submitting deep neural network (DNN) training jobs in Azure by using Azure Batch to schedule jobs on your GPU compute clusters. “Previously, few people had access to the computing power for these scenarios. With Azure Batch, that power is available to you when you need it.”
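
As a rough sketch of the workflow the video walks through, the following uses the Azure Batch Python SDK to attach a job to an existing GPU pool and queue a training task on it. The account name, key, endpoint, pool and job IDs, and training command are all placeholders, and the pool is assumed to already exist with GPU-equipped (e.g., NC-series) VMs and the training script staged on its nodes:

import azure.batch as batch
import azure.batch.models as batchmodels
from azure.batch import batch_auth

# Placeholder credentials and endpoint for a hypothetical Batch account.
credentials = batch_auth.SharedKeyCredentials("mybatchaccount", "<account-key>")
client = batch.BatchServiceClient(
    credentials, "https://mybatchaccount.<region>.batch.azure.com")

# Attach a job to the pre-provisioned GPU pool.
job = batchmodels.JobAddParameter(
    id="dnn-training",
    pool_info=batchmodels.PoolInformation(pool_id="gpu-pool"))
client.job.add(job)

# Each task runs one training command on a node in the pool.
task = batchmodels.TaskAddParameter(
    id="train-task-1",
    command_line="python train_dnn.py --epochs 50")
client.task.add(job_id="dnn-training", task=task)

Batch then schedules the task onto a node in the pool, and additional tasks can be queued against the same job as further training runs are needed.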

Microsoft Acquires Cycle Computing

Today Microsoft announced it has acquired Cycle Computing, a software company focused on making cloud computing resources more readily available for HPC workloads. “Now supporting InfiniBand and accelerated GPU computing, Microsoft Azure looks to be a perfect home for Cycle Computing, which started its journey with software for aggregating compute resources at AWS. The company later added similar capabilities for Azure and Google Cloud.”

How Intel FPGAs Power Azure Deep Learning

Microsoft Azure CTO Mark Russinovich recently disclosed major advances in Microsoft’s hyperscale deployment of Intel field programmable gate arrays (FPGAs). These advances have resulted in the industry’s fastest public cloud network, along with new technology for accelerating Deep Neural Networks (DNNs), which replicate “thinking” in a manner conceptually similar to that of the human brain.

Rock Stars of HPC: Karan Batta

From software developer at a small start-up in New Zealand to Senior Program Manager at one of the largest multinational technology companies in the US, Karan Batta has led a career touched by HPC – even if he didn’t always realize it at the time. As the driving force behind the GPU infrastructure vision, roadmap, and deployment in Microsoft Azure, Karan Batta is a Rock Star of HPC.

Rambus Collaborates with Microsoft on Cryogenic Memory

“With the increasing challenges in conventional approaches to improving memory capacity and power efficiency, our early research indicates that a significant change in the operating temperature of DRAM using cryogenic techniques may become essential in future memory systems,” said Dr. Gary Bronner, vice president of Rambus Labs. “Our strategic partnership with Microsoft has enabled us to identify new architectural models as we strive to develop systems utilizing cryogenic memory. The expansion of this collaboration will lead to new applications in high-performance supercomputers and quantum computers.”

Overview of the HGX-1 AI Accelerator Chassis

“The Project Olympus hyperscale GPU accelerator chassis for AI, also referred to as HGX-1, is designed to support eight of the latest “Pascal” generation NVIDIA GPUs and NVIDIA’s NVLink high speed multi-GPU interconnect technology, and provides high bandwidth interconnectivity for up to 32 GPUs by connecting four HGX-1 together. The HGX-1 AI accelerator provides extreme performance scalability to meet the demanding requirements of fast growing machine learning workloads, and its unique design allows it to be easily adopted into existing datacenters around the world.”
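
To see that interconnectivity from software, a short sketch like the one below (using PyTorch’s CUDA utilities, and assuming it runs on a multi-GPU node such as an HGX-1) enumerates the visible GPUs and reports which pairs can address each other’s memory directly, the peer-to-peer capability that NVLink provides between the GPUs in the chassis:

import torch

# Count the GPUs visible to this process and probe pairwise
# peer-to-peer (P2P) access between them.
n = torch.cuda.device_count()
print(f"{n} GPUs visible")
for i in range(n):
    for j in range(n):
        if i != j and torch.cuda.can_device_access_peer(i, j):
            print(f"GPU {i} can directly access GPU {j} (P2P)")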

Radio Free HPC Looks at Azure’s Move to GPUs and OCP for Deep Learning

In this podcast, the Radio Free HPC team looks at a set of IT and science stories. Microsoft Azure is making a big move to GPUs and the Open Compute Project (OCP) platform as part of its Project Olympus. Meanwhile, Huawei is gaining share in the server market, and IBM is bringing storage to the atomic level.