Radio Free HPC Looks at Azure’s Move to GPUs and OCP for Deep Learning

In this podcast, the Radio Free HPC team looks at a set of IT and science stories. Microsoft Azure is making a big move to GPUs and the Open Compute Project (OCP) platform as part of its Project Olympus. Meanwhile, Huawei is gaining share in the server market and IBM is bringing storage down to the atomic level.

Interview: XTREME DESIGN Automates HPC Cloud Configurations

Tokyo-based startup XTREME DESIGN recently announced that it has raised $700K in its pre-series A funding round. Launched in early 2015, the startup develops XTREME DNA, software that automates the process of configuring, deploying, and monitoring virtual supercomputers on public clouds. To learn more, we caught up with the company’s founder, Naoki Shibata.

Nvidia Brings AI to the Cloud with the HGX-1 Hyperscale GPU Accelerator

Today, Microsoft, NVIDIA, and Ingrasys announced a new industry-standard design to accelerate artificial intelligence in the next-generation cloud. “Powered by eight NVIDIA Tesla P100 GPUs in each chassis, HGX-1 features an innovative switching design based on NVIDIA NVLink interconnect technology and the PCIe standard, enabling a CPU to dynamically connect to any number of GPUs. This allows cloud service providers that standardize on the HGX-1 infrastructure to offer customers a range of CPU and GPU machine instance configurations.”
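
The HGX-1 announcement is about chassis-level design rather than software, but for context, a short CUDA runtime sketch like the one below (a generic illustration, not HGX-1-specific code) shows how an application discovers how many GPUs an instance exposes and which pairs support direct peer-to-peer transfers over NVLink or PCIe:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Enumerate the GPUs in the instance and report which pairs can talk
// directly (peer-to-peer over NVLink or PCIe) rather than through the host.
int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("Visible GPUs: %d\n", count);

    for (int src = 0; src < count; ++src) {
        for (int dst = 0; dst < count; ++dst) {
            if (src == dst) continue;
            int canAccess = 0;
            cudaDeviceCanAccessPeer(&canAccess, src, dst);
            printf("GPU %d -> GPU %d: peer access %s\n",
                   src, dst, canAccess ? "available" : "not available");
        }
    }
    return 0;
}
```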

Supercomputing the Hyperloop on Azure

Today Cycle Computing announced that the HyperXite team is using CycleCloud software to manage Hyperloop simulations with ANSYS Fluent on the Azure Cloud. “Our mission is to optimize and economize the transportation of the future, and Cycle Computing has made that endeavor so much easier,” said Nima Mohseni, Simulation Lead, HyperXite. “We absolutely require a solution that can compress and condense our timeline while providing the powerful computational results we require. Thank you to Cycle Computing for making a significant difference in our ability to complete our work.”

Microsoft Releases Batch Shipyard on GitHub for Docker on Azure

“Available on GitHub as Open Source, the Batch Shipyard toolkit enables easy deployment of batch-style Dockerized workloads to Azure Batch compute pools. Azure Batch enables you to run parallel jobs in the cloud without having to manage the infrastructure. It’s ideal for parametric sweeps, Deep Learning training with NVIDIA GPUs, and simulations using MPI and InfiniBand.”

Video: Rescale Night Showcases HPC in the Cloud

“Billed as an exposition into ‘The Future of Cloud HPC Simulation,’ the event brought together experts in high-performance computing and simulation, cloud computing technologists, startup founders, and VC investors across the technology landscape. In addition to product demonstrations with Rescale engineers, including the popular Deep Learning workshop led by Mark Whitney, Rescale Director of Algorithms, booths featuring ANSYS, Microsoft Azure, Data Collective, and Microsoft Ventures offered interactive sessions for attendees.”

UberCloud Obtains $1.7 Million in Pre-A Funding Round

“UberCloud has created an entire cloud computing ecosystem for complex technical simulations, leveraging cloud infrastructure providers, developing and utilizing middleware container technology, and bringing on board established and proven application software providers, all for the benefit of a growing community of engineers and scientists that need to solve critical technical problems on demand,” said Roland Manger, co-founder and Partner at Earlybird. “While technical computing has been slow to adopt the benefits of the Cloud, we are convinced that UberCloud can be a catalyst for change.”

CUDA Made Easy: An Introduction

“CUDA C++ is just one of the ways you can create massively parallel applications with CUDA. It lets you use the powerful C++ programming language to develop high performance algorithms accelerated by thousands of parallel threads running on GPUs. Many developers have accelerated their computation- and bandwidth-hungry applications this way, including the libraries and frameworks that underpin the ongoing revolution in artificial intelligence known as Deep Learning.”
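
As a minimal illustration of the CUDA C++ model described above (a standalone sketch, not code from the article itself), the following program launches a SAXPY kernel across a million parallel threads:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each thread computes one element of y = a*x + y.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    // Unified memory keeps the sketch short; cudaMalloc + cudaMemcpy also works.
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %.1f (expected 4.0)\n", y[0]);
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```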

New Site Lists All Comparable Features from AWS, Azure, and Google Cloud

Are you shopping for public cloud services? A new Public Cloud Services Comparison site gives a service- and feature-level mapping between the three major public clouds: Amazon Web Services, Microsoft Azure, and Google Cloud. Published by Ilyas F, a Cloud Solution Architect at Xebia Group, the Public Cloud Services Comparison is a handy reference to help anyone quickly find the equivalent features and services across clouds.

Video: Azure High Performance Computing

“Run your Windows and Linux HPC applications using high performance A8 and A9 compute instances on Azure, and take advantage of a backend network with MPI latency under 3 microseconds and non-blocking 32 Gbps throughput. This backend network includes remote direct memory access (RDMA) technology on Windows and Linux that enables parallel applications to scale to thousands of cores. Azure provides you with high memory and HPC-class CPUs to help you get results fast. Scale up and down based upon what you need and pay only for what you use to reduce costs.”
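
For readers who want to gauge that latency figure on such instances themselves, a standard MPI ping-pong sketch (plain MPI host code, not an Azure-specific API) estimates one-way point-to-point latency between two ranks:

```cuda
#include <cstdio>
#include <mpi.h>

// Ping-pong a single byte between ranks 0 and 1 to estimate latency.
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int iters = 10000;
    char byte = 0;
    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();

    for (int i = 0; i < iters; ++i) {
        if (rank == 0) {
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    double elapsed = MPI_Wtime() - t0;
    if (rank == 0)
        printf("Estimated one-way latency: %.2f microseconds\n",
               elapsed / iters / 2.0 * 1e6);

    MPI_Finalize();
    return 0;
}
```

Run with two ranks placed on different nodes (for example, `mpirun -np 2 -ppn 1 ./pingpong` on an RDMA-enabled pool) to measure the backend network rather than shared-memory transport.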