Google Unveils First Public Cloud VMs Using Nvidia Ampere A100 Tensor Core GPUs

Google today introduced the Accelerator-Optimized VM (A2) instance family on Google Compute Engine, based on the NVIDIA Ampere A100 Tensor Core GPU launched in mid-May. Available in alpha with up to 16 GPUs per VM, A2 instances are the first A100-based offering in a public cloud, according to Google. At the A100’s launch, Nvidia said the GPU, built on the company’s new Ampere architecture, delivers “the greatest generational leap ever,” improving training and inference performance by up to 20x over its predecessors.

Hardware & Software Platforms for HPC, AI and ML

Gunter Roeth from NVIDIA gave this talk at the UK HPC Conference. “Today, NVIDIA’s Tensor Core GPU sits at the core of most AI, ML and HPC applications, and NVIDIA software surrounds every level of such a modern application: from CUDA and libraries like cuDNN and NCCL, embedded in every deep learning framework and optimized and delivered via the NVIDIA GPU Cloud, to reference architectures designed to streamline the deployment of large-scale infrastructures.”

GPUs for Oil and Gas Firms: Deriving Insights from Petabytes of Data

Adoption of GPU-accelerated computing can offer oil and gas firms significant ROI today and pave the way to gain additional advantage from future technical developments. To stay competitive, these companies need to be able to derive insights from petabytes of sensor, geolocation, weather, drilling, and seismic data in milliseconds. A new white paper from Penguin Computing explores how GPUs are spurring innovation and changing how hydrocarbon businesses address data processing needs.

GPUs Address Growing Data Needs for Finance & Insurance Sectors

A new whitepaper from Penguin Computing contends that “a new era of supercomputing” has arrived, driven primarily by the emergence of graphics processing units (GPUs). The tools once specific to gaming are now used by investment and financial services firms to gain greater insights and generate actionable data. Learn how GPUs are spurring innovation and changing how today’s finance companies address their data processing needs.

How Financial Services Can Fuel Innovation with GPU Computing

The financial services and insurance sector is one of the most data-intensive industries in modern business. Until recently, however, that abundance of information hindered the extraction of business value from data. Improvements in technology can now overcome data-related challenges that had long been considered intractable. Download the new white paper from Penguin Computing, which highlights how financial services and insurance firms can benefit from GPU computing to spur innovation and future technological developments.

RAPIDS: Data Science on GPUs

Christoph Angerer from NVIDIA gave this talk at FOSDEM’19. “The next big step in data science will combine the ease of use of common Python APIs with the power and scalability of GPU compute. The RAPIDS project is the first step in giving data scientists the ability to use familiar APIs and abstractions while taking advantage of the same technology that enables dramatic increases in speed in deep learning. This session highlights the progress that has been made on RAPIDS, discusses how you can get up and running doing data science on the GPU, and provides some use cases involving graph analytics as motivation.”

The Simulation of the Behavior of the Human Brain using CUDA

Pedro Valero-Lara from BSC gave this talk at the GPU Technology Conference. “Attendees can learn how the behavior of the human brain is simulated using current computers, and the different challenges the implementation has to deal with. We cover the main steps of the simulation and the methodologies behind it. In particular, we highlight and focus on the transformations and optimizations carried out to achieve good performance on NVIDIA GPUs.”

DNN Implementation, Optimization, and Challenges

This is the third in a five-part series that explores the potential of unified deep learning with CPU, GPU and FPGA technologies. This post explores DNN implementation, optimization, and challenges.

NVIDIA Makes GPU Computing Easier in the Cloud

Setting up an environment for High Performance Computing (HPC), especially one using GPUs, can be daunting: there can be multiple dependencies, a number of supporting libraries required, and complex installation instructions. NVIDIA has made this easier with the announcement and release of HPC Application Containers on the NVIDIA GPU Cloud.

Michael Wolfe Presents: Why Iteration Space Tiling?

In this invited talk from SC17, Michael Wolfe from NVIDIA presents: Why Iteration Space Tiling? The talk is based on his SC’89 paper of the same name, which won the SC17 Test of Time Award. “Tiling is well known and has been included in many compilers and code transformation systems. The talk will explore the basic contribution of the SC’89 paper to the current state of iteration space tiling.”
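Iteration space tiling (also called blocking) splits the iteration space of a loop nest into blocks, so that the data touched by one block is reused while it is still hot in cache. As a minimal illustration of the idea (a sketch, not code from Wolfe's paper or talk; the function name and default tile size are arbitrary), here is a tiled matrix multiplication:

```python
def matmul_tiled(A, B, tile=4):
    """Multiply two square matrices (lists of lists) with iteration space tiling.

    The outer ii/kk/jj loops walk the iteration space in tile-sized blocks;
    the inner i/k/j loops cover one block, reusing rows of A, B and C
    while they are likely still cached.
    """
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for ii in range(0, n, tile):
        for kk in range(0, n, tile):
            for jj in range(0, n, tile):
                for i in range(ii, min(ii + tile, n)):
                    for k in range(kk, min(kk + tile, n)):
                        a_ik = A[i][k]  # scalar reused across the j loop
                        for j in range(jj, min(jj + tile, n)):
                            C[i][j] += a_ik * B[k][j]
    return C
```

Because integer addition is associative and commutative, reordering the accumulation this way produces exactly the same result as the untiled triple loop; only the traversal order, and hence the memory access pattern, changes.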