
GPUs for Oil and Gas Firms: Deriving Insights from Petabytes of Data

Adoption of GPU-accelerated computing can offer oil and gas firms significant ROI today and pave the way to gain additional advantage from future technical developments. To stay competitive, these companies need to be able to derive insights from petabytes of sensor, geolocation, weather, drilling, and seismic data in milliseconds. A new white paper from Penguin Computing explores how GPUs are spurring innovation and changing how hydrocarbon businesses address data processing needs.

GPUs Address Growing Data Needs for Finance & Insurance Sectors

A new white paper from Penguin Computing contends that “a new era of supercomputing” has arrived, driven primarily by the emergence of graphics processing units (GPUs). Tools once specific to gaming are now being used by investment and financial services firms to gain greater insight and generate actionable data. Learn how GPUs are spurring innovation and changing how today’s finance companies address their data processing needs.

How Financial Services Can Fuel Innovation with GPU Computing

The financial services and insurance sector is one of the most data-intensive industries in modern business. Unfortunately, that abundance of information has hindered the extraction of business value from data. However, improvements in technology can now overcome data-related challenges that had, until recently, been considered insurmountable. Download the new white paper from Penguin Computing that highlights how financial services and insurance firms can benefit from GPU computing to spur innovation and prepare for future technological developments.

Rapids: Data Science on GPUs

Christoph Angerer from NVIDIA gave this talk at FOSDEM’19. “The next big step in data science will combine the ease of use of common Python APIs, but with the power and scalability of GPU compute. The RAPIDS project is the first step in giving data scientists the ability to use familiar APIs and abstractions while taking advantage of the same technology that enables dramatic increases in speed in deep learning. This session highlights the progress that has been made on RAPIDS, discusses how you can get up and running doing data science on the GPU, and provides some use cases involving graph analytics as motivation.”
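The “familiar APIs” point is concrete: RAPIDS cuDF mirrors the pandas DataFrame API, so existing pandas-style code can move to the GPU largely unchanged. A minimal sketch, written here with pandas so it runs anywhere (the column names and data are illustrative, not from the talk); on a RAPIDS system the same operations are available through cuDF:

```python
# Illustrative only: cuDF exposes a pandas-like API, so this split-apply-
# combine pattern is the kind of code RAPIDS accelerates on the GPU.
import pandas as pd  # on a GPU system, cuDF offers the same interface

df = pd.DataFrame({
    "sensor": ["a", "a", "b", "b"],
    "reading": [1.0, 3.0, 2.0, 4.0],
})

# Group readings by sensor and average them -- the same call exists in cuDF.
means = df.groupby("sensor")["reading"].mean()
print(means["a"], means["b"])  # 2.0 3.0
```

The design point RAPIDS makes is that data scientists keep this familiar idiom while the heavy lifting (parsing, joins, groupbys) runs on GPU memory and cores.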

The Simulation of the Behavior of the Human Brain using CUDA

Pedro Valero-Lara from BSC gave this talk at the GPU Technology Conference. “Attendees can learn how the behavior of the human brain is simulated using current computers, and the different challenges the implementation has to deal with. We cover the main steps of the simulation and the methodologies behind it. In particular, we highlight and focus on the transformations and optimizations carried out to achieve good performance on NVIDIA GPUs.”

DNN Implementation, Optimization, and Challenges

This is the third in a five-part series that explores the potential of unified deep learning with CPU, GPU, and FPGA technologies. This post explores DNN implementation, optimization, and challenges.
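To ground what “DNN implementation” means at its core, here is a minimal sketch of a forward pass through one hidden layer with a ReLU activation. It is pure Python, not code from the series, and the weights are illustrative:

```python
# Minimal DNN forward pass: one dense hidden layer + ReLU, one dense output.
# Weights and inputs are made-up values for illustration.

def relu(v):
    # Element-wise rectified linear unit: max(0, x).
    return [max(0.0, x) for x in v]

def dense(x, W, b):
    # y[i] = sum_j W[i][j] * x[j] + b[i]
    return [sum(w * xj for w, xj in zip(row, x)) + bi
            for row, bi in zip(W, b)]

x  = [1.0, 2.0]                    # input vector
W1 = [[0.5, -1.0], [1.0, 1.0]]     # hidden-layer weights (2x2)
b1 = [0.0, -1.0]                   # hidden-layer biases
W2 = [[1.0, 1.0]]                  # output-layer weights (1x2)
b2 = [0.5]                         # output-layer bias

h = relu(dense(x, W1, b1))         # hidden activations: [0.0, 2.0]
y = dense(h, W2, b2)
print(y)                           # [2.5]
```

The optimization challenges the post discusses (on CPU, GPU, or FPGA) come down to executing exactly these multiply-accumulate loops efficiently at scale.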

Michael Wolfe Presents: Why Iteration Space Tiling?

In this Invited Talk from SC17, Michael Wolfe from NVIDIA presents: Why Iteration Space Tiling? The talk is based on his noted paper, which won the SC17 Test of Time Award. “Tiling is well-known and has been included in many compilers and code transformation systems. The talk will explore the basic contribution of the SC1989 paper to the current state of iteration space tiling.”
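For readers unfamiliar with the transformation, iteration-space tiling reorders a loop nest so each small block of iterations finishes before the next begins, improving cache reuse without changing the result. A minimal sketch (illustrative, not from Wolfe's paper):

```python
# Tiling an N x N iteration space into T x T blocks.
N, T = 8, 4

# Untiled order: plain row-major traversal.
untiled = [(i, j) for i in range(N) for j in range(N)]

# Tiled order: visit one T x T block at a time.
tiled = [(i, j)
         for ii in range(0, N, T)              # tile row origin
         for jj in range(0, N, T)              # tile column origin
         for i in range(ii, min(ii + T, N))    # iterations within the tile
         for j in range(jj, min(jj + T, N))]

# Tiling only permutes the iterations; every point is still visited once.
assert sorted(tiled) == untiled
```

In a real kernel (e.g. matrix multiply), the tile size T is chosen so a tile's working set fits in cache, which is the performance argument the paper formalized.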

Rock Stars of HPC: DK Panda

As our newest Rock Star of HPC, DK Panda sat down with us to discuss his passion for teaching High Performance Computing. “During the last several years, HPC systems have been going through rapid changes to incorporate accelerators. The main software challenges for such systems have been to provide efficient support for programming models with high performance and high productivity. For NVIDIA-GPU based systems, seven years back, my team introduced a novel ‘CUDA-aware MPI’ concept. This paradigm allows complete freedom to application developers for not using CUDA calls to perform data movement.”

Podcast: Geoffrey Hinton on the Rise of Deep Learning

“In Deep Learning what we do is try to minimize the amount of hand engineering and get the neural nets to learn, more or less, everything. Instead of programming computers to do particular tasks, you program the computer to know how to learn. And then you can give it any old task, and the more data and the more computation you provide, the better it will get.”

HPC News with Snark for the Week of Jan. 12, 2015

The news has started to pile up this post-holiday season, so here is the HPC News with Snark for Friday, January 16, 2015. We’ve got podcasts on everything from self-driving cars to data breaches resulting from North Korean satire films. There are even some big financial surprises from Intel.