
The Simulation of the Behavior of the Human Brain using CUDA

Pedro Valero-Lara from BSC gave this talk at the GPU Technology Conference. "Attendees can learn how the behavior of the human brain is simulated using current computers, and the different challenges the implementation has to deal with. We cover the main steps of the simulation and the methodologies behind it. In particular, we highlight and focus on those transformations and optimizations carried out to achieve good performance on NVIDIA GPUs."

DNN Implementation, Optimization, and Challenges

This is the third in a five-part series on the potential of unified deep learning with CPU, GPU, and FPGA technologies. This post examines DNN implementation, optimization, and challenges.

Michael Wolfe Presents: Why Iteration Space Tiling?

In this Invited Talk from SC17, Michael Wolfe from NVIDIA presents: Why Iteration Space Tiling? The talk is based on his noted paper, which won the SC17 Test of Time Award. “Tiling is well-known and has been included in many compilers and code transformation systems. The talk will explore the basic contribution of the SC1989 paper to the current state of iteration space tiling.”

Rock Stars of HPC: DK Panda

As our newest Rock Star of HPC, DK Panda sat down with us to discuss his passion for teaching High Performance Computing. "During the last several years, HPC systems have been going through rapid changes to incorporate accelerators. The main software challenges for such systems have been to provide efficient support for programming models with high performance and high productivity. For NVIDIA-GPU based systems, seven years back, my team introduced a novel 'CUDA-aware MPI' concept. This paradigm gives application developers complete freedom to perform data movement without explicit CUDA calls."
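In practice, the CUDA-aware MPI concept means a GPU device pointer can be handed straight to MPI, with the library handling the GPU-to-network path internally. A sketch of the pattern (assumes a CUDA-aware MPI build such as MVAPICH2-GDR or a CUDA-enabled Open MPI; not runnable without MPI, CUDA, and two ranks):

```cuda
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double *d_buf;  /* buffer in GPU device memory */
    cudaMalloc((void **)&d_buf, 1024 * sizeof(double));

    /* The device pointer goes directly to MPI_Send/MPI_Recv.
     * With a non-CUDA-aware MPI, each rank would first have to
     * cudaMemcpy through a host staging buffer. */
    if (rank == 0)
        MPI_Send(d_buf, 1024, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(d_buf, 1024, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}
```

The "complete freedom" in the quote refers to exactly this: the application never issues the data-movement CUDA calls itself.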

Podcast: Geoffrey Hinton on the Rise of Deep Learning

"In Deep Learning what we do is try to minimize the amount of hand engineering and get the neural nets to learn, more or less, everything. Instead of programming computers to do particular tasks, you program the computer to know how to learn. And then you can give it any old task, and the more data and the more computation you provide, the better it will get."

HPC News with Snark for the Week of Jan. 12, 2015

The news has started to pile up this post-holiday season, so here is the HPC News with Snark for Friday, January 16, 2015. We’ve got podcasts on everything from self-driving cars to data breaches resulting from North Korean satire films. There are even some big financial surprises from Intel.

Interview: The Evolving OpenACC for HPC Accelerators

Over at the Cray Blog, David Wallace looks at OpenACC, its programming benefits, and how it is evolving as an industry coalition. OpenACC allows HPC programmers to worry more about the problem they are trying to solve and less about the language and hardware they are using to solve the problem. By enabling parallelism via […]