Video: The Convergence of HPC and Deep Learning


In this video from the 2018 Swiss HPC Conference, Axel Koehler from NVIDIA presents: The Convergence of HPC and Deep Learning.

“The intersection of AI and HPC is extending the reach of science and accelerating the pace of scientific innovation like never before. The technology originally developed for HPC has enabled deep learning, and deep learning is enabling many new uses in science. Deep learning is also helping deliver real-time results with models that used to take days or months to simulate. The presentation will give an overview of the latest hardware and software developments for HPC and Deep Learning from NVIDIA and will show some examples of how Deep Learning can be combined with traditional large-scale simulations.”

Axel Koehler is a Principal Solution Architect at NVIDIA. He designs solutions for HPC and ML/DL environments built on the NVIDIA GPU software ecosystem and the Tesla server products, and supports customers, OEMs/partners, and ISVs in adopting GPU technology. Prior to joining NVIDIA in January 2011, Axel worked for 14 years at Sun Microsystems as lead architect in the global HPC team. He holds a diploma degree in computer science from the Technical University of Dresden.

See more talks at the Swiss HPC Conference Video Gallery

Check out our insideHPC Events Calendar

Comments

  1. Valentin Senicourt says

    Hello Rich,

    Thank you for sharing. I have a couple of quick questions for you:

    – Is there an easy way to download the slides themselves?
    While I can see them on SlideShare, I can’t seem to download the presentation as a single PDF. I poked around on Google to try to grab them from the conference website (http://www.hpcadvisorycouncil.com/events/2018/swiss-workshop/agenda.php), but they don’t seem to be available at the moment.

    – What do we know about Intel x86 NVLink support? While there’s already much to gain from using it for GPU-GPU communication, many applications with the more common CPU-GPU interactions would also benefit from it.
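
    For context on the GPU-GPU path mentioned above: CUDA exposes direct device-to-device transfers through its peer-to-peer API, and those transfers are routed over NVLink when the GPUs are connected by it. Below is a minimal sketch, assuming a system with two visible GPUs; the device IDs and buffer size are illustrative.

    ```c
    // Minimal peer-to-peer sketch (assumes two GPUs, device IDs 0 and 1).
    #include <cuda_runtime.h>
    #include <stdio.h>

    int main(void) {
        int can01 = 0, can10 = 0;

        // Query whether each device can directly access the other's memory
        // (true when they share a P2P-capable link such as NVLink or PCIe).
        cudaDeviceCanAccessPeer(&can01, 0, 1);
        cudaDeviceCanAccessPeer(&can10, 1, 0);
        printf("peer access 0->1: %d, 1->0: %d\n", can01, can10);

        if (can01 && can10) {
            const size_t bytes = 1 << 20;  // 1 MiB, illustrative
            float *buf0 = NULL, *buf1 = NULL;

            cudaSetDevice(0);
            cudaDeviceEnablePeerAccess(1, 0);  // flags argument must be 0
            cudaMalloc(&buf0, bytes);

            cudaSetDevice(1);
            cudaDeviceEnablePeerAccess(0, 0);
            cudaMalloc(&buf1, bytes);

            // Direct device-to-device copy; with peer access enabled this
            // avoids staging through host memory and travels over NVLink
            // where the GPUs are so connected.
            cudaMemcpyPeer(buf1, 1, buf0, 0, bytes);
            cudaDeviceSynchronize();

            cudaFree(buf1);
            cudaSetDevice(0);
            cudaFree(buf0);
        }
        return 0;
    }
    ```

    The CPU-GPU side is different: on x86 systems of this era the host link is PCIe, so transfers such as cudaMemcpy between host and device cannot take the NVLink path, which is the gap the question above is getting at.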