

Replay: GTC Keynote on Deep Learning at Google


In this video, Jeff Dean, Senior Fellow at Google, discusses deep learning at Google. Over the past few years, Google has built large-scale computer systems for training neural networks and applied them to a wide variety of problems that have traditionally been very difficult for computers. These systems and algorithms have produced significant improvements in the state of the art and have been used by dozens of different groups at Google to train state-of-the-art models for speech recognition, image recognition, various visual detection tasks, language modeling, language translation, and many other tasks. In this talk, Dean highlights some of the distributed systems and algorithms used to train large models quickly, and then discusses how this work has been applied to a variety of problems in Google’s products, usually in close collaboration with other teams. The talk also describes joint work with many people at Google.

Update: Nvidia has posted a nice recap of the Google talk.

Sign up for our insideHPC Newsletter.
