Podcast: Geoffrey Hinton on the Rise of Deep Learning


In this podcast, Dr. Geoffrey Hinton from Google describes his ideas on Deep Learning. Known as the Godfather of neural networks, Hinton's work centers around using Deep Learning to get computers to work the way our brains do.

In Deep Learning what we do is try to minimize the amount of hand engineering and get the neural nets to learn, more or less, everything. Instead of programming computers to do particular tasks, you program the computer to know how to learn. And then you can give it any old task, and the more data and the more computation you provide, the better it will get.

According to Wikipedia, an accessible introduction to Geoffrey Hinton's research can be found in his articles in Scientific American in September 1992 and October 1993. He investigates ways of using neural networks for learning, memory, perception, and symbol processing, and has authored over 200 publications in these areas. He was one of the researchers who introduced the back-propagation algorithm for training multi-layer neural networks, which has been widely used for practical applications. He co-invented Boltzmann machines with Terry Sejnowski. His other contributions to neural network research include distributed representations, time delay neural networks, mixtures of experts, Helmholtz machines, and Products of Experts. His current main interest is in unsupervised learning procedures for neural networks with rich sensory input.
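To give a flavor of the back-propagation idea mentioned above, here is a minimal, illustrative sketch (not from the podcast) of a tiny two-layer network trained on XOR with NumPy. All names, hyperparameters, and the toy dataset are assumptions chosen for brevity.

```python
# Illustrative sketch only: a tiny 2-2-1 network trained with back-propagation.
# Learns XOR from four examples; hyperparameters are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, which a single-layer network cannot represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights and biases for a 2-2-1 sigmoid network.
W1 = rng.normal(size=(2, 2))
b1 = np.zeros((1, 2))
W2 = rng.normal(size=(2, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)      # hidden activations
    out = sigmoid(h @ W2 + b2)    # network output

    # Backward pass: propagate the output error back through
    # the hidden layer via the chain rule.
    err_out = (out - y) * out * (1 - out)
    err_hid = (err_out @ W2.T) * h * (1 - h)

    # Gradient-descent weight updates.
    W2 -= lr * h.T @ err_out
    b2 -= lr * err_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ err_hid
    b1 -= lr * err_hid.sum(axis=0, keepdims=True)

print(out.round(3))  # typically approaches [0, 1, 1, 0] as training converges
```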

As reported here, the recent GPU Technology Conference centered around Deep Learning. To get an idea of how Nvidia plans to grow this market, check out our slidecast with Stephen Jones.

Download the MP3

Sign up for our insideHPC Newsletter.

Comments

  1. Why aren’t your podcasts able to work without flash player and javascript?

  2. Greetings from HN. Thank you for providing a link.