Nvidia Showcases Deep Learning Technology

Jen-Hsun Huang and Elon Musk

Nvidia’s GPU Technology Conference (GTC), held in San Jose, California, this week, showcased a combination of hardware and software aimed at driving the development of a branch of machine learning called ‘deep learning’.

The GTC is an annual festival of Nvidia products, so the first announcement was, inevitably, a new flagship GPU, the Nvidia GeForce GTX Titan X. This was followed by the Digits Deep Learning GPU Training System and the Digits DevBox, which combines four of the new GPUs and the software into a single appliance.

Deep learning is a subset of machine learning which focuses on ‘teaching’ a system by using algorithms to learn multiple levels of representation in order to model complex relationships among data.

Nvidia hopes to encourage the creation of Deep Neural Networks (DNNs) with its new open-source software, Digits, which has been created to help data scientists and researchers develop and visualize DNNs using a browser-based interface.

There are several applications of deep learning, including image and video classification, computer vision, speech recognition, natural language processing, and audio recognition.

One example of using image recognition can be found on the Nvidia blog: “Deep learning lets a machine use this process to build a hierarchical representation. So, the first layer might look for simple edges. The next might look for collections of edges that form simple shapes like rectangles, or circles. The third might identify features like eyes and noses. After five or six layers, the neural network can put these features together. The result: a machine that can recognize faces.”
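
The hierarchy described in the quote maps naturally onto a stack of convolutional layers. The sketch below is purely illustrative and assumes PyTorch; it is not Nvidia’s code or the Digits workflow, but it shows how each layer operates on the output of the previous one, so later layers can respond to progressively more complex features.

```python
# Minimal sketch of a layered face recognizer (illustrative only, not Nvidia's code).
import torch
import torch.nn as nn

face_recognizer_sketch = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # layer 1: simple edges
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # layer 2: corners and simple shapes
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1),  # layer 3: parts such as eyes and noses
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(64 * 28 * 28, 2),                   # final layer: face / not face
)

# Forward pass on a dummy 224x224 RGB image
scores = face_recognizer_sketch(torch.randn(1, 3, 224, 224))
print(scores.shape)  # torch.Size([1, 2])
```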

Nvidia GPUs are well suited to these applications, particularly image processing and recognition, because their highly parallel architecture provides thousands of cores as well as high data throughput.
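
The reason image workloads parallelize so well is that every output pixel of a convolution can be computed independently, so the same small operation runs across the whole image at once. The following sketch (again an illustration in PyTorch, not code from the conference) makes the scale of that parallelism concrete for a single full-HD frame.

```python
# Sketch: one convolution over a full-HD frame produces ~33 million independent outputs,
# each of which a GPU core can compute in parallel. Illustration only.
import torch
import torch.nn.functional as F

image = torch.randn(1, 3, 1080, 1920)    # one RGB frame
edge_filters = torch.randn(16, 3, 3, 3)  # 16 small 3x3 filters

device = "cuda" if torch.cuda.is_available() else "cpu"
features = F.conv2d(image.to(device), edge_filters.to(device), padding=1)
print(features.shape)  # torch.Size([1, 16, 1080, 1920]) -- ~33M independent output values
```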

The new GPU, the GTX Titan X, is likely to be one of the last cards built before the launch of the new Pascal architecture. It combines 3,072 processing cores for 7 teraflops of peak single-precision performance with 12GB of on-board memory; it also provides 336.5 GBps of memory bandwidth.
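
For readers who want to see where the headline figure comes from, the arithmetic below is a rough sketch; the boost clock of about 1.1 GHz is an assumption on my part rather than a figure from the announcement.

```python
# Back-of-the-envelope check on the quoted peak single-precision figure.
cores = 3072
clock_hz = 1.1e9           # assumed boost clock, not stated in the article
flops_per_core_cycle = 2   # one fused multiply-add = 2 floating-point operations
peak_tflops = cores * clock_hz * flops_per_core_cycle / 1e12
print(f"{peak_tflops:.1f} TFLOPS")  # ~6.8 TFLOPS, consistent with the quoted 7 teraflops
```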

For example, Titan X took less than three days to train AlexNet, an industry-standard model, on the 1.2-million-image ImageNet dataset, compared with more than 40 days for a 16-core CPU.

One of the keynote speakers, Elon Musk, CEO of Tesla Motors, focused on the application of the technology to self-driving cars, which will use the image recognition capabilities of deep learning neural networks to process information about the environment around them.

Musk stated: “What NVIDIA is doing with Tegra is really interesting and really important for self-driving in the future.”

Nvidia introduced the Drive PX at the Consumer Electronics Show in January 2015. The Drive PX is a self-driving car computer designed to slip the power of deep neural networks into real-world cars. The computer features two Tegra X1 chips, has inputs for up to 12 high-resolution cameras, and can process up to 1.3 gigapixels per second.
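
To put the 1.3 gigapixels-per-second figure in context, the sketch below works out what frame rate that budget would support with all 12 camera inputs populated; the 1080p per-camera resolution is an assumption for illustration, not a Drive PX specification.

```python
# Rough arithmetic on the Drive PX pixel budget (assumed 1080p cameras).
pixel_budget_per_s = 1.3e9
cameras = 12
pixels_per_frame = 1920 * 1080  # assumed 1080p sensors
max_fps_per_camera = pixel_budget_per_s / (cameras * pixels_per_frame)
print(f"~{max_fps_per_camera:.0f} frames/s per camera")  # roughly 52 fps
```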

“I think it’s going to become normal, like an elevator,” Musk said. “There used to be elevator operators and then we came up with circuitry so the elevator knew to come to your floor. Cars will be like that.”

Hardware and software designed to help scientists create their own neural networks have been combined into the Digits DevBox, a deep learning appliance powered by four Nvidia Titan X GPUs. The system comes with Nvidia’s Digits software preinstalled to help users with the design, training, and visualization of deep neural networks.
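
As a rough sketch of how work can be spread across the DevBox’s four GPUs, the snippet below uses PyTorch’s data parallelism; this is an illustration only, since Digits drives multi-GPU training from its browser interface rather than from user code.

```python
# Illustrative sketch of splitting a training batch across four GPUs.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
)

if torch.cuda.device_count() >= 4:
    # Each forward pass automatically splits the batch across the four GPUs.
    model = nn.DataParallel(model, device_ids=[0, 1, 2, 3]).cuda()
    batch = torch.randn(64, 3, 224, 224).cuda()
else:
    batch = torch.randn(64, 3, 224, 224)

print(model(batch).shape)  # torch.Size([64, 10])
```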

Deep learning technology has many potential applications across technical computing, HPC, and more general business computing markets. Although the technology is still in its infancy, the convergence of big data and computing power could provide an environment for it to thrive.

Nvidia, however, has already announced that the Pascal architecture will be available next year, promising a ten-fold speed-up in neural network performance over what is available today.

This story appears here as part of a cross-publishing agreement with Scientific Computing World.