Researchers Deploy GPUs To Build World's Largest Artificial Neural Network


Today at ISC’13, NVIDIA announced a collaboration with Stanford University to create the world’s largest artificial neural network, built to model how the human brain learns. At 6.5 times the size of the previous record-setting network developed by Google in 2012, the neural net will be capable of “learning” to model the behavior of the brain, including recognizing objects, characters, voices and audio in the same way that humans do.

Creating large-scale neural networks is extremely computationally expensive. For example, Google used approximately 1,000 CPU-based servers, or 16,000 CPU cores, to develop its neural network, which taught itself to recognize cats in a series of YouTube videos. That network included 1.7 billion parameters, the virtual representations of connections between neurons. In contrast, the Stanford team, led by Andrew Ng, director of the university’s Artificial Intelligence Lab, built an equally large network with only three servers, using NVIDIA GPUs to accelerate the processing of the massive amounts of data the network trains on. With 16 NVIDIA GPU-accelerated servers, the team then scaled up to an 11.2 billion-parameter neural network.
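The parameter counts follow directly from layer geometry: a fully connected layer joining m inputs to n outputs holds roughly m × n weights, so a few wide layers quickly reach billions of parameters, and training them reduces largely to dense matrix multiplies, exactly the workload GPUs excel at. The sketch below is a hypothetical illustration of that point using PyTorch; the framework and the layer sizes are our assumptions, not the Stanford team's actual setup.

```python
# A minimal, self-contained sketch (assumed setup using PyTorch; not the
# Stanford team's actual code) showing the workload behind large neural
# nets: dense matrix multiplies, which GPUs execute far faster than CPUs.
import time
import torch

# A fully connected layer joining n_in inputs to n_out outputs holds
# n_in * n_out weights; at 8192 x 8192 that is ~67 million parameters.
# Billion-parameter networks simply stack many such wide layers.
n_in, n_out = 8192, 8192
weights = torch.randn(n_in, n_out)   # the layer's parameters
batch = torch.randn(256, n_in)       # a batch of 256 input vectors

start = time.time()
out = batch @ weights                # one forward pass on the CPU
print(f"CPU matmul: {time.time() - start:.3f}s")

if torch.cuda.is_available():
    weights_gpu = weights.cuda()     # move parameters to GPU memory
    batch_gpu = batch.cuda()
    torch.cuda.synchronize()         # make sure the copies are done
    start = time.time()
    out_gpu = batch_gpu @ weights_gpu  # the same multiply on the GPU
    torch.cuda.synchronize()           # kernels launch asynchronously
    print(f"GPU matmul: {time.time() - start:.3f}s")
```

On most systems the GPU multiply finishes an order of magnitude or more faster, and the gap grows with layer width, which is the effect that lets a handful of GPU-accelerated servers stand in for hundreds of CPU servers.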

“Delivering significantly higher levels of computational performance than CPUs, GPU accelerators bring large-scale neural network modeling to the masses,” said Sumit Gupta, general manager of the Tesla Accelerated Computing Business Unit at NVIDIA. “Any researcher or company can now use machine learning to solve all kinds of real-life problems with just a few GPU-accelerated servers.”

Read the Full Story.

Comments

  1. Sherlock Ohms says

    Any chance insideHPC will be putting up video from the ISC’s Human Brain Project presentation?

    • We are told that the Human Brain Project session was taped by ISC, so we’ll post it as soon as it becomes available.