Multi-GPU Cluster to Power Deep Learning Research at NYU


Over at the Nvidia Blog, Kimberly Powell writes that New York University has just installed a new computing system for next-generation deep learning research. Called “ScaLeNet,” the eight-node Cirrascale cluster is powered by 64 Nvidia Tesla K80 dual-GPU accelerators.

GPUs are the go-to technology for deep learning, reducing the time it takes to train neural networks by days or even months. Until now, however, many researchers have worked on systems with only one GPU, which limits the number of training parameters and the size of the models they can develop. By distributing the training process across many GPUs, researchers can increase both the size of the models that can be trained and the number of models that can be tested. The result: more accurate models and new classes of applications. The new high-performance system will let NYU researchers take on bigger challenges and create deep learning models that let computers perform human-like perceptual tasks.
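The idea behind distributing training this way is synchronous data parallelism: each GPU computes gradients on its own shard of a batch, and the per-GPU gradients are averaged (an "all-reduce") before a single shared weight update. A minimal NumPy sketch of that scheme, with the loop iterations standing in for GPUs and a simple least-squares model chosen purely for illustration (function and variable names are our own, not part of ScaLeNet):

```python
import numpy as np

def shard_gradients(X, y, w, n_workers):
    """Simulate synchronous data-parallel training: split the batch across
    n_workers shards, compute each shard's mean gradient for a least-squares
    model, then average the shard gradients (the "all-reduce" step)."""
    grads = []
    for Xs, ys in zip(np.array_split(X, n_workers), np.array_split(y, n_workers)):
        residual = Xs @ w - ys
        grads.append(Xs.T @ residual / len(ys))  # per-shard mean gradient
    return np.mean(grads, axis=0)                # average across "GPUs"
```

When the batch divides evenly across workers, the averaged result is identical to the full-batch gradient, which is why data parallelism lets larger effective batches (and hence larger models and more experiments) be trained without changing the mathematics of the update.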

ScaLeNet will be used for research projects and educational programs at NYU's Center for Data Science (CDS) by a large community of faculty members, research scientists, postdoctoral fellows, and graduate students.

Yann LeCun, Facebook

“CDS has research projects that apply machine and deep learning to the physical, life, and social sciences,” said CDS founder Yann LeCun, who is also Director of AI Research at Facebook. “This includes Bayesian models of cosmology and high-energy physics, computational models of the visual and motor cortex, deep learning systems for medical and biological image analysis, as well as machine-learning models of social behavior and economics.”

LeCun will present a paper on fast, multi-GPU implementation of convolutional networks in May at the International Conference on Learning Representations in San Diego.

Sign up for our insideHPC Newsletter.