Nvidia cuDNN Speeds Deep Learning Applications


Today Nvidia announced cuDNN, new software designed to help developers harness the power of GPU acceleration for deep learning applications in areas such as image classification, video analytics, speech recognition and natural language processing.

A programming library built on the CUDA parallel programming model, Nvidia cuDNN uses GPUs to accelerate deep learning training by up to 10x compared to CPU-only methods. Featuring an easy-to-deploy, drop-in design, cuDNN lets developers rapidly build and optimize new training models and create more accurate applications using GPU accelerators.

Deep learning is one of the fastest-growing segments of the machine learning field. It involves training computers to teach themselves by sifting through massive amounts of data; for example, a system can learn to identify a dog by analyzing many images of dogs, ferrets, jackals, raccoons and other animals. But deep learning algorithms also demand enormous computing power to process those mountains of data. Doing this with CPU-based servers can require thousands of machines, which is expensive and impractical. GPUs, by contrast, are high-performance parallel processors that crunch through a broad variety of visual computing problems quickly and efficiently.
