“Deep learning developers and researchers want to train neural networks as fast as possible. Right now we are limited by computing performance,” said Dr. Diamos. “The first step in improving performance is to measure it, so we created DeepBench and are opening it up to the deep learning community. We believe that tracking performance on different hardware platforms will help processor designers better optimize their hardware for deep learning applications.”
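DeepBench measures the speed of core deep learning operations, such as dense matrix multiplies, across hardware platforms. As a rough illustration of that measure-first approach (not DeepBench's actual harness, which targets vendor libraries such as cuDNN and cuBLAS), here is a minimal GEMM timing sketch in Python with NumPy:

```python
import time
import numpy as np

def time_gemm(m, n, k, repeats=10):
    """Time a dense matrix multiply (GEMM), the kind of core
    operation DeepBench-style benchmarks measure, and report GFLOP/s."""
    a = np.random.rand(m, k).astype(np.float32)
    b = np.random.rand(k, n).astype(np.float32)
    a @ b  # warm-up run so one-time setup costs are excluded
    start = time.perf_counter()
    for _ in range(repeats):
        a @ b
    elapsed = (time.perf_counter() - start) / repeats
    # A GEMM performs roughly 2*m*n*k floating-point operations
    return 2.0 * m * n * k / elapsed / 1e9

print(f"1024x1024x1024 GEMM: {time_gemm(1024, 1024, 1024):.1f} GFLOP/s")
```

Running the same measurement on different processors gives the kind of cross-platform comparison the benchmark is meant to enable.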
Oak Ridge National Lab is hosting a 3-day GPU Mini-hackathon led by experts from the OLCF and Nvidia. The event takes place Nov. 1-3 in Knoxville, Tennessee. “General-purpose Graphics Processing Units (GPGPUs) potentially offer exceptionally high memory bandwidth and performance for a wide range of applications. The challenge in utilizing such accelerators has been the difficulty in programming them. This event will introduce you to GPU programming techniques.”
In this video from GTC 2016 in Taiwan, Nvidia CEO Jen-Hsun Huang unveils technology that will accelerate the deep learning revolution that is sweeping across industries. “AI computing will let us create machines that can learn and behave as humans do. It’s the reason why we believe this is the beginning of the age of AI.”
Today TYAN announced support and availability of the NVIDIA Tesla P100, P40 and P4 GPU accelerators with the new NVIDIA Pascal architecture. Incorporating NVIDIA’s state-of-the-art technologies allows TYAN to offer HPC users exceptional performance and features for data-intensive applications.
Today the University of Alabama at Birmingham unveiled a new supercomputer powered by Dell. With a peak performance of 110 Teraflops, the system is 10 times faster than its predecessor. “With their new Dell EMC HPC cluster, UAB researchers will have the compute and storage they need to aggressively research, uncover and apply knowledge that changes the lives of individuals and communities in many areas, including genomics and personalized medicine.”
Over at the Nvidia Blog, Jamie Beckett writes that the company is expanding its Deep Learning Institute in partnership with Microsoft and Coursera. The institute provides training to help people apply deep learning to solve challenging problems.
Nvidia’s GPU platforms have been widely used on the training side of the Deep Learning equation for some time now. Today the company announced new Pascal-based GPUs tailor-made for the inferencing side of Deep Learning workloads. “With the Tesla P100 and now Tesla P4 and P40, NVIDIA offers the only end-to-end deep learning platform for the data center, unlocking the enormous power of AI for a broad range of industries,” said Ian Buck, general manager of accelerated computing at NVIDIA.
Humans are very good at visual pattern recognition, especially when it comes to facial features and graphic symbols: identifying a specific person, or associating a specific symbol with its meaning. It is in exactly these kinds of scenarios that deep learning systems excel. Identifying each new person or symbol is achieved more efficiently by training than by reprogramming a conventional computer or explicitly updating database entries.
Today One Stop Systems (OSS) announced that its High Density Compute Accelerator (HDCA) and its Express Box 3600 (EB3600) are now available for purchase with the NVIDIA Tesla P100 for PCIe GPU. These high-density platforms deliver teraflop performance with greatly reduced cost and space requirements. The HDCA supports up to 16 Tesla P100s and the EB3600 supports up to 9 Tesla P100s. The Tesla P100 provides 4.7 TeraFLOPS of double-precision performance, 9.3 TeraFLOPS of single-precision performance and 18.7 TeraFLOPS of half-precision performance with NVIDIA GPU BOOST technology.