In this video from the 2016 GPU Technology Conference, Rich Friedrich from Hewlett Packard Enterprise describes how the company makes it easier for data scientists to program GPUs.
“In April, HPE announced a public, open-source version of the platform called the Cognitive Computing Toolkit. Instead of relying on the traditional CPUs that power most computers, the Toolkit runs on graphics processing units (GPUs), inexpensive chips designed for video game applications.”
Adoption of deep learning algorithms is exploding as organizations come under competitive pressure to support increasingly sophisticated simulation and machine learning models. With up to eight high-performance NVIDIA GPU cards configured for maximum transfer bandwidth, the HPE Apollo 6500 System is purpose-built for deep learning applications. Its high ratio of GPUs to CPUs, dense 4U form factor, and efficient design let organizations run deep learning recommendation algorithms faster and more efficiently, significantly reducing model training time and accelerating the delivery of real-time results, all while controlling costs.
When used with comprehensive GPU computing platforms like the NVIDIA Tesla Accelerated Computing Platform, the HPE Apollo 6500 provides maximum GPU processing capacity across a broad ecosystem of tools. The HPE Apollo 6500 is designed to support deep learning frameworks and programming interfaces such as Caffe, CUDA, Torch, Theano, TensorFlow, the NVIDIA Deep Learning SDK, and the newly announced Cognitive Computing Toolkit from HPE.