The Core Technologies for Deep Learning

This is the second article in a series taken from the inside HPC Guide to The Industrialization of Deep Learning.

Given the compute- and data-intensive nature of deep learning, which overlaps significantly with the needs of the high performance computing market, the TOP500 list provides a good proxy for current market dynamics and trends.

From the central computation perspective, today’s multicore processor architectures dominate the TOP500, with 91% of systems based on Intel processors. Looking forward, however, we can expect further developments that may include core CPU architectures such as OpenPOWER and ARM. In addition, system-on-a-chip approaches that combine general purpose processors with technologies such as field programmable gate arrays (FPGAs) and digital signal processors (DSPs) can be expected to play an increasing role in deep learning applications.

Beyond primary processor capabilities, math accelerators, commonly exemplified by NVIDIA GPUs, Intel Xeon Phi, and others, are critical to the requirements of deep convolutional neural networks. They accelerate the backpropagation functions that optimize learned weights, significantly reducing training times. These technologies typically depend on associated parallel programming environments and application programming interfaces (APIs) such as CUDA from NVIDIA, Parallel Studio XE 2016 from Intel, or heterogeneous environments such as OpenCL.
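To make the weight-optimization step concrete, the following is a minimal sketch of the kind of gradient computation and weight update that backpropagation performs and that accelerators parallelize. It uses plain NumPy and a single linear layer with a mean-squared-error loss; all names and values here are illustrative, not taken from any of the toolkits mentioned in this article.

```python
import numpy as np

# Illustrative single-layer example: gradient descent on mean-squared error.
# In a deep network, backpropagation applies this same gradient logic
# layer by layer via the chain rule; GPUs accelerate the matrix products.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3))   # 4 samples, 3 features (toy data)
y = rng.standard_normal((4, 1))   # toy regression targets
W = np.zeros((3, 1))              # learned weights, initialized to zero

lr = 0.1                          # learning rate
for _ in range(100):
    pred = X @ W                          # forward pass
    grad = X.T @ (pred - y) / len(X)      # gradient of MSE w.r.t. W
    W -= lr * grad                        # weight update step

mse = float(np.mean((X @ W - y) ** 2))    # training error after updates
```

The matrix multiplications in the forward pass and gradient computation are exactly the operations that map well onto GPUs and other math accelerators, which is why training time drops so sharply on such hardware.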

Software frameworks and toolkits are perhaps the most significant elements in delivering an effective deep learning environment. This is a rapidly developing area, driven in part by the arrival of suitable and affordable platforms such as the HPE Apollo 6500 System and the NVIDIA DGX-1, which are designed to support deep learning environments and other compute-intensive workloads. A number of popular open source deep learning toolkits are gaining traction, including:

  • Caffe from Berkeley Vision and Learning Center
  • Torch supported by Facebook, Twitter, Google and others
  • Theano from the Université de Montréal
  • TensorFlow from Google
  • The NVIDIA Deep Learning SDK
  • The HPE Cognitive Computing Toolkit from Hewlett Packard Labs

With all of these technology components coming together, we are on the cusp of a new era of cognitive computing that can elevate deep learning and artificial intelligence systems beyond the research level and high-profile media events. Examples such as IBM’s Watson system winning the TV quiz show Jeopardy! or Google DeepMind’s AlphaGo convincingly beating one of the world’s leading Go champions, in a game several orders of magnitude more complex than chess, are extremely impressive. They demonstrate the potential for mainstream adoption of deep learning systems, yet are not in themselves sufficient to support mainstream enterprise adoption.

In the coming weeks, this series will consist of articles that explore:

If you prefer, you can download the complete insideHPC Guide to The Industrialization of Deep Learning, courtesy of Hewlett Packard Enterprise.