Advances in computing made possible by new processors enable systems that sit at the center of a new generation of deep learning algorithms. These processors aid in creating algorithms that ingest tremendous amounts of data and build abstract models from it.
An environment that supports deep learning typically consists of algorithms that draw conclusions from data at very high speeds. Processors such as the Intel Xeon Phi processor, which contain a large number of processing cores and operate in SIMD (single instruction, multiple data) mode, are critical to these new environments. With the Intel Xeon Phi processor, new insights can be discovered from existing data or from new data sources.
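As a rough illustration of the data-parallel style that SIMD hardware exploits, the sketch below (assuming NumPy is available; it is not specific to the Xeon Phi) contrasts a scalar loop with a vectorized expression that applies one operation across many elements at once:

```python
import numpy as np

# Scalar-style processing: one element per step.
def scale_loop(a, factor):
    out = np.empty_like(a)
    for i in range(a.shape[0]):
        out[i] = a[i] * factor
    return out

# Vectorized form: the whole array in one data-parallel operation,
# the pattern that wide SIMD units execute efficiently.
def scale_vectorized(a, factor):
    return a * factor

x = np.arange(1000, dtype=np.float32)
assert np.allclose(scale_loop(x, 2.0), scale_vectorized(x, 2.0))
```

The two functions compute the same result; the vectorized form simply expresses the work as a single operation over the whole array, which compilers and numeric libraries can map onto SIMD instructions.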
A popular deep learning example is in the area of computer vision and image processing. Algorithms can be trained, with limited human intervention, to recognize lettering on a building; given more data, the same system could be taught to recognize the same pattern when visibility is poor or the sign is not fully legible. The key is that these algorithms must run extremely fast on large datasets, which currently only accelerated products (separate from the main CPU) can handle.
Computer hardware should be designed to handle different tasks flexibly and at high rates. For deep learning, this may require that workloads use various precisions for data representation. In some algorithms, processing more data at lower precision will be critical to efficient execution. In some domains where deep learning will make a difference, single- or double-precision operations will be the norm; in other cases, lower precision may be ideal.
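The precision trade-off can be made concrete with a small sketch (assuming NumPy): the same million values stored at three precisions, where each step down in precision halves the memory footprint, and therefore the bandwidth needed to move the data, at the cost of range and accuracy.

```python
import numpy as np

# The same data at three precisions: half, single, and double.
# Lower precision halves storage (and memory bandwidth) per step.
n = 1_000_000
for dtype in (np.float64, np.float32, np.float16):
    a = np.ones(n, dtype=dtype)
    print(f"{dtype.__name__}: {a.nbytes} bytes")
```

For workloads dominated by data movement, fitting twice as many values into the same cache and memory bandwidth can matter more than the extra digits of accuracy, which is why reduced-precision formats appear in deep learning hardware.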
Deep learning requires an ecosystem of hardware and software environments that makes it easy to create innovative applications that perform well. Having just one or the other will not attract leading-edge deep learning developers.