Ahh, benchmarks. Perhaps nothing is better fodder for an argument in IT, especially when competitors are locking horns over undiscovered country.
The performance claims in dispute include:
- Xeon Phi is 2.3x faster in training than GPUs
- Xeon Phi offers 38% better scaling than GPUs across nodes
- Xeon Phi delivers strong scaling to 128 nodes while GPUs do not
According to Buck, Intel is comparing its new Intel Xeon Phi processor against outdated GPUs based on Nvidia’s Maxwell architecture. Nvidia has since introduced products based on the Pascal architecture, which helped propel the company to strong profits in its most recent quarter.
“Few fields are moving faster right now than deep learning,” writes Buck. “Today’s neural networks are 6x deeper and more powerful than just a few years ago. There are new techniques in multi-GPU scaling that offer even faster training performance. In addition, our architecture and software have improved neural network training time by over 10x in a year by moving from Kepler to Maxwell to today’s latest Pascal-based systems, like the DGX-1 with eight Tesla P100 GPUs. So it’s understandable that newcomers to the field may not be aware of all the developments that have been taking place in both hardware and software.”
While we don’t have the data to take sides, one thing is for sure: the machine learning space is heating up. Nvidia has a big head start in machine learning, but Intel has gotten religion in a big way and is a formidable competitor.
The good news is that competition like this will surely result in better, faster machine learning systems for customers. Your mileage may vary.