

Nvidia Dominates MLPerf AI Benchmark Competition Again

Nvidia said it has extended its lead on the MLPerf Benchmark for AI inference with the company’s A100 GPU chip introduced earlier this year. Nvidia won each of the six application tests for data center and edge computing systems in the second version of MLPerf Inference. These tests are an expansion beyond the initial two computer vision benchmarks – AI tests now include recommendation systems, natural language understanding, speech recognition and medical imaging.

The full results of MLPerf 0.7 can be found here.

Impressive as Nvidia’s showing was, it should also be noted that some companies that might have been expected to participate in the MLPerf competition did not, something commented on by Karl Freund, senior analyst, HPC and machine learning, at industry watcher Moor Insights & Strategy.

“Nvidia did great against a shallow field of competitors,” Freund said. “Their A100 results were amazing compared to the (Nvidia) V100 (GPU), demonstrating the value of their enhanced Tensor core architecture. And I commend MLPerf for adding new benchmarks that are increasingly representative of fast-growing inference opportunities, such as recommendation engines.

“That being said, the competition is either too busy with early customer projects or their chips are just not yet ready,” said Freund. “For example, SambaNova (AI systems platform) announced a new partnership with LLNL, and Intel Habana (programmable deep learning accelerator) is still in the oven. If I were still at a chip startup, I would wait to run MLPerf (an expensive project) until I already had secured a few lighthouse customers.”

In its announcement of the results, Nvidia said its A100 delivered up to 237x faster AI inference than CPUs.

“We’re at a tipping point as every industry seeks better ways to apply AI to offer new services and grow their business,” said Ian Buck, general manager and vice president of Accelerated Computing at Nvidia. “The work we’ve done to achieve these results on MLPerf gives companies a new level of AI performance to improve our everyday lives.”

Nvidia said the company and its partners submitted MLPerf 0.7 results using Nvidia’s acceleration platform, which includes Nvidia data center GPUs, edge AI accelerators and Nvidia-optimized software. The A100, introduced earlier this year and featuring third-generation Tensor cores and multi-instance GPU technology, increased its lead on the ResNet-50 test, beating CPUs by 30x versus 6x in the last round. The company added that, for the first time, its GPUs offer more AI inference capacity in public clouds than CPUs, and said that total cloud AI inference capacity on Nvidia GPUs has grown about 10x every two years.
