MLPerf: Latest Results Highlight ‘More Capable ML Training’


Open engineering consortium MLCommons has released new results from MLPerf Training v2.0, which measures how fast various platforms train machine learning models. The organization said the latest MLPerf Training results “demonstrate broad industry participation and up to 1.8X greater performance ultimately paving the way for more capable intelligent systems….”

As in previous MLPerf rounds, NVIDIA and its partners led on AI training performance and submitted the most entries across all benchmarks – including speech recognition, natural language processing, recommender systems, object detection, image classification and others – with 90 percent of all entries coming from the NVIDIA ecosystem.

The results and additional information about the benchmarks are available from MLCommons.

The results include over 250 performance results from 21 different submitters, including Azure, Baidu, Dell, Fujitsu, GIGABYTE, Google, Graphcore, HPE, Inspur, Intel-HabanaLabs, Lenovo, Nettrix, NVIDIA, Samsung, and Supermicro. In particular, MLCommons would like to congratulate first-time MLPerf Training submitters ASUSTeK, CASIA, H3C, HazyResearch, Krai, and MosaicML.

The MLPerf Training benchmark suite comprises full-system tests that stress machine learning models, software, and hardware for a broad range of applications. The open-source and peer-reviewed benchmark suite provides a level playing field for competition that drives innovation, performance, and energy efficiency for the entire industry.
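MLPerf Training scores a system by its "time to train": the wall-clock time needed to train a model to a fixed quality target. The toy loop below is a minimal sketch of that protocol only; the model, accuracy curve, and target value are all hypothetical stand-ins, not part of any MLPerf reference implementation.

```python
import time

# Hypothetical quality target for illustration (real MLPerf benchmarks
# define a specific target per workload, e.g. a validation accuracy).
TARGET_ACCURACY = 0.90

def train_one_epoch(epoch):
    """Stand-in for a real training epoch; returns a simulated accuracy
    that improves by a fixed amount each epoch (purely illustrative)."""
    return min(0.99, 0.5 + 0.07 * epoch)

# Time-to-train protocol: run epochs until the quality target is met,
# then report the elapsed wall-clock time as the score.
start = time.perf_counter()
epoch = 0
accuracy = 0.0
while accuracy < TARGET_ACCURACY:
    epoch += 1
    accuracy = train_one_epoch(epoch)
time_to_train = time.perf_counter() - start

print(f"reached accuracy {accuracy:.2f} after {epoch} epochs "
      f"in {time_to_train:.4f}s")
```

Because the score is end-to-end time rather than raw throughput, it rewards whole-system improvements – hardware, software, and training techniques alike – which is why novel software methods can move the results as much as new silicon.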

In this round, MLPerf Training added a new object detection benchmark that trains a RetinaNet reference model on the larger and more diverse Open Images dataset. This new test more accurately reflects state-of-the-art ML training for applications like collision avoidance for vehicles and robotics, retail analytics, and many others.

“I’m excited to release our new object detection benchmark, which was built based on extensive feedback from a customer advisory board and is an excellent tool for purchasing decisions, designing new accelerators and improving software,” said David Kanter, executive director of MLCommons.

“We are thrilled with the greater participation and the breadth, diversity, and performance of the MLPerf Training results,” said Eric Han, Co-Chair of the MLPerf Training Working Group. “We are especially excited about many of the novel software techniques highlighted in the latest round.”