MLCommons: Latest MLPerf AI Benchmark Results Show Machine Learning Inference Advances


SAN FRANCISCO – September 8, 2022 – Today, the open engineering consortium MLCommons announced results from MLPerf Inference v2.1, which analyzes the performance of inference — the application of a trained machine learning model to new data. Inference allows for the intelligent enhancement of a vast array of applications and systems.
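To make the term concrete, here is a minimal sketch (with hypothetical weights) of what "inference" means in practice: applying a model's already-trained, fixed parameters to a new input, with no further learning involved.

```python
# Minimal sketch of inference: the parameters below stand in for a model
# trained earlier; they are illustrative values, not from any real model.
def predict(x, weights, bias):
    # Weighted sum plus bias; the trained parameters are not updated.
    return sum(w * xi for w, xi in zip(weights, x)) + bias

# Parameters assumed to have been learned in a prior training phase.
weights, bias = [0.4, -0.2, 0.1], 0.05

# Inference: score a new, previously unseen data point.
print(predict([1.0, 2.0, 3.0], weights, bias))
```

MLPerf Inference measures how quickly and efficiently systems can perform this kind of forward pass at scale, across a range of realistic models and scenarios.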

The full results and additional information about the benchmarks are available from MLCommons.

MLCommons said this round set new records with nearly 5,300 performance results and 2,400 power measurements (1.37x and 1.09x more than the previous round, respectively), "reflecting the community's vigor."

The MLPerf Inference benchmarks are focused on datacenter and edge systems, and Alibaba, ASUSTeK, Azure, Biren, Dell, Fujitsu, GIGABYTE, H3C, HPE, Inspur, Intel, Krai, Lenovo, Moffett, Nettrix, Neural Magic, NVIDIA, OctoML, Qualcomm Technologies, Inc., SAPEON, and Supermicro are among the contributors to the submission round.

MLPerf benchmarks are comprehensive system tests that stress machine learning models, software, and hardware, and optionally monitor energy consumption, according to MLCommons. The open-source, peer-reviewed benchmark suites are designed to provide a level playing field for competition, fostering innovation, performance, and energy efficiency across the whole sector.

“We are very excited with the growth in the ML community and welcome new submitters across the globe such as Biren, Moffett AI, Neural Magic, and SAPEON,” said MLCommons Executive Director David Kanter. “The exciting new architectures all demonstrate the creativity and innovation in the industry designed to create greater AI functionality that will bring new and exciting capability to business and consumers alike.”