

NVIDIA Tops MLPerf AI Inference Benchmarks

Today NVIDIA posted the fastest results on new benchmarks measuring the performance of AI inference workloads in data centers and at the edge — building on the company’s equally strong position in recent benchmarks measuring AI training. “NVIDIA topped all five benchmarks for both data center-focused scenarios (server and offline), with Turing GPUs providing the highest performance per processor among commercially available entries.”

MLPerf Releases Over 500 Inference Benchmark Results

Today the MLPerf consortium released over 500 inference benchmark results from 14 organizations. “Having independent benchmarks helps customers understand and evaluate hardware products in a comparable light. MLPerf is helping drive transparency and oversight into machine learning performance that will enable vendors to mature and build out the AI ecosystem. Intel is excited to be part of the MLPerf effort to realize the vision of AI Everywhere,” said Dr. Naveen Rao, Corporate Vice President and General Manager of AI Products at Intel.

Google Cloud and NVIDIA Set New Training Records on MLPerf v0.6 Benchmark

Today the MLPerf effort released results for MLPerf Training v0.6, the second round of results from its machine learning training performance benchmark suite. MLPerf is a consortium of over 40 companies and researchers from leading universities, and the MLPerf benchmark suites are rapidly becoming the industry standard for measuring machine learning performance. “We are creating a common yardstick for training and inference performance,” said Peter Mattson, MLPerf General Chair.

New MLPerf Benchmark Measures Machine Learning Inference Performance

Today a consortium involving over 40 leading companies and university researchers introduced MLPerf Inference v0.5, the first industry-standard machine learning benchmark suite for measuring system performance and power efficiency. “Our goal is to create common and relevant metrics to assess new machine learning software frameworks, hardware accelerators, and cloud and edge computing platforms in real-life situations,” said David Kanter, co-chair of the MLPerf inference working group. “The inference benchmarks will establish a level playing field that even the smallest companies can use to compete.”