@HPCpodcast: MLCommons’ David Kanter on AI Benchmarks and What They’re Telling Us

Special guest David Kanter of MLCommons joins us to discuss the critical importance of AI performance metrics. In addition to the well-known MLPerf benchmark for AI training, MLCommons provides ….

Nvidia, AMD, Intel and Google Debut Chips in MLPerf Inference Benchmark for GenAI

Today, MLCommons announced new results for its industry-standard MLPerf Inference v4.1 benchmark suite, which delivers machine learning (ML) system performance benchmarking in an architecture-neutral, representative, and ….

Updated MLPerf AI Inference Benchmark Results Released

Today, MLCommons announced new results from its MLPerf Inference v4.0 benchmark suite for machine learning (ML) system performance. To view the results for MLPerf Inference v4.0, visit the Datacenter and Edge results pages. The MLPerf Inference….

MLPerf Training and HPC Benchmarks Show 49X Performance Gains in 5 Years

MLCommons said the results highlight performance gains of up to 2.8X compared to five months ago and 49X over the first results five years ago, “reflecting the tremendous rate of innovation in systems for machine learning.”

MLCommons: MLPerf Results Show AI Performance Gains

Today MLCommons announced new results from two industry-standard MLPerf benchmark suites: Training v3.0, which measures the performance of training machine learning models, and Tiny v1.1, which measures how quickly a trained neural network can process new data on extremely low-power devices in the smallest form factors. To view the results and to find additional […]

MLCommons: Latest MLPerf AI Benchmark Results Show Machine Learning Inference Advances

SAN FRANCISCO – September 8, 2022 – Today, the open engineering consortium MLCommons announced results from MLPerf Inference v2.1, which analyzes the performance of inference — the application of a trained machine learning model to new data. Inference allows for the intelligent enhancement of a vast array of applications and systems. Here are the results and […]

MLPerf: Latest Results Highlight ‘More Capable ML Training’

Open engineering consortium MLCommons has released new results from MLPerf Training v2.0, which measures how fast various platforms train machine learning models. The organization said the latest MLPerf Training results “demonstrate broad industry participation and up to 1.8X greater performance ultimately paving the way for more capable intelligent systems….” As it has done with previous […]

MLPerf Results Highlight Advances in Machine Learning Inference Performance and Efficiency

SAN FRANCISCO – April 6, 2022 – Today MLCommons, an open engineering consortium, released new results for three MLPerf benchmark suites – Inference v2.0, Mobile v2.0, and Tiny v0.7. MLCommons said the three benchmark suites measure the performance of inference – applying a trained machine learning model to new data. Inference enables adding intelligence to a wide range […]

MLCommons Releases MLPerf Training v1.1 AI Benchmarks

San Francisco — Dec. 1, 2021 – Today, MLCommons, the open engineering consortium, released new results for MLPerf Training v1.1, the organization’s machine learning training performance benchmark suite. MLPerf Training measures the time it takes to train machine learning models to a standard quality target in a variety of tasks including  image classification, object detection, […]

MLPerf Releases Results for HPC v1.0 ML Training Benchmark

Today, MLCommons released new results for MLPerf HPC v1.0, the organization’s machine learning training performance benchmark suite for high-performance computing (HPC). To view the results, visit https://mlcommons.org/en/training-hpc-10/. NVIDIA announced that NVIDIA-powered systems won four of five MLPerf HPC 1.0 tests, which measure training of AI models in three typical workloads for HPC centers: CosmoFlow estimates details […]