
MLPerf Results Highlight Advances in Machine Learning Inference Performance and Efficiency

SAN FRANCISCO – April 6, 2022 – Today MLCommons, an open engineering consortium, released new results for three MLPerf benchmark suites – Inference v2.0, Mobile v2.0, and Tiny v0.7. MLCommons said the three benchmark suites measure the performance of inference – applying a trained machine learning model to new data. Inference enables adding intelligence to a wide range […]

MLCommons Releases MLPerf Training v1.1 AI Benchmarks

San Francisco — Dec. 1, 2021 – Today, MLCommons, the open engineering consortium, released new results for MLPerf Training v1.1, the organization’s machine learning training performance benchmark suite. MLPerf Training measures the time it takes to train machine learning models to a standard quality target in a variety of tasks including image classification, object detection, […]

MLPerf Releases Results for HPC v1.0 ML Training Benchmark

Today, MLCommons released new results for MLPerf HPC v1.0, the organization’s machine learning training performance benchmark suite for high-performance computing (HPC). To view the results, visit https://mlcommons.org/en/training-hpc-10/. NVIDIA announced that NVIDIA-powered systems won four of five MLPerf HPC 1.0 tests, which measure the training of AI models in three typical workloads for HPC centers: CosmoFlow estimates details […]

MLCommons Releases MLPerf Inference v1.1 Results 

San Francisco – September 22, 2021 – Today, MLCommons, an open engineering consortium, released new results for MLPerf Inference v1.1, the organization’s machine learning inference performance benchmark suite. MLPerf Inference measures the performance of applying a trained machine learning model to new data for a wide variety of applications and form factors, and optionally includes […]

Nvidia Romps in Latest MLPerf AI Benchmark Results

What has become Nvidia’s regular romp through MLPerf AI benchmarks continued with today’s release of the latest performance measurements across a range of inference workloads, including computer vision, medical imaging, recommender systems, speech recognition and natural language processing. The latest benchmark round received submissions from 17 organizations and includes 1,994 performance and 862 power efficiency […]

GIGABYTE Joins MLCommons to Accelerate the Machine Learning Community

Taipei, Taiwan, December 22, 2020 – GIGABYTE Technology (TWSE: 2376), a maker of high-performance servers and workstations, today announced GIGABYTE as one of the founding members of MLCommons, an open engineering consortium with the goal of accelerating machine learning with benchmarking, large-scale open data sets, and community-driven best practices. In 2018, a group […]

Nvidia Dominates MLPerf AI Benchmark Competition Again

Nvidia said it has extended its lead on the MLPerf Benchmark for AI inference with the company’s A100 GPU chip introduced earlier this year. Nvidia won each of the six application tests for data center and edge computing systems in the second version of MLPerf Inference. These tests are an expansion beyond the initial two […]

Radio Free HPC: MLPerf Wars and AMD’s Gawdy Earnings

Summer inventory clearance days for Radio Free HPC! In this episode, we talk about the spate of MLPerf benchmarks and how AMD hit it out of the park with its most recent quarterly earnings.

Inspur NF5488A5 Breaks AI Server Performance Record in Latest MLPerf Benchmarks

San Jose, Aug. 5 – In the MLPerf AI benchmark results released last week, the Inspur NF5488A5 server set a new AI performance record in the ResNet-50 training task, topping the list for single-server performance. MLPerf is the most influential industry benchmarking organization in the field of AI worldwide. Established […]

MLPerf-HPC Working Group seeks participation

In this special guest feature, Murali Emani from Argonne writes that a team of scientists from DoE labs has formed a working group called MLPerf-HPC to focus on benchmarking machine learning workloads for high performance computing. “As machine learning (ML) is becoming a critical component to help run applications faster, improve throughput and understand the insights from the data generated from simulations, benchmarking ML methods with scientific workloads at scale will be important as we progress towards next generation supercomputers.”