San Francisco – September 22, 2021 – Today, MLCommons, an open engineering consortium, released new results for MLPerf Inference v1.1, the organization’s machine learning inference performance benchmark suite. MLPerf Inference measures the performance of applying a trained machine learning model to new data for a wide variety of applications and form factors, and optionally includes […]
Nvidia Romps in Latest MLPerf AI Benchmark Results
What has become Nvidia’s regular romp through MLPerf AI benchmarks continued with today’s release of the latest performance measurements across a range of inference workloads, including computer vision, medical imaging, recommender systems, speech recognition and natural language processing. The latest benchmark round received submissions from 17 organizations and includes 1,994 performance and 862 power efficiency […]
GIGABYTE Joins MLCommons to Accelerate the Machine Learning Community
Taipei, Taiwan, December 22, 2020 – GIGABYTE Technology (TWSE: 2376), a maker of high-performance servers and workstations, today announced that it is one of the founding members of MLCommons, an open engineering consortium with the goal of accelerating machine learning through benchmarking, large-scale open data sets, and community-driven best practices. In 2018, a group […]
Nvidia Dominates MLPerf AI Benchmark Competition Again
Nvidia said it has extended its lead on the MLPerf benchmarks for AI inference with the company’s A100 GPU, introduced earlier this year. Nvidia won each of the six application tests for data center and edge computing systems in the second version of MLPerf Inference. These tests are an expansion beyond the initial two […]
Radio Free HPC: MLPerf Wars and AMD’s Gaudy Earnings
Summer inventory clearance days for Radio Free HPC! In this episode, we talk about the spate of MLPerf benchmarks and how AMD hit it out of the park with its most recent quarterly earnings.
Inspur NF5488A5 Breaks AI Server Performance Record in Latest MLPerf Benchmarks
San Jose, Aug. 5 – In the MLPerf AI benchmark results released last week, the Inspur NF5488A5 server set a new AI performance record in the ResNet-50 training task, topping the list for single-server performance. MLPerf (results here) is the world’s most influential industry benchmarking organization in the field of AI. Established […]
MLPerf-HPC Working Group seeks participation
In this special guest feature, Murali Emani from Argonne writes that a team of scientists from DOE labs has formed a working group called MLPerf-HPC to focus on benchmarking machine learning workloads for high performance computing. “As machine learning (ML) is becoming a critical component to help run applications faster, improve throughput and understand the insights from the data generated from simulations, benchmarking ML methods with scientific workloads at scale will be important as we progress towards next generation supercomputers.”
NVIDIA Tops MLPerf AI Inference Benchmarks
Today NVIDIA posted the fastest results on new benchmarks measuring the performance of AI inference workloads in data centers and at the edge — building on the company’s equally strong position in recent benchmarks measuring AI training. “NVIDIA topped all five benchmarks for both data center-focused scenarios (server and offline), with Turing GPUs providing the highest performance per processor among commercially available entries.”
MLPerf Releases Over 500 Inference Benchmarks
Today the MLPerf consortium released over 500 inference benchmark results from 14 organizations. “Having independent benchmarks help customers understand and evaluate hardware products in a comparable light. MLPerf is helping drive transparency and oversight into machine learning performance that will enable vendors to mature and build out the AI ecosystem. Intel is excited to be part of the MLPerf effort to realize the vision of AI Everywhere,” said Dr. Naveen Rao, corporate vice president and general manager of AI Products at Intel.
Google Cloud and NVIDIA Set New Training Records on MLPerf v0.6 Benchmark
Today the MLPerf effort released results for MLPerf Training v0.6, the second round of results from its machine learning training performance benchmark suite. MLPerf is a consortium of over 40 companies and researchers from leading universities, and the MLPerf benchmark suites are rapidly becoming the industry standard for measuring machine learning performance. “We are creating a common yardstick for training and inference performance,” said Peter Mattson, MLPerf General Chair.