GIGABYTE Joins MLCommons to Accelerate the Machine Learning Community

Taipei, Taiwan, December 22, 2020 – GIGABYTE Technology (TWSE: 2376), a maker of high-performance servers and workstations, today announced that it is one of the founding members of MLCommons, an open engineering consortium with the goal of accelerating machine learning through community-driven benchmarking, large-scale open data sets, and best practices. In 2018, a group […]

Nvidia Dominates MLPerf AI Benchmark Competition Again

Nvidia said it has extended its lead on the MLPerf Benchmark for AI inference with the company’s A100 GPU chip introduced earlier this year. Nvidia won each of the six application tests for data center and edge computing systems in the second version of MLPerf Inference. These tests are an expansion beyond the initial two […]

Radio Free HPC: MLPerf Wars and AMD’s Gaudy Earnings

Summer inventory clearance days for Radio Free HPC! In this episode, we talk about the spate of MLPerf benchmarks and how AMD hit it out of the park with its most recent quarterly earnings.

Inspur NF5488A5 Breaks AI Server Performance Record in Latest MLPerf Benchmarks

San Jose, Aug. 5 – In MLPerf AI benchmark results released last week, the Inspur NF5488A5 server set a new AI performance record in the ResNet-50 training task, topping the list for single-server performance. MLPerf is the world’s most influential industry benchmarking organization in the field of AI. Established […]

MLPerf-HPC Working Group seeks participation

In this special guest feature, Murali Emani from Argonne writes that a team of scientists from DoE labs has formed a working group called MLPerf-HPC to focus on benchmarking machine learning workloads for high performance computing. “As machine learning (ML) is becoming a critical component to help run applications faster, improve throughput, and extract insights from the data generated by simulations, benchmarking ML methods with scientific workloads at scale will be important as we progress toward next-generation supercomputers.”

NVIDIA Tops MLPerf AI Inference Benchmarks

Today NVIDIA posted the fastest results on new benchmarks measuring the performance of AI inference workloads in data centers and at the edge — building on the company’s equally strong position in recent benchmarks measuring AI training. “NVIDIA topped all five benchmarks for both data center-focused scenarios (server and offline), with Turing GPUs providing the highest performance per processor among commercially available entries.”

MLPerf Releases Over 500 Inference Benchmarks

Today the MLPerf consortium released over 500 inference benchmark results from 14 organizations. “Having independent benchmarks helps customers understand and evaluate hardware products in a comparable light. MLPerf is helping drive transparency and oversight into machine learning performance that will enable vendors to mature and build out the AI ecosystem. Intel is excited to be part of the MLPerf effort to realize the vision of AI Everywhere,” stated Dr. Naveen Rao, Corporate VP and GM of AI Products at Intel.

Google Cloud and NVIDIA Set New Training Records on MLPerf v0.6 Benchmark

Today the MLPerf effort released results for MLPerf Training v0.6, the second round of results from its machine learning training performance benchmark suite. MLPerf is a consortium of over 40 companies and researchers from leading universities, and the MLPerf benchmark suites are rapidly becoming the industry standard for measuring machine learning performance. “We are creating a common yardstick for training and inference performance,” said Peter Mattson, MLPerf General Chair.

New MLPerf Benchmark Measures Machine Learning Inference Performance

Today a consortium involving over 40 leading companies and university researchers introduced MLPerf Inference v0.5, the first industry standard machine learning benchmark suite for measuring system performance and power efficiency. “Our goal is to create common and relevant metrics to assess new machine learning software frameworks, hardware accelerators, and cloud and edge computing platforms in real-life situations,” said David Kanter, co-chair of the MLPerf inference working group. “The inference benchmarks will establish a level playing field that even the smallest companies can use to compete.”