MLCommons Releases AILuminate LLM v1.1 with French Language Capabilities
Paris – February 11, 2025: MLCommons, in partnership with the AI Verify Foundation, today released v1.1 of AILuminate, incorporating new French language capabilities into its first-of-its-kind AI safety benchmark. The new update – which was announced at the Paris AI Action Summit – marks the next step towards a global standard for AI safety and comes as […]
MLCommons Launches LLM Safety Benchmark
Dec. 4, 2024 — MLCommons today released AILuminate, a safety test for large language models. The v1.0 benchmark – which provides a series of safety grades for the most widely-used LLMs – is the first AI safety benchmark designed collaboratively by AI researchers and industry experts, according to MLCommons. It builds on MLCommons’ track record […]
@HPCpodcast: MLCommons’ David Kanter on AI Benchmarks and What They’re Telling Us
Special guest David Kanter of MLCommons joins us to discuss the critical importance of AI performance metrics. In addition to the well-known MLPerf benchmarks for AI training, MLCommons provides ….
Nvidia, AMD, Intel and Google Debut Chips in MLPerf Inference Benchmark for GenAI
Today, MLCommons announced new results for its industry-standard MLPerf Inference v4.1 benchmark suite, which delivers machine learning (ML) system performance benchmarking in an architecture-neutral, representative, and ….
Updated MLPerf AI Inference Benchmark Results Released
Today, MLCommons announced new results from its MLPerf Inference v4.0 benchmark suite, which delivers machine learning (ML) system performance benchmarking. To view the results for MLPerf Inference v4.0, visit the Datacenter and Edge results pages. The MLPerf Inference….
MLPerf Training and HPC Benchmark Show 49X Performance Gains in 5 Years
MLCommons said the results highlight performance gains of up to 2.8X compared to five months ago and 49X over the first results five years ago, “reflecting the tremendous rate of innovation in systems for machine learning.”
MLCommons: MLPerf Results Show AI Performance Gains
Today MLCommons announced new results from two industry-standard MLPerf benchmark suites: Training v3.0, which measures the performance of training machine learning models, and Tiny v1.1, which measures how quickly a trained neural network can process new data on extremely low-power devices in the smallest form factors. To view the results and to find additional […]
An AI-Flavored Set of HPC Predictions for 2023
We recently read an interesting article in Wired magazine on neuromorphic computing stating that neuroscientists increasingly regard the human brain as a “prediction machine” – that people, as a rule, are in a constant state of anticipation, extrapolation and inference. In HPC, situated as it is at the forward edge of compute power, data analysis and […]
MLCommons: Latest MLPerf AI Benchmark Results Show Machine Learning Inference Advances
SAN FRANCISCO – September 8, 2022 – Today, the open engineering consortium MLCommons announced results from MLPerf Inference v2.1, which analyzes the performance of inference — the application of a trained machine learning model to new data. Inference allows for the intelligent enhancement of a vast array of applications and systems. Here are the results and […]
MLPerf: Latest Results Highlight ‘More Capable ML Training’
Open engineering consortium MLCommons has released new results from MLPerf Training v2.0, which measures how fast various platforms train machine learning models. The consortium said the latest MLPerf Training results “demonstrate broad industry participation and up to 1.8X greater performance ultimately paving the way for more capable intelligent systems….” As it has done with previous […]