MLCommons Releases MLPerf AI Training v5.1 Results

Today, MLCommons announced new results for the MLPerf Training v5.1 benchmark suite, highlighting the rapid evolution and increasing richness of the AI ecosystem as well as significant performance improvements from new generations of systems.

MLPerf Releases Storage v2.0 Benchmark Results

San Francisco, CA — MLCommons has announced results for its MLPerf Storage v2.0 benchmark suite, designed to measure the performance of storage systems for machine learning workloads in an architecture-neutral, representative, and reproducible manner. According to MLCommons, the results show that storage system performance continues to improve rapidly, with tested systems serving roughly twice the […]

MLCommons Releases AILuminate LLM v1.1 with French Language Capabilities

Paris – February 11, 2025: MLCommons, in partnership with the AI Verify Foundation, today released v1.1 of AILuminate, incorporating new French language capabilities into its first-of-its-kind AI safety benchmark. The new update – which was announced at the Paris AI Action Summit – marks the next step towards a global standard for AI safety and comes as […]

HPC News Bytes 20250203: DeepSeek Lessons, Intel Reroutes GPU Roadmap, LANL and OpenAI for National Security, Nuclear Reactors for Google Data Centers

The HPC-AI world was upended last week by DeepSeek AI benchmark numbers. As the dust settles, we offer commentary on what it may, at this stage, mean: five lessons from DeepSeek, Intel GPU rack-scale architecture ….

@HPCpodcast: MLCommons’ David Kanter on AI Benchmarks and What They’re Telling Us

Special guest David Kanter of MLCommons joins us to discuss the critical importance of AI performance metrics. In addition to the well-known MLPerf benchmark for AI training, MLCommons provides ….