HPC News Bytes 20241118: On the Scene at SC24, TSMC and the CHIPS Act, Cerebras at Sandia

Good morning to you from SC24 in Atlanta. Here’s a quick (5:33) run-through of recent developments in the world of HPC-AI, including: The SC24 conference starts today; TSMC, the CHIPS Act and semiconductor demand; Sandia National Labs teams with Cerebras.

Sandia: Molecular Dynamics Simulation Record Breakers Nominated for Gordon Bell Prize

Sandia National Laboratories announced today a new speed record in molecular dynamics simulation. A collaborative research team ran simulations using the Cerebras Wafer Scale Engine (WSE) processor and “raced past the maximum speed achievable on the world’s ….

Aramco and Cerebras Sign AI MoU

SUNNYVALE, Calif. & RIYADH, Saudi Arabia – Cerebras Systems today announced the signing of a memorandum of understanding with Aramco under which they aim to bring high performance AI inference to industries, universities, and enterprises in Saudi Arabia. Aramco plans to build, train and deploy large language models using Cerebras’ CS-3 systems. Aramco’s new high-performance […]

HPC News Bytes Podcast 20240902: Nvidia and Blackwell Hit Limits, Revved-Up AI Inference and Oncoming Regulations, HPC in Russia

Happy Labor Day to you! From the world of HPC-AI, we offer a rapid (5:57) review of recent news, including: Nvidia and Blackwell push technology’s limits, big AI inference numbers ….

Cerebras Claims Fastest AI Inference

AI compute company Cerebras Systems today announced what it said is the fastest AI inference solution. Cerebras Inference delivers 1,800 tokens per second for Llama3.1 8B and 450 tokens per second for Llama3.1 70B, according to the company, making it 20 times faster than GPU-based solutions in hyperscale clouds.
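The release quotes 1,800 and 450 tokens per second and a 20x advantage over GPU-based cloud solutions; the GPU baselines those figures imply are not stated, but they follow from simple division. A minimal sketch (the `implied_baseline` helper is ours, not Cerebras’s, and assumes the 20x factor applies to both model sizes):

```python
# Back-of-envelope check of the throughput figures quoted above.
# The 1,800 and 450 tokens/s numbers and the "20x" factor come from the
# press release; the implied GPU baselines below are derived, not reported.

def implied_baseline(tokens_per_sec: float, speedup: float) -> float:
    """Throughput a GPU-based solution would need given the claimed speedup."""
    return tokens_per_sec / speedup

llama_8b_gpu = implied_baseline(1800, 20)   # implied baseline for Llama3.1 8B
llama_70b_gpu = implied_baseline(450, 20)   # implied baseline for Llama3.1 70B
print(llama_8b_gpu, llama_70b_gpu)          # 90.0 22.5
```

At face value the claim implies GPU cloud baselines of roughly 90 tokens/s (8B) and 22.5 tokens/s (70B) per request.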

HPC News Bytes 20240617: Controlling AI, Big Tech Invests in Taiwan, A Step for Specialty AI Chips, Alternative Energy for AI Data Centers, Apple Silicon

A happy summer solstice week to you! The HPC-AI ecosystem produced a raft of news and insights over the past week; here’s a rapid (7:59) review: a new paper in Science on controlling AI from an all-star lineup of strategists; Big Tech invests in Taiwan; Chip Wars and the AI chip landscape; specialty AI chips take a step forward; AI data centers seek new energy sources; Apple silicon in the data center?

Cerebras to Collaborate with Dell and AMD on AI Model Training

Specialty AI processor company Cerebras Systems today announced a collaboration with Dell Technologies to deliver AI compute infrastructure for generative AI. It includes a new memory storage solution powered by Dell and AMD ….

HPC News Bytes 20240520: Expanding Exascale Club; Nvidia ‘Paints It Green’ at ISC; Specialty AI Chip News; Student Cluster Competition

A happy May morning to you! In the aftermath of ISC 2024 in Hamburg, we offer this quick (6:21) run-through of recent news from the world of HPC-AI, including ….

Cerebras: Wafer Scale Engine Outperforms Frontier Supercomputer on Molecular Dynamics Simulations

SUNNYVALE, Calif. – May 15, 2024 – Accelerated generative AI chip company Cerebras Systems, in collaboration with researchers from Sandia, Lawrence Livermore, and Los Alamos National Laboratories, said it has achieved a breakthrough in molecular dynamics (MD) simulations using the second-generation Cerebras Wafer Scale Engine (WSE-2). Researchers performed atomic-scale simulations at the millisecond […]

ALCF Announces Online AI Testbed Training Workshops

The Argonne Leadership Computing Facility has announced a series of training workshops to introduce users to the novel AI accelerators deployed at the ALCF AI Testbed. The four individual workshops will introduce participants to the architecture and software of the SambaNova DataScale SN30, the Cerebras CS-2 system, the Graphcore Bow Pod system, and the GroqRack […]