University of Edinburgh Installs Cerebras AI Cluster

The University of Edinburgh has announced the installation of an AI cluster consisting of four CS-3 systems built on Cerebras’s 3rd-generation Wafer Scale Engine processors. Operated by the Edinburgh Parallel Computing Centre (EPCC) at the university’s Edinburgh International Data Facility, the new cluster follows previous Cerebras CS-1 and CS-2 systems. It is part of the […]

HPC News Bytes 20250407: Tariffs and the Technology Industry, Intel-TSMC Deal?, DARPA Taps Wafer Scale, Sandia and Laser-Based Cooling, Optical I/O News

Good April day to you! It was a wild week for more than just the HPC-AI sector; here’s a brief (7:39) look at some key developments: U.S. tariffs, the technology sector and advanced chips, Intel-TSMC ….

DARPA Taps Cerebras and Ranovus for Military and Commercial Platform

AI compute company Cerebras Systems said it has been awarded a new contract from the Defense Advanced Research Projects Agency (DARPA) to develop a system combining its wafer scale technology with the wafer scale co-packaged optics of Ottawa-based Ranovus to deliver ….

HPC News Bytes 20241118: On the Scene at SC24, TSMC and the CHIPS Act, Cerebras at Sandia

Good morning to you from SC24 in Atlanta. Here’s a quick (5:33) run-through of recent developments in the world of HPC-AI, including: The SC24 conference starts today; TSMC, the CHIPS Act and semiconductor demand; Sandia National Labs teams with Cerebras.

Sandia: Molecular Dynamics Simulation Record Breakers Nominated for Gordon Bell Prize

Sandia National Laboratories announced today a new speed record in molecular dynamics simulation. A collaborative research team ran simulations using the Cerebras Wafer Scale Engine (WSE) processor and “raced past the maximum speed achievable on the world’s ….

Aramco and Cerebras Sign AI MoU

SUNNYVALE, Calif. & RIYADH, Saudi Arabia – Cerebras Systems today announced the signing of a memorandum of understanding with Aramco under which the two companies aim to bring high-performance AI inference to industries, universities, and enterprises in Saudi Arabia. Aramco plans to build, train and deploy large language models using Cerebras’ CS-3 systems. Aramco’s new high-performance […]

HPC News Bytes Podcast 20240902: Nvidia and Blackwell Hit Limits, Revved-Up AI Inference and Oncoming Regulations, HPC in Russia

Happy Labor Day to you! From the world of HPC-AI, we offer a rapid (5:57) review of recent news, including: Nvidia and Blackwell push technology’s limits, big AI inference numbers ….

Cerebras Claims Fastest AI Inference

AI compute company Cerebras Systems today announced what it said is the fastest AI inference solution. Cerebras Inference delivers 1,800 tokens per second for Llama3.1 8B and 450 tokens per second for Llama3.1 70B, according to the company, making it 20 times faster than GPU-based solutions in hyperscale clouds.

HPC News Bytes 20240617: Controlling AI, Big Tech Invests in Taiwan, A Step for Specialty AI Chips, Alternative Energy for AI Data Centers, Apple Silicon

A happy summer solstice week to you! The HPC-AI ecosystem produced a raft of news and insights over the past week; here’s a rapid (7:59) review: a new paper in Science on controlling AI from an all-star lineup of strategists; Big Tech invests in Taiwan; Chip Wars and the AI chip landscape; specialty AI chips take a step forward; AI data centers seek new energy sources; Apple silicon in the data center?

Cerebras to Collaborate with Dell and AMD on AI Model Training

Specialty AI processor company Cerebras Systems today announced a collaboration with Dell Technologies to deliver AI compute infrastructure for generative AI. It includes a new memory storage solution powered by Dell and AMD ….