Cerebras Scores $10B Deal with OpenAI

Since Cerebras came on the scene in 2019 with its unusual dinner plate-size wafer-scale chip, there was always the potential for a break-out moment when someone or something would elevate the lightning-fast processor ….

Cerebras Raises $1.1B at $8.1B Valuation

Cerebras Systems’ dinner plate-size chip technology has been a curiosity since the company introduced the Wafer Scale Engine in 2019. But it’s become more than just a curiosity to the AI industry — and to the venture community. Today, Cerebras announced an oversubscribed $1.1 billion Series G funding round at an $8.1 billion valuation. This as […]

Cerebras Reports 3,000 Tokens Per Second Inference on OpenAI gpt-oss-120b Model

Cerebras Systems today announced inference support for gpt-oss-120b, OpenAI’s first open-weight reasoning model, running at record inference speeds of 3,000 tokens per second on the Cerebras AI Inference Cloud, according to ….

DARPA Taps Cerebras and Ranovus for Military and Commercial Platform

AI compute company Cerebras Systems said it has been awarded a new contract from the Defense Advanced Research Projects Agency (DARPA) to develop a system combining its wafer-scale technology with wafer-scale co-packaged optics from Ottawa-based Ranovus to deliver ….

Sandia: Molecular Dynamics Simulation Record Breakers Nominated for Gordon Bell Prize

Sandia National Laboratories announced today a new speed record in molecular dynamics simulation. A collaborative research team ran simulations using the Cerebras Wafer Scale Engine (WSE) processor and “raced past the maximum speed achievable on the world’s ….

Aramco and Cerebras Sign AI MoU

SUNNYVALE, Calif. & RIYADH, Saudi Arabia – Cerebras Systems today announced the signing of a memorandum of understanding with Aramco under which they aim to bring high performance AI inference to industries, universities, and enterprises in Saudi Arabia. Aramco plans to build, train and deploy large language models using Cerebras’ CS-3 systems. Aramco’s new high-performance […]

Cerebras Claims Fastest AI Inference

AI compute company Cerebras Systems today announced what it said is the fastest AI inference solution. Cerebras Inference delivers 1,800 tokens per second for Llama3.1 8B and 450 tokens per second for Llama3.1 70B, according to the company, making it 20 times faster than GPU-based solutions in hyperscale clouds.

Cerebras: Wafer Scale Engine Outperforms Frontier Supercomputer on Molecular Dynamics Simulations

SUNNYVALE, Calif. – May 15, 2024 – Accelerated generative AI chip company Cerebras Systems, in collaboration with researchers from Sandia, Lawrence Livermore, and Los Alamos National Laboratories, said it has achieved a breakthrough in molecular dynamics (MD) simulations using the second-generation Cerebras Wafer Scale Engine (WSE-2). Researchers performed atomic scale simulations at the millisecond […]

Cerebras and UAE-based G42 Announce Condor Galaxy AI Supercomputer, Offered as Cloud Service

July 20, 2023 — AI technology company Cerebras Systems and partner G42, a UAE-based technology holding group, have announced Condor Galaxy, a cloud-based network of nine interconnected supercomputers. The first AI supercomputer on this network, Condor Galaxy 1 (CG-1), is optimized for large language models and generative AI and delivers 4 exaFLOPs of 16 […]

PSC’s Neocortex HPC Upgrades to Cerebras CS-2 AI Systems

The Neocortex high-performance AI computer at the Pittsburgh Supercomputing Center (PSC) has been upgraded with two new Cerebras CS-2 systems powered by the second-generation Wafer Scale Engine (WSE-2) processor. PSC said the WSE-2 doubles the system’s cores and on-chip memory and adds a new execution mode designed for extreme-scale deep learning tasks, including larger model […]