Genesis Mission: DOE Announces Agreements with 24 Organizations

The U.S. Department of Energy announced agreements with 24 organizations interested in collaborating to advance the Genesis Mission, a national effort to use the power of artificial intelligence to support discovery science, national security and energy innovation. The organizations that have signed memorandums of understanding (MOUs) as of today have either expressed interest to DOE […]

‘Cerebras for Nations’ Initiative Designed to Accelerate Sovereign AI

SUNNYVALE, Calif. — AI compute company Cerebras Systems today announced the “Cerebras for Nations” global program to help governments build, accelerate, and scale their sovereign AI initiatives. Under the initiative, Cerebras will engage with international partner governments and their private sector datacenter, cloud, and AI ecosystems to advance three pillars of sovereign AI: 1) Co-design […]

Cerebras Raises $1.1B at $8.1B Valuation

Cerebras Systems’ dinner-plate-size chip technology has been a curiosity since the company introduced the Wafer Scale Engine in 2019. But it has become more than a curiosity to the AI industry, and to the venture community. Today, Cerebras announced an oversubscribed $1.1 billion Series G funding round at an $8.1 billion valuation. This as […]

Cerebras Reports 3,000 Tokens Per Second Inference on OpenAI gpt-oss-120b Model

Cerebras Systems today announced inference support for gpt-oss-120b, OpenAI’s first open-weight reasoning model, running at a record inference speed of 3,000 tokens per second on the Cerebras AI Inference Cloud, according to ….

AI Inference: Meta Teams with Cerebras on Llama API

Meta has teamed with Cerebras on AI inference in Meta’s new Llama API, combining Meta’s open-source Llama models with inference technology from Cerebras. Developers building on the Llama 4 Cerebras model in the API can expect speeds up to 18 times faster than traditional GPU-based solutions ….

HPC News Bytes 20250407: Tariffs and the Technology Industry, Intel-TSMC Deal?, DARPA Taps Wafer Scale, Sandia and Laser-Based Cooling, Optical I/O News

Good April day to you! It was a wild week for more than the HPC-AI sector. Here’s a brief (7:39) look at some key developments: U.S. tariffs, the technology sector and advanced chips, Intel-TSMC ….

University of Edinburgh Installs Cerebras AI Cluster

The University of Edinburgh has announced the installation of an AI cluster consisting of four CS-3s using Cerebras’s third-generation Wafer Scale Engine processors. Operated by the Edinburgh Parallel Computing Centre (EPCC) at the university’s Edinburgh International Data Facility, the new cluster follows previous Cerebras CS-1 and CS-2 systems. It is part of the […]

DARPA Taps Cerebras and Ranovus for Military and Commercial Platform

AI compute company Cerebras Systems said it has been awarded a new contract from the Defense Advanced Research Projects Agency (DARPA) to develop a system combining its wafer-scale technology with wafer-scale co-packaged optics from Ottawa-based Ranovus to deliver ….