Cerebras Systems: 2.6 Trillion Transistors in 7nm-based 2nd Gen Wafer Scale Engine


Cerebras Systems, maker of the world’s largest microprocessors, today unveiled what it said is the largest AI chip, the Wafer Scale Engine 2 (WSE-2), successor to the first WSE introduced in 2019. Built on a 7nm process, the WSE-2 exceeds Cerebras’ previous world record with 2.6 trillion transistors and 850,000 AI-optimized cores.

By comparison, according to Cerebras, the largest GPU has 54 billion transistors, while the WSE-2 has 123x more cores and 1,000x more high-performance on-chip memory than its GPU competitors.

The WSE-2 will power the Cerebras CS-2, the company’s AI computer, which more than doubles the performance of Cerebras’ first-generation CS-1. Manufactured by Taiwan Semiconductor Manufacturing Company (TSMC) on its 7nm node, the WSE-2 more than doubles every key performance characteristic of the chip over the first-generation WSE: transistor count, core count, memory, memory bandwidth and fabric bandwidth, Cerebras said.

“Less than two years ago, Cerebras revolutionized the industry with the introduction of WSE, the world’s first wafer scale processor,” said Dhiraj Mallik, VP Hardware Engineering, Cerebras Systems. “In AI compute, big chips are king, as they process information more quickly, producing answers in less time – and time is the enemy of progress in AI. The WSE-2 solves this major challenge as the industry’s fastest and largest AI processor ever made.”

“TSMC has long partnered with industry innovators to manufacture advanced processors with leading performance. We are pleased with the result of our continuous collaboration with Cerebras Systems in manufacturing the Cerebras WSE-2 on our 7nm process, another extraordinary accomplishment and milestone for wafer scale development after the introduction of the Cerebras 16nm WSE less than two years ago,” said Sajiv Dalal, SVP of Business Management, TSMC North America. “TSMC’s leading-edge technology, manufacturing excellence, and rigorous attention to quality enable us to meet Cerebras’ stringent defect density requirements and support them as they continue to unleash silicon innovation.”

With every component optimized for AI work, the CS-2 delivers more compute performance in less space and with less power, Cerebras said. “Depending on workload, from AI to HPC, CS-2 delivers hundreds or thousands of times more performance than legacy alternatives, and it does so at a fraction of the power draw and space,” the company said in its announcement. “A single CS-2 replaces clusters of hundreds or thousands of GPUs that consume dozens of racks, use hundreds of kilowatts of power, and take months to configure and program. At only 26 inches tall, the CS-2 fits in one-third of a standard data center rack.”

Over the past year, customers including Argonne National Laboratory, Lawrence Livermore National Laboratory, the Pittsburgh Supercomputing Center (PSC) for its groundbreaking Neocortex AI supercomputer, EPCC (the supercomputing centre at the University of Edinburgh), pharmaceutical leader GlaxoSmithKline, Tokyo Electron Devices, and others have deployed the Cerebras WSE and CS-1.

Cerebras WSE-2

“At GSK we are applying machine learning to make better predictions in drug discovery, so we are amassing data – faster than ever before – to help better understand disease and increase success rates,” said Kim Branson, SVP, AI/ML, GlaxoSmithKline. “Last year we generated more data in three months than in our entire 300-year history. With the Cerebras CS-1, we have been able to increase the complexity of the encoder models that we can generate, while decreasing their training time by 80x. We eagerly await the delivery of the CS-2 with its improved capabilities so we can further accelerate our AI efforts and, ultimately, help more patients.”

“As an early customer of Cerebras solutions, we have experienced performance gains that have greatly accelerated our scientific and medical AI research,” said Rick Stevens, Argonne National Laboratory Associate Laboratory Director for Computing, Environment and Life Sciences. “The CS-1 allowed us to reduce the experiment turnaround time on our cancer prediction models by 300x over initial estimates, ultimately enabling us to explore questions that previously would have taken years, in mere months. We look forward to seeing what the CS-2 will be able to do with more than double that performance.”

Cerebras has won numerous industry awards, including the Global Semiconductor Alliance’s (GSA) Startup To Watch, Fast Company’s Best World Changing Ideas, Fast Company’s World’s Most Innovative Companies, IEEE Spectrum’s Emerging Technology Awards, the Forbes AI 50 2020, HPCwire’s Readers’ and Editors’ Choice Awards, and the CB Insights AI 100.

“In August 2019, Cerebras delivered the Wafer Scale Engine (WSE), the only trillion-transistor processor in existence at that time,” said Linley Gwennap, principal analyst, The Linley Group. “Now the company has doubled this success with the WSE-2, which pushes the transistor count to 2.6 trillion. This is a great achievement, especially when considering that the world’s third largest chip is 2.55 trillion transistors smaller than the WSE-2.”