Published reports state that TSMC (Taiwan Semiconductor Manufacturing Company) may begin commercial production of specialized supercomputer AI chips within two years, an outgrowth of the company's customized fabrication of the Wafer Scale Engine (WSE) developed by AI start-up Cerebras Systems.
Last August, Cerebras unveiled the WSE (price: US$2 million), which it said is the largest chip ever built. Optimized for AI workloads, the WSE contains more than 1.2 trillion transistors and measures 46,225 square millimeters, making it 56.7 times larger than the largest graphics processing unit, which measures 815 square millimeters and contains 21.1 billion transistors. Cerebras also said the WSE has 3,000 times more high-speed, on-chip memory and 10,000 times more memory bandwidth.
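As a quick back-of-envelope check of the size comparison above, the 56.7x figure follows directly from the die areas Cerebras reported; the short, purely illustrative Python sketch below reproduces the arithmetic.

```python
# Illustrative check of the die-size comparison quoted above.
# Figures are those reported by Cerebras; the ratio is simple arithmetic.
wse_area_mm2 = 46_225        # Cerebras Wafer Scale Engine die area
largest_gpu_area_mm2 = 815   # largest GPU die area cited for comparison

ratio = wse_area_mm2 / largest_gpu_area_mm2
print(f"WSE is {ratio:.1f}x the area of the largest GPU")  # ~56.7x
```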
Then last month, we reported that the Pittsburgh Supercomputing Center had won a $5 million award from the National Science Foundation to build Neocortex, an AI supercomputer integrating Cerebras WSE technology with Hewlett Packard Enterprise's shared-memory Superdome Flex hardware. The Neocortex architecture will incorporate two Cerebras CS-1 AI servers, each powered by a WSE processor designed for faster deep learning training and inferencing.
Coverage of the TSMC news today in DigiTimes by Julian Ho and Willis Ke stated that “though demand for extremely costly supercomputer AI chips remains quite limited, TSMC plans to enter commercial production of similar chips within two years….” The key is TSMC’s ability to improve its yield rates on its InFO_SoW (integrated fan-out system-on-wafer) IC scaling process.
As reported in DigiTimes, Nvidia’s Ampere GPU series and Fujitsu’s Fugaku supercomputer, the newly ranked no. 1 system in the world, have adopted TSMC’s Chip-on-Wafer-on-Substrate (CoWoS) packaging process.
The news fits into the larger trend of supercomputing processors becoming increasingly varied. Asked recently by this publication about future HPC trends, Dr. Jack Collins, director of the Advanced Biomedical Computing Center at the Frederick National Laboratory for Cancer Research, responded, “The answer to that is really, how do you define HPC? If HPC is the next thing out there over and above what is available to the general population, I think it’s going to get more heterogeneous and I think it’s going to get more specialized and it’s going to be not a single Cray that kind of fits everybody’s niche, but it’s going to be a bunch of different systems all hooked together.”
For further reading see coverage in ExtremeTech.