Personal ‘AI Supercomputer’ Runs 120B-Parameter LLMs On-Device, Tiiny AI Says
It’s often said that the supercomputers of a few decades ago packed less power than today’s smartwatches. Now a company, Tiiny AI Inc., claims to have built the world’s smallest personal AI supercomputer, one that can run a 120-billion-parameter large language model on-device, without cloud connectivity, servers or GPUs.
Siemens and GlobalFoundries to Collaborate on AI-Driven Chip Manufacturing
Siemens and GlobalFoundries (GF) said today they have entered a collaboration to use their AI capabilities to enhance semiconductor manufacturing and advanced industries. In a memorandum of understanding, the companies said they will focus on automation technologies for semiconductor fabrication, electrification, digital solutions, and software ranging from chip development to product lifecycle management. This […]
DOE Awards $320M for Genesis Mission, AI for Science
DOE said the awards will begin building the integrated American Science and Security Platform, a discovery engine designed to double the productivity and impact of American science and engineering investments within a decade.
Couchbase Announces GA of AI Platform with NVIDIA Integration for Agentic AI
SAN JOSE – Dec 10, 2025 – AI database developer platform company Couchbase today announced the general availability of Couchbase AI Services, a suite of capabilities for building and deploying agentic AI applications. By bringing together data and models in a single unified platform, Couchbase said it eliminates the complexity and fragmentation that has kept AI […]
Exxact Partners with VDURA Storage for AI and HPC Users
Fremont, CA – GPU server maker Exxact Corporation today announced a partnership with VDURA, an HPC and AI data infrastructure company, to deliver storage for modern GPU-accelerated compute. For engineers, researchers, and AI teams, slow or inconsistent storage is often the hidden cause of stalled progress and underutilized GPUs. Exxact and VDURA are working to eliminate those bottlenecks […]
Siemens and nVent to Release Liquid Cooling and Power Reference Architecture for AI Data Centers
Siemens and nVent are collaborating to develop a liquid cooling and power reference architecture for hyperscale AI workloads. The new joint architecture is designed to help build 100 MW hyperscale AI data centers housing large-scale, liquid-cooled AI infrastructure, such as NVIDIA GB200 NVL72 systems. It presents a Tier […]
Cornelis Networks and Supermicro Collaborate on Integrated AI-HPC Offering
Dec. 9, 2025: Cornelis Networks and Supermicro have announced that Supermicro’s FlexTwin server platforms are now validated with Cornelis’ CN5000 networking for AI and HPC clusters. Cornelis’ CN5000 400Gbps networking platform is designed to address communication bottlenecks by providing data movement between servers — a critical factor in large AI and HPC deployments. Supermicro’s FlexTwin […]
HPC News Bytes 20251208: Marvell’s Celestial AI-Optical I/O Buy, ASML’s U.S. EUV Laser Competitor, HPC and Parkinson’s Research at SDSC
A good December day to you! The world of HPC-AI generated a notably colorful array of developments last week; here’s a quick (9:12) run-through of recent news: Marvell in AI with Celestial AI ….
DDN Introduces AI Data Architecture, Addresses NAND Shortages
Chatsworth, CA — AI data platform provider DDN announced new capabilities across its EXA and Infinia product lines designed to enable organizations to enhance AI performance and GPU utilization even as global NAND shortages drive SSD prices up by 75–125 percent. DDN said these advancements position it as the only vendor capable of maintaining AI factory […]
Report: AI Back-End Networks Continue Shift to Ethernet
REDWOOD CITY, Calif. – Dec. 5, 2025 – According to a recently published report from industry analyst firm Dell’Oro Group, Ethernet accounted for more than two-thirds of data center switch sales in AI back-end networks both during the quarter and across the first three quarters of the year — up from less than half in […]