Today, every high-performance computing (HPC) workload running globally faces the same crippling issue: congestion in the network.
Congestion can delay completion times for crucial scientific and enterprise workloads, making HPC systems unpredictable and leaving high-cost cluster resources idle while they wait for delayed data to arrive. Despite various brute-force attempts to resolve it, the problem has persisted. Until now.
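To see why congestion-induced tail latency, rather than average latency, governs completion time, consider a toy model of a bulk-synchronous workload: each step finishes only when the slowest rank's data arrives. The sketch below uses purely illustrative numbers and distributions (not drawn from the paper) to show how a small fraction of congested messages inflates per-step completion time far beyond the median message latency.

```python
# Toy model (illustrative assumptions only, not from the paper):
# why tail latency, not median latency, governs completion time
# of synchronized HPC workloads.
import random

random.seed(0)

RANKS = 256        # processes that must all finish each step (assumed)
STEPS = 1000       # synchronized compute/communicate steps (assumed)

def message_latency_us():
    """Per-message latency: ~10 us typical, with a congestion-driven
    heavy tail (5% of messages delayed 10-50x). Purely illustrative."""
    base = random.gauss(10.0, 1.0)
    if random.random() < 0.05:          # occasional congestion event
        base *= random.uniform(10, 50)
    return max(base, 1.0)

# Each step ends only when the slowest rank's message arrives,
# so a single congested path stalls the entire step.
step_times = [max(message_latency_us() for _ in range(RANKS))
              for _ in range(STEPS)]

per_msg = sorted(message_latency_us() for _ in range(10_000))
print(f"median message latency : {per_msg[len(per_msg) // 2]:8.1f} us")
print(f"mean step completion   : {sum(step_times) / STEPS:8.1f} us")
```

With 256 ranks per step, even a 5% per-message congestion probability means almost every step hits the tail, so mean step completion lands more than an order of magnitude above the median message latency.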
In this paper, Matthew Williams, CTO at Rockport Networks, explains how recent innovations in networking technologies have led to a new network architecture that targets the root causes of HPC network congestion, specifically:
– Why today’s network architectures are not a sustainable approach to HPC workloads
– How HPC workload congestion and latency issues are directly tied to the network architecture
– Why a direct interconnect network architecture minimizes congestion and tail latency
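As rough intuition for the last point: a direct interconnect gives each flow several candidate paths, and simple probability shows how quickly path diversity shrinks the odds of hitting a congested path. The sketch below is a generic back-of-the-envelope argument with an assumed per-path congestion probability, not a description of Rockport's actual mechanism.

```python
# Toy probability sketch (assumption, not the vendor's actual design):
# if each candidate path is congested independently with probability p,
# a flow that can choose among k paths stalls only when all k paths
# are congested, so P(stall) drops from p to p**k.
p = 0.05                      # per-path congestion probability (assumed)
for k in (1, 2, 4, 8):
    print(f"paths={k}: P(flow stalls) = {p ** k:.2e}")
```

Real fabrics violate the independence assumption, but the direction of the effect is why multipath direct interconnects can sharply reduce congestion-driven tail latency.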
Dec. 9, 2025: Cornelis Networks and Supermicro have announced that Supermicro’s FlexTwin server platforms are now validated with Cornelis’ CN5000 networking for AI and HPC clusters. The Cornelis CN5000 400Gbps networking platform is designed to address communication bottlenecks by providing high-speed data movement between servers, a critical factor in large AI and HPC deployments. Supermicro’s FlexTwin […]
