[SPONSORED GUEST CONTENT] Scheduled Ethernet is emerging as a viable alternative to InfiniBand for AI networking. Why? Because it offers comparable performance with greater flexibility and cost-effectiveness.
Re-Engineering Ethernet for AI Fabric
[SPONSORED GUEST ARTICLE] For years, InfiniBand has been the go-to networking technology for high-performance computing (HPC) and AI workloads due to its low latency and lossless transport. But as AI clusters grow to thousands of GPUs and demand open, scalable infrastructure, the industry is shifting. Leading AI infrastructure providers are increasingly moving ….
Re-Engineering Ethernet for AI Fabric
Ethernet wasn’t built with AI in mind. While cost-effective and ubiquitous, its best-effort, packet-based nature creates challenges in AI clusters… But fabric-scheduled Ethernet transforms Ethernet into a predictable, lossless, scalable fabric – ideal for AI. It uses cell spraying and virtual output queuing ….
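To make the cell-spraying and virtual output queuing (VOQ) idea concrete, here is a minimal Python sketch of an ingress port in a scheduled fabric. It is an illustrative toy model, not DriveNets' implementation; the cell size, port counts, and class/method names are assumptions for the example only.

```python
from collections import deque
from itertools import cycle

CELL_SIZE = 256  # bytes per fabric cell; an illustrative value, not a vendor spec

class ScheduledFabricPort:
    """Toy model of an ingress port in a fabric-scheduled Ethernet switch.

    Packets are segmented into fixed-size cells and sprayed evenly across
    all fabric links, while a virtual output queue (VOQ) per destination
    port keeps traffic to a congested port from blocking other traffic.
    """

    def __init__(self, num_fabric_links: int, num_egress_ports: int):
        self.fabric_links = [deque() for _ in range(num_fabric_links)]
        self.link_picker = cycle(range(num_fabric_links))
        # One VOQ per egress port: this is what prevents head-of-line blocking.
        self.voqs = [deque() for _ in range(num_egress_ports)]

    def enqueue(self, packet: bytes, egress_port: int) -> None:
        """Place a packet in the VOQ of its destination egress port."""
        self.voqs[egress_port].append(packet)

    def spray(self, egress_port: int) -> int:
        """Segment the next packet for `egress_port` into cells and spray
        them round-robin across the fabric links. Returns the cell count."""
        if not self.voqs[egress_port]:
            return 0
        packet = self.voqs[egress_port].popleft()
        cells = [packet[i:i + CELL_SIZE] for i in range(0, len(packet), CELL_SIZE)]
        for seq, cell in enumerate(cells):
            link = next(self.link_picker)
            # Tag each cell so the egress side can reorder and reassemble.
            self.fabric_links[link].append((egress_port, seq, cell))
        return len(cells)

# Example: a 4-link fabric serving 8 egress ports.
port = ScheduledFabricPort(num_fabric_links=4, num_egress_ports=8)
port.enqueue(b"\x00" * 1500, egress_port=3)
print(port.spray(egress_port=3))  # 6 cells of <=256 bytes, spread over 4 links
```

Because every packet is diced into cells and sprayed across all fabric links, load is balanced regardless of flow sizes, which is what makes the fabric behave predictably under AI traffic patterns.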
At ISC 2025: DriveNets Talks Ethernet-based AI Networking
In this interview, Kikozashvili looks at DriveNets’ AI Ethernet solution, used as a back-end network fabric for large GPU clusters and as a storage networking solution, and explains how it provides a high-performance Ethernet alternative to InfiniBand.
Seeking Ethernet Alternative to InfiniBand? Start with Performance!
[SPONSORED GUEST ARTICLE] When it comes to AI and HPC workloads, networking is critical. While this is already well known, the impact your networking fabric’s performance has on parameters like job completion time can ….
Ethernet-based AI Cluster Reference Guide
When building large-scale AI GPU clusters for training or inference, the back-end network should be high-performance, lossless, and predictable to ensure maximum GPU utilization. This is hard to achieve when using standard Ethernet for the back-end network. This guide showcases a high-level reference design for an 8,192-GPU cluster, describing how it can be achieved with […]
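As a rough illustration of the scale such a reference design has to handle, the back-of-envelope sizing below estimates the leaf and spine counts for an 8,192-GPU back-end fabric. The switch radix, NIC count, and 1:1 subscription ratio are assumptions for the arithmetic, not figures taken from the guide.

```python
# Back-of-envelope sizing for an 8,192-GPU back-end fabric (illustrative only).

GPUS = 8_192
SWITCH_RADIX = 64        # ports per switch (assumed)
NICS_PER_GPU = 1         # one back-end NIC per GPU (assumed)

nic_ports = GPUS * NICS_PER_GPU
downlinks_per_leaf = SWITCH_RADIX // 2          # half down, half up at 1:1 (non-oversubscribed)
leaves = -(-nic_ports // downlinks_per_leaf)    # ceiling division
uplinks = leaves * (SWITCH_RADIX - downlinks_per_leaf)
spines = -(-uplinks // SWITCH_RADIX)

print(f"leaf switches:  {leaves}")   # 256 leaves at 32 downlinks each
print(f"spine switches: {spines}")   # 128 spines to terminate 8,192 uplinks
```

Even this simplified arithmetic shows why the back-end network dominates cluster design: thousands of non-oversubscribed ports must behave as one lossless, predictable fabric to keep the GPUs busy.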