Just ahead of NVIDIA’s big GTC conference some weeks ago, San Jose-based Aviz Networks announced a suite of software-based management and monitoring tools that support NVIDIA’s Spectrum-X Ethernet networking platform. Spectrum-X originated with NVIDIA’s 2019 acquisition of Mellanox, which knows a thing or two about high-performance networks.
Spectrum-X’s HPC-class performance makes it well-suited for compute- and data-intensive AI infrastructures, and Aviz’s support for the network is central to the company’s mission of making “Networks for AI and AI for Networks.” The company’s ONES (Open Networking Enterprise Suite) delivers AI network orchestration for Spectrum-X multi-tenant AI fabrics. Built with an agentless architecture and containerized microservices, Aviz ONES is vendor-agnostic and designed to simplify network design, deployment, monitoring and scaling for heterogeneous AI-driven environments.
Aviz was founded to modernize and transform networking software solutions while addressing the demands of data centers, edge and GPU networks as they scale and integrate AI. By being vendor-agnostic, Aviz provides enterprises with the flexibility of hardware choices and operational control.
These qualities are critical as AI infrastructures become bigger, more complex and more heterogeneous – and as organizations implementing AI at scale strive to avoid vendor lock-in. Aviz designed ONES to respond to these needs: using ONES, network managers can monitor Spectrum-X Ethernet networks from a single command line.
ONES also supports the SONiC network operating system and can manage multiple vendors’ switches, including those from NVIDIA, Arista, Cisco and white box suppliers.
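To make the “vendor-agnostic” claim concrete, here is a minimal sketch – not ONES code and not based on its actual APIs – of one core chore such a tool has to handle: taking switch counters that arrive in different vendor dialects and normalizing them into a single schema that one CLI or dashboard can report on. All field names and sample payloads below are hypothetical.

```python
# Conceptual sketch (not ONES code): vendor-agnostic monitoring means mapping
# per-vendor counter formats onto one common schema, so NVIDIA, Arista, Cisco,
# and white-box switches can all be watched from the same place.
from dataclasses import dataclass

@dataclass
class PortStats:
    switch: str
    port: str
    rx_bytes: int
    tx_bytes: int
    drops: int

def normalize(vendor: str, raw: dict) -> PortStats:
    """Map a vendor-specific counter payload onto the common schema."""
    if vendor == "sonic":   # SONiC-style keys (illustrative only)
        return PortStats(raw["hostname"], raw["ifname"],
                         raw["rx_ok_bytes"], raw["tx_ok_bytes"], raw["rx_drops"])
    if vendor == "eos":     # Arista EOS-style keys (illustrative only)
        return PortStats(raw["switchName"], raw["interface"],
                         raw["inOctets"], raw["outOctets"], raw["inDiscards"])
    raise ValueError(f"unsupported vendor: {vendor}")

# Example payloads as an agentless collector might receive them:
samples = [
    ("sonic", {"hostname": "leaf-01", "ifname": "Ethernet0",
               "rx_ok_bytes": 10_500_000, "tx_ok_bytes": 9_800_000, "rx_drops": 0}),
    ("eos",   {"switchName": "leaf-02", "interface": "Ethernet1",
               "inOctets": 7_200_000, "outOctets": 7_900_000, "inDiscards": 12}),
]
for vendor, raw in samples:
    print(normalize(vendor, raw))
```

The payoff of this normalization step is operational rather than clever: once every switch reports into one schema, the same monitoring and automation logic works across a mixed fleet, which is exactly the lock-in-avoidance argument Aviz is making.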
We sat down with Aviz Networks’ co-founder and CEO Vishal Shukla to discuss Aviz’s vision. The man has the entrepreneur’s characteristic passion for and encyclopedic knowledge of the technology arena he competes in. According to Shukla, Aviz and its mission are resonating with Fortune 1000 companies (he can’t name names for now) seeking more open, flexible and manageable network infrastructures.

Vishal Shukla, Aviz Networks
Shukla began his career in 2004 working in R&D at Cisco, and after stints at IBM, Mellanox, Nvidia and MobileIron, he took the plunge in early 2022 and co-founded Aviz, building a data-centric networking stack for open, cloud, AI-first networks – what he and his Aviz colleagues call “Networking 3.0.”
“The DNA of Aviz Networks actually goes back to the work I did at Mellanox between 2016 and 2019, until the company got acquired by Nvidia,” Shukla told us. “The idea was that on one side, there was disaggregation of the networks going on. And on the other side, you had hyperscalers using the open source network operating system, and they developed their own in-house end-to-end networking stack for managing it. And customers started asking, ‘Hey, if Microsoft and other hyperscalers can do it and you’re helping them, then why can’t you do it for us?’ So that was the point.”
Networks for “Big AI,” i.e. AI at scale, have unique and highly demanding requirements. Alan Weckel at technology industry analyst firm 650 Group recently reported that, “We can’t just use traditional networking. 2024 showed this with the rapid growth in InfiniBand, significant enhancements and purpose-built Ethernet products, and the start in deployment for scale-up networks. Each network element is designed for a specific task within the AI cluster.”
Weckel stated that both training and inference need specifically designed networks to maximize the user/application experience and monetize investments in GPUs/XPUs. “Suboptimal networking can waste billions of dollars in processor cycles or require expensive restarts,” he said.
Weckel also reported on a new development in AI factory compute: “… we saw scale-up networks escape the server enclosure. NVIDIA’s NVL72 allowed us to get rack-level scale-up for the first time beyond specialized supercomputers. … Based on 650 Group’s 4Q’24 Networking AI report, we are projecting scale-up networking to more than double in 2025 and exceed $10B in 2028. We project NVLink to be the most common technology but also forecast UALink, PCIe, and Ethernet.”
As for scale-out infrastructure, Weckel stated that it’s “…rapidly moving towards Ethernet and is the key driver of 800G growth in 2025. While InfiniBand remains a key technology, by the end of 2025, Ethernet will be the dominant technology for scale-out, and we will start to see the early 1.6T ramp.”
The new world of scale-out/up AI networks is highly complex, even daunting. That’s where Aviz’s ONES comes in. Its support for NVIDIA Spectrum-X is designed to streamline the AI fabric lifecycle — from Day 0 planning and deployment to Day 2 operations.
ONES is designed to automate network provisioning for lossless fabrics and to provide detailed telemetry for Spectrum-X congestion control mechanisms, ensuring optimized GPU utilization. The goal: enable enterprises to gain predictable AI performance, which of course is crucial for scaling AI workloads efficiently.
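As a rough illustration of what “automating provisioning for a lossless fabric” entails, the sketch below builds a desired-state record of PFC and ECN settings for each leaf switch – the kind of configuration that keeps RoCE traffic carrying GPU-to-GPU flows from dropping. The intent format, priority value, and thresholds are hypothetical and are not taken from ONES.

```python
# Conceptual sketch (not ONES code): provisioning a lossless AI fabric means
# pushing consistent PFC and ECN settings to every switch so RDMA traffic is
# never dropped under congestion. Values below are illustrative placeholders.
ROCE_PRIORITY = 3       # traffic class commonly reserved for RDMA traffic
ECN_MIN_KBYTES = 150    # example ECN marking thresholds
ECN_MAX_KBYTES = 1500

def lossless_intent(switch: str, ports: list[str]) -> dict:
    """Build one switch's desired-state record for a lossless AI fabric."""
    return {
        "switch": switch,
        "pfc": {"enabled_priorities": [ROCE_PRIORITY], "ports": ports},
        "ecn": {"min_kbytes": ECN_MIN_KBYTES, "max_kbytes": ECN_MAX_KBYTES,
                "marking": "enabled"},
    }

# A small hypothetical fabric: four leaves, eight ports each.
fabric = {f"leaf-{i:02d}": [f"Ethernet{p}" for p in range(8)] for i in range(1, 5)}
desired_state = [lossless_intent(sw, ports) for sw, ports in fabric.items()]

# An orchestrator would diff this desired state against live switch config and
# apply only the changes -- the Day 0 to Day 2 loop described above.
for record in desired_state:
    print(record["switch"], record["pfc"]["enabled_priorities"], record["ecn"])
```

The point of the desired-state pattern is that the fabric’s correctness is declared once and reconciled continuously, rather than hand-configured per switch, which is where the “predictable AI performance” claim comes from.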
“Lightning-fast transformation in groundbreaking technologies such as agentic AI highlights how crucial it is for every business to prioritize networking innovation to gain an edge,” said Gilad Shainer, SVP of networking, NVIDIA, when ONES was launched. “The NVIDIA Spectrum-X networking platform enables innovators like Aviz to provide an Ethernet-based multi-tenant AI networking solution that maximizes GPU performance while simplifying operations for enterprises.”