GigaIO Extends Next-Generation Network to Storage Systems


Today GigaIO introduced the FabreX implementation of the Non-Volatile Memory Express over Fabrics (NVMe-oF) architecture, streamlining NVMe network communication and large-scale storage sharing with industry-leading low latency and high bandwidth.

NVMe is an open interface for accessing non-volatile storage media attached via Peripheral Component Interconnect Express (PCIe). This scalable interface is designed for use in PCIe-based systems, providing expedited access to direct-attached solid-state drives (SSDs). Similarly, NVMe-oF is used for network communication between multiple hosts, extending the NVMe architecture across fabric technologies to enable expedited access between servers and network-attached storage systems.
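To make the host-side mechanics concrete, the minimal sketch below shows how a Linux host typically discovers and attaches an NVMe-oF target using the standard nvme-cli tool. The address, port, and subsystem NQN are hypothetical placeholders, and the RDMA transport is shown only as a common example; this is not GigaIO's FabreX-specific procedure.

```python
#!/usr/bin/env python3
"""Minimal sketch: attach an NVMe-oF target from a Linux host with the
standard nvme-cli tool. Address, port, and NQN are hypothetical
placeholders, not GigaIO-specific values."""

import subprocess

TARGET_ADDR = "192.168.1.100"  # hypothetical target IP address
TARGET_PORT = "4420"           # conventional NVMe-oF service port
SUBSYS_NQN = "nqn.2019-08.com.example:nvme-target"  # hypothetical NQN

# Discover the subsystems exported by the target (RDMA transport shown
# here as an example; a native PCIe fabric avoids this conversion).
subprocess.run(
    ["nvme", "discover", "-t", "rdma", "-a", TARGET_ADDR, "-s", TARGET_PORT],
    check=True,
)

# Connect to a discovered subsystem; the remote drive then appears to
# the host as a local block device (e.g. /dev/nvme1n1).
subprocess.run(
    ["nvme", "connect", "-t", "rdma", "-n", SUBSYS_NQN,
     "-a", TARGET_ADDR, "-s", TARGET_PORT],
    check=True,
)
```

Once connected, the remote namespace is addressed exactly like a direct-attached drive, which is the property the article's performance comparison turns on.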

“FabreX is the next-generation PCIe-based network, and with our new NVMe-oF support, we bring the industry’s lowest latency and highest throughput to data centers for quick and efficient data transfers,” says Alan Benjamin, CEO of GigaIO. “With FabreX and NVMe-oF, storage located in a server, directly attached to a server, or attached across a network delivers identical performance, allowing customers to choose their preferred storage architectures and experience the same quality of service to meet desired service level objectives.”

GigaIO’s implementation of FabreX with NVMe-oF allows devices and data to remain on native PCIe networks and NVMe protocols, respectively, sustaining optimum performance. In contrast, other NVMe-oF options convert device connections from PCIe to Ethernet, Fibre Channel, or InfiniBand fabrics, and convert data paths from NVMe to the RoCE or iWARP protocols. These conversions create unavoidable inefficiencies that significantly lower performance. For example, NVMe-oF storage latency over Ethernet or Fibre Channel fabrics is customarily measured in microseconds, whereas over FabreX’s native PCIe network it is best measured in nanoseconds.
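To put those units in perspective, a simple host-side microbenchmark can time individual 4 KiB reads against any NVMe block device, whether local or fabric-attached. The sketch below assumes a hypothetical device path and uses O_DIRECT (which requires an aligned buffer) so the drive and fabric dominate the measurement rather than the page cache; it is an illustration of how such latencies are measured, not a GigaIO benchmark.

```python
#!/usr/bin/env python3
"""Minimal sketch: time individual 4 KiB direct reads from an NVMe
block device to see where its latency lands. The device path is a
hypothetical placeholder; run as root on Linux."""

import mmap
import os
import statistics
import time

DEV = "/dev/nvme0n1"  # hypothetical device (local or NVMe-oF attached)
BLOCK = 4096          # 4 KiB, a typical NVMe I/O size
SAMPLES = 1000

# O_DIRECT bypasses the page cache so timings reflect the device and
# fabric, not host memory. It requires an aligned buffer; an anonymous
# mmap is page-aligned, which satisfies that requirement.
fd = os.open(DEV, os.O_RDONLY | os.O_DIRECT)
buf = mmap.mmap(-1, BLOCK)

lat_us = []
for i in range(SAMPLES):
    offset = i * BLOCK  # block-aligned offsets, as O_DIRECT requires
    t0 = time.perf_counter_ns()
    os.preadv(fd, [buf], offset)
    lat_us.append((time.perf_counter_ns() - t0) / 1000)

os.close(fd)
print(f"median 4 KiB read latency: {statistics.median(lat_us):.1f} us")
```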

The benefits of a native NVMe-oF implementation with FabreX include unparalleled low latency and high throughput, extended direct data exchange, and reduced central processing unit (CPU) utilization. Additionally, FabreX with NVMe-oF allows direct data placement between NVMe drives and the memory of remote host systems, eliminating intermediate buffers and copies. This enables rapid communication between local host memory and remote NVMe subsystems, outperforming all other fabrics on the market.
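FabreX performs that placement in hardware via DMA between the drive and remote host memory; as a loose application-level analogy of the same zero-copy principle, the sketch below reads into one preallocated buffer instead of allocating and copying a fresh buffer per call. The device path is again a hypothetical placeholder, and this is an analogy, not the GigaIO implementation.

```python
#!/usr/bin/env python3
"""Minimal sketch of the zero-copy idea at the application level: fill
a single preallocated buffer in place on every read, rather than
letting each read allocate and copy a new bytes object."""

import os

DEV = "/dev/nvme0n1"  # hypothetical placeholder path
CHUNK = 1 << 20       # 1 MiB per read

fd = os.open(DEV, os.O_RDONLY)
buf = bytearray(CHUNK)     # allocated once, reused for every read
view = memoryview(buf)

total = 0
for _ in range(16):
    # readv fills the caller's buffer in place -- no intermediate
    # per-call allocation or copy on the application side.
    n = os.readv(fd, [view])
    if n == 0:
        break
    total += n

os.close(fd)
print(f"read {total} bytes through a single reusable buffer")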

“With the explosion of data that customers are now gathering, and the subsequent need to analyze that data and turn it into meaningful information to drive their business, companies tell us they have a burning need for higher performance and better utilization of their compute, network and storage investments,” continues Benjamin. “Often, the biggest complaint is the legacy networks that tie their systems together. The truth is the legacy networks were never designed for the rack-scale systems people now need and use, and while they are trying their best to respond, a fresh approach to creating these systems is needed.”

FabreX is a cutting-edge network architecture for compute and storage infrastructure. The platform delivers hyper-performance with unparalleled low latency and flexibility, allowing data centers to boost their throughput while driving down both their capital and operating costs for an attractive total cost of ownership.

In this video from ISC 2019, Marc Lehrer from GigaIO describes the company’s innovative HPC interconnect technology based on PCIe Gen 4.

FabreX for NVMe-oF will become a standard part of the GigaIO Leader FX/OS software package and will be released for general availability by the end of September 2019. For sales information, please contact info@gigaio.com. To learn more about FabreX with NVMe-oF, download the white paper “New Frontiers in NVMe-oF”.

GigaIO will demonstrate FabreX with NVMe-oF on August 6-8 at the 2019 Flash Memory Summit, booth #1045 in Santa Clara.
