Video: GigaIO Optimizes FabreX Fabric with GPU Sharing and Composition Technology


Alan Benjamin is CEO and President of GigaIO.

In this video from SC19, Alan Benjamin from GigaIO describes how the company’s FabreX Architecture integrates computing, storage and I/O into a single-system, PCIe-based cluster fabric for flawless server-to-server communication and true cluster-scale networking.

At the show, GigaIO announced the FabreX implementation of GPU Direct RDMA (GDR) technology, accelerating communication between GPUs and storage devices with the industry’s highest throughput and lowest latency.

“It is imperative for the supercomputing community to have a system architecture that can handle the compute-intensive workloads being deployed today,” says Alan Benjamin, CEO of GigaIO. “Our team has created that solution with FabreX, which offers unparalleled composability and the lowest hardware latency on the market. Moreover, incorporating GDR technology only enhances the fabric’s cutting-edge capabilities – delivering accelerated performance and increased scalability for truly effortless composing. Combining our new GDR support with our previously announced NVMe-oF capabilities, we are excited to bring real composition without compromise to our customers.”

FabreX adheres to industry-standard PCI Express (PCIe) technology, integrating computing, storage and input/output (I/O) communication into a single-system cluster fabric for flawless server-to-server communication and true cluster-scale networking. The fabric supports all hardware and software resources so users can construct a cluster ideally suited to their needs. Additionally, all software environments, frameworks and applications can build on an unprecedented hardware latency of 200 nanoseconds point to point, enabling virtualization across both compute and storage to deliver dramatically reduced cost, reduced power consumption and superior overall performance. Optimized with GDR, FabreX facilitates direct memory access by a server to the system memories of all other servers in the cluster, enabling native host-to-host communication to create the industry’s first in-memory network.
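For readers less familiar with GPU Direct RDMA, the sketch below illustrates the general pattern the technology enables on commodity RDMA stacks: GPU memory is registered directly with an RDMA-capable device so peers can read and write it without staging through host buffers. This is a minimal, hedged example using the standard CUDA runtime and libibverbs APIs; it is not GigaIO's FabreX API, and it assumes a system with an RDMA NIC and GPUDirect RDMA support (e.g. the nvidia-peermem kernel module) installed.

```c
/*
 * Minimal sketch of the general GPUDirect RDMA pattern (not FabreX-specific):
 * allocate GPU memory with CUDA, then register it with an RDMA-capable device
 * via libibverbs so remote peers can DMA directly to/from the GPU, bypassing
 * host bounce buffers. Error handling is abbreviated for clarity.
 */
#include <stdio.h>
#include <cuda_runtime.h>
#include <infiniband/verbs.h>

int main(void) {
    const size_t len = 1 << 20;                 /* 1 MiB GPU buffer */
    void *gpu_buf = NULL;
    cudaMalloc(&gpu_buf, len);                  /* device memory to expose via RDMA */

    int num_devs = 0;
    struct ibv_device **devs = ibv_get_device_list(&num_devs);
    if (!devs || num_devs == 0) { fprintf(stderr, "no RDMA devices found\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* With GPUDirect RDMA support installed, a GPU pointer can be registered
     * like host memory; the NIC then transfers straight to/from GPU memory. */
    struct ibv_mr *mr = ibv_reg_mr(pd, gpu_buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) { fprintf(stderr, "ibv_reg_mr on GPU memory failed\n"); return 1; }

    printf("registered GPU buffer: rkey=0x%x\n", mr->rkey);

    /* A real transfer would exchange the rkey and buffer address with a peer
     * and post RDMA read/write work requests; that step is omitted here. */
    ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    cudaFree(gpu_buf);
    return 0;
}
```

The key point of the pattern is the single ibv_reg_mr call on a device pointer: once the registration succeeds, remote access proceeds exactly as it would for host memory, which is the kind of direct host-to-host and device-to-device path the FabreX GDR announcement is describing at the fabric level.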

See our complete coverage of SC19

Sign up for our insideHPC Newsletter