GigaIO Wins Most Innovative New Product Award for Big Data

Today GigaIO announced that the company’s FabreX technology has been selected as the winner of Connect’s Most Innovative New Product Award for Big Data. The Most Innovative New Product Awards competition is held annually and recognizes San Diego industry leaders for their groundbreaking contributions to the technology and life sciences sectors.

FabreX is a cutting-edge network architecture that drives the performance of data centers and high-performance computing environments. Featuring a unified, software-driven composable infrastructure, the fabric dynamically assigns resources to streamline application deployment, meeting the growing demands of data-intensive workloads such as Artificial Intelligence and Deep Learning. FabreX adheres to industry-standard PCI Express (PCIe) technology and integrates computing, storage, and input/output (I/O) communication into a single-system cluster fabric for flawless server-to-server communication. Optimized with GPUDirect RDMA (GDR) and NVMe-oF, FabreX enables a server to access the system memories of all other servers in the cluster directly, supporting native host-to-host communication to create the industry’s first in-memory network.
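To make the composability idea concrete, here is a minimal Python sketch of how an orchestration script might ask a fabric manager to bind pooled PCIe devices to a host. The endpoint, URL, payload fields, and device names are all hypothetical illustrations of the general pattern, not GigaIO’s actual API.

```python
import requests

# Hypothetical fabric-manager endpoint -- illustrative only, not GigaIO's actual API.
FABRIC_MANAGER = "https://fabric-manager.example.com"

def compose_node(host_id: str, gpu_ids: list[str], nvme_ids: list[str]) -> dict:
    """Ask the fabric manager to attach disaggregated PCIe devices to one host.

    In a composable PCIe fabric, GPUs and NVMe drives live in pooled expansion
    chassis rather than inside a server; the fabric manager reprograms the
    switch fabric so the chosen devices appear to the host as locally attached.
    """
    payload = {
        "Host": host_id,          # server receiving the resources
        "Accelerators": gpu_ids,  # pooled GPUs to attach
        "Storage": nvme_ids,      # pooled NVMe drives to attach
    }
    resp = requests.post(f"{FABRIC_MANAGER}/compose", json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # Attach two pooled GPUs and one NVMe drive to server "node01" for the
    # duration of a job; releasing them back to the pool is the mirror call.
    node = compose_node("node01", ["gpu03", "gpu04"], ["nvme12"])
    print("Composed node:", node)
```

The design point this sketch illustrates is that composition is a software operation: resources move between servers by reconfiguring the fabric, without recabling or rebooting the chassis.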

“FabreX was developed to address today’s high volume of network data. The technology allows users to build large, high-performance server solutions and configure a simplified multi-server network without sacrificing performance,” says Alan Benjamin, CEO of GigaIO. “Each year, Connect’s Most Innovative New Product Awards draws the most distinguished innovators in San Diego, so it is an amazing honor to win the Big Data category. GigaIO would like to extend a warm thank you to Connect’s CEO Mike Krenn and the rest of his team.”

In this video from SC19, Alan Benjamin from GigaIO describes how the company’s FabreX architecture integrates computing, storage, and I/O into a single-system, PCIe-based cluster fabric for flawless server-to-server communication and true cluster-scale networking.
