Today GigaIO announced that the company’s FabreX technology has been selected as the winner of Connect’s Most Innovative New Product Award for Big Data. The Most Innovative New Product Awards is an annual competition that recognizes San Diego industry leaders for their groundbreaking contributions to the technology and life sciences sectors. “FabreX is a cutting-edge network architecture that drives the performance of data centers and high-performance computing environments. Featuring a unified, software-driven composable infrastructure, the fabric dynamically assigns resources to streamline application deployment, meeting today’s growing demands of data-intensive programs such as Artificial Intelligence and Deep Learning. FabreX adheres to industry-standard PCI Express (PCIe) technology and integrates computing, storage and input/output (IO) communication into a single-system cluster fabric for flawless server-to-server communication. Optimized with GPU Direct RDMA (GDR) and NVMe-oF, FabreX facilitates direct memory access by a server to the system memories of all other servers in the cluster, enabling native host-to-host communication to create the industry’s first in-memory network.”
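The “in-memory network” claim rests on load/store semantics: one host reads bytes that another host stored, with no copy through a network stack. FabreX does this across servers over PCIe; as a hedged, single-host analogy only (not the FabreX API), Python’s standard `multiprocessing.shared_memory` shows the same idea between two handles to one memory region:

```python
from multiprocessing import shared_memory

# Create a shared region and store bytes into it -- the "writer" side.
shm = shared_memory.SharedMemory(create=True, size=16)
shm.buf[:5] = b"hello"

# A second handle attaches to the same region by name and loads the
# stored bytes directly, with no copy through a socket or driver stack.
peer = shared_memory.SharedMemory(name=shm.name)
assert bytes(peer.buf[:5]) == b"hello"

peer.close()
shm.close()
shm.unlink()
```

A memory fabric extends this load/store model across physically separate hosts, which is what distinguishes it from message-passing networks such as Ethernet.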
GigaIO Optimizes FabreX Architecture with GPU Sharing and Composition Technology
Today GigaIO announced the FabreX implementation of GPU Direct RDMA (GDR) technology, accelerating communication between GPUs and storage devices with the industry’s highest throughput and lowest latency. “It is imperative for the supercomputing community to have a system architecture that can handle the compute-intensive workloads being deployed today,” says Alan Benjamin, CEO of GigaIO. “Our team has created that solution with FabreX, which offers unparalleled composability and the lowest hardware latency on the market. Moreover, incorporating GDR technology only enhances the fabric’s cutting-edge capabilities – delivering accelerated performance and increased scalability for truly effortless composing. Combining our new GDR support with our previously announced NVMe-oF capabilities, we are excited to bring real composition without compromise to our customers.”
High Speed Data Capture for AI on the Fly Edge Applications
In many AI applications, transporting large amounts of data back to a remote datacenter is impractical and undesirable. With AI on the Fly, the entire AI workflow resides at the edge, at the data source. One Stop Systems’ Tim Miller explores how high-performance, scalable data acquisition is a fundamental, enabling component of this emerging paradigm.
GigaIO Extends Next-Generation Network to Storage Systems
Today GigaIO introduced the FabreX implementation of Non-Volatile Memory Express over Fabrics (NVMe-oF) architecture, streamlining NVMe network communication and large-scale storage sharing with industry-leading low-latency and high-bandwidth features. “With FabreX and NVMe-oF, storage located in a server, directly attached to a server, or attached across a network delivers identical performance, allowing customers to choose their preferred storage architectures and experience the same quality of service to meet desired service level objectives.”
GigaIO Steps Up with PCIe Gen 4 Interconnect for HPC
In this video from ISC 2019, Marc Lehrer from GigaIO describes the company’s innovative HPC interconnect technology based on PCIe Gen 4. “For your most demanding workloads, you want time to solution. The GigaIO hyper-performance network breaks the constraints of old architectures, opening up new configuration possibilities that radically reduce system cost and protect your investment by enabling you to easily adopt new compute or business processes.”
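The appeal of PCIe Gen 4 as a fabric is straightforward arithmetic: it doubles Gen 3’s per-lane signaling rate from 8 GT/s to 16 GT/s, and both generations use 128b/130b encoding, so usable bandwidth doubles too. A short sketch of the calculation (the function name is ours, for illustration):

```python
def pcie_bandwidth_gbs(gt_per_s, lanes):
    """Usable one-direction bandwidth in GB/s for a PCIe Gen 3/Gen 4 link.

    Applies the 128b/130b line-encoding overhead used by both generations
    (earlier generations used the costlier 8b/10b encoding), then converts
    bits to bytes.
    """
    return gt_per_s * (128 / 130) * lanes / 8

gen3_x16 = pcie_bandwidth_gbs(8, 16)   # ~15.75 GB/s per direction
gen4_x16 = pcie_bandwidth_gbs(16, 16)  # ~31.51 GB/s per direction
print(f"Gen 3 x16: {gen3_x16:.2f} GB/s, Gen 4 x16: {gen4_x16:.2f} GB/s")
```

A x16 Gen 4 link thus moves roughly 31.5 GB/s in each direction, which is the headroom a PCIe-native fabric like FabreX trades on.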
OSS Introduces World’s First PCIe Gen 4 Backplane at GTC
Today One Stop Systems introduced the world’s first PCIe Gen 4 backplane. “Delivering the high performance required by edge applications necessitates PCIe interconnectivity traveling on the fast data highway between high-speed processors, NVMe storage and compute accelerators using GPUs or application-specific FPGAs,” continued Cooper. “‘AI on the Fly’ applications naturally demand this capability, like the government mobile shelter application we announced earlier this year.”