Video: Liqid Teams with Inspur at GTC for Composable Infrastructure

Rich meets with Dolly Wu from Inspur and Marius Tudor from Liqid at GTC.

In this video from GTC 2018, Dolly Wu from Inspur and Marius Tudor from Liqid describe how the two companies are collaborating on composable infrastructure for AI and deep learning workloads.

At GTC, Liqid and Inspur announced a joint solution designed specifically for advanced, GPU-intensive applications and workflows. The Matrix Rack Composable Platform powered by NVIDIA GPUs will enable the scale out, sharing, accelerated performance, and dynamic composability of GPUs via fabric-based composable infrastructure. The rack-level solution delivers unparalleled infrastructure adaptability to manage emerging applications driven by artificial intelligence (AI) that require performance not possible with traditional, static data center infrastructure.

“Our goal is to work with the industry’s most innovative companies to build an adaptive data center infrastructure for the advancement of AI, scientific discovery, and next-generation GPU-centric workloads,” said Sumit Puri, CEO of Liqid. “Liqid is honored to be partnering with data center leaders Inspur Systems and NVIDIA to deliver the most advanced composable GPU platform on the market with Liqid’s fabric technology.”

Bringing together the best hardware solutions available for the data center, including the Inspur i24 servers & GX4 expansion chassis, NVIDIA Tesla V100 and P100 GPUs, and Liqid Grid PCIe fabric technology, the Matrix Rack delivers a truly composable, fully scalable GPU platform.

The Matrix Rack enables disaggregated pools of GPUs to be scaled, accelerated, and shared natively over a PCIe fabric, allowing the performance of dozens of NVIDIA GPUs to be clustered and orchestrated as needed, in tandem with disaggregated pools of NVMe storage, compute, and networking resources. Advanced fabric features such as GPU peer-to-peer can deliver significantly higher GPU performance than legacy static platforms, with the ability to right-size a bare-metal physical server on demand to accommodate any workload.
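To make the composability idea concrete, here is a minimal, purely illustrative sketch of how a fabric manager might carve bare-metal nodes out of disaggregated device pools. This is a toy in-memory model, not the Liqid Command Center API; every class and method name below is hypothetical.

```python
# Toy model of fabric-based composable infrastructure.
# NOT the Liqid Command Center API -- all names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ResourcePool:
    """A disaggregated pool of one device type on the PCIe fabric."""
    kind: str
    free: int

    def allocate(self, count: int) -> None:
        if count > self.free:
            raise ValueError(f"only {self.free} {self.kind}(s) free")
        self.free -= count

    def release(self, count: int) -> None:
        self.free += count

@dataclass
class ComposedNode:
    """A bare-metal server composed on demand from the pools."""
    name: str
    devices: dict = field(default_factory=dict)

class Fabric:
    def __init__(self, pools):
        self.pools = {p.kind: p for p in pools}

    def compose(self, name: str, **requests) -> ComposedNode:
        """Attach the requested devices to a new node, all or nothing."""
        node = ComposedNode(name)
        granted = []
        try:
            for kind, count in requests.items():
                self.pools[kind].allocate(count)
                granted.append((kind, count))
                node.devices[kind] = count
        except ValueError:
            # Roll back partial grants so the pools stay consistent.
            for kind, count in granted:
                self.pools[kind].release(count)
            raise
        return node

    def decompose(self, node: ComposedNode) -> None:
        """Return a node's devices to the shared pools."""
        for kind, count in node.devices.items():
            self.pools[kind].release(count)
        node.devices.clear()

# Pools sized to the Matrix Rack inventory listed below.
fabric = Fabric([ResourcePool("gpu", 48),
                 ResourcePool("ssd", 144),
                 ResourcePool("nic", 24)])

# Right-size a node for a training burst, then return its GPUs.
node = fabric.compose("train-01", gpu=8, ssd=4, nic=1)
print(node.devices)              # {'gpu': 8, 'ssd': 4, 'nic': 1}
fabric.decompose(node)
print(fabric.pools["gpu"].free)  # 48
```

The all-or-nothing rollback in `compose` mirrors what any real fabric manager must guarantee: a node either receives its full requested device set or nothing, so pool accounting never drifts.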

“AI and deep learning applications will determine the direction of next-generation infrastructure design, and we believe dynamically composing GPUs will be central to these emerging platforms,” said Dolly Wu, GM and VP of Inspur Systems. “We are excited to partner with NVIDIA and Liqid to deliver the market’s first mature, rack-scale solution for composable GPUs, leveraging Inspur’s leading server and storage solutions.”

The Matrix Rack Composable Platform powered by NVIDIA GPUs includes (48RU rack):

  • 24x Compute Nodes (Dual Intel Xeon Scalable Processors)
  • 144x U.2 Solid-State Drives (SSD), 6.4 TB per SSD
  • 24x Network Adapters (NIC), Dual 100 Gb per NIC
  • 48x NVIDIA GPUs (V100 and P100)
  • Liqid Grid (Managed PCIe Gen 3.0 Fabric) & Liqid Command Center (Software)

The Matrix Rack Composable Platform powered by NVIDIA GPUs delivers:

  • Dynamic, bare-metal GPU resource sharing, scaling, and performance acceleration
  • Cluster dozens of GPUs across multiple chassis natively over a PCIe fabric
  • GPU to CPU allocation in real time to balance resources to accommodate data surges
  • Support for GPU hot plug in Windows and Linux environments
  • Advanced features including CPU bypass for “peer-to-peer” data transactions
  • AI-driven Liqid Command Center software to accelerate data center automation

“GPU-accelerated computing is the engine for modern AI and HPC,” said Paresh Kharya, Group Product Marketing Manager at NVIDIA. “Composable infrastructure from Liqid and Inspur provides an innovative approach to scale GPU resources and expand the computing capabilities for applications that rely on NVIDIA GPUs.”
