Inspur Launches Open AI Computing Solution

X-MAN 4.0 Open Accelerator Infrastructure

Today Inspur announced a pair of open AI computing systems designed to simplify customer deployments.

New systems include:

  • The X-MAN 4.0, developed with Baidu, is the world’s first OAI (Open Accelerator Infrastructure)-compliant, liquid-cooled, rack-scale AI computing product, optimized specifically for deep neural network applications.
  • The Inspur OAI UBB system is a 21-inch Full-Rack OAM solution delivering efficiency, flexibility and management. The OAI specification—led by Baidu, Facebook, and Microsoft in the OCP (Open Compute Project) community—unifies the technical specifications of the accelerator module and simplifies the design complexity of the AI accelerator system, thereby shortening time to market for the hardware system.

X-MAN 4.0 provides powerful performance with strong scalability and interconnect technology

X-MAN 4.0 is a full-rack AI computing product offering advantages in performance, flexibility and cost over traditional graphics processing unit (GPU) servers. The system houses eight AI accelerators in a single box and scales to 32 AI accelerators per rack, with boxes interconnected through QSFP-DD cables to minimize communication latency across nodes. X-MAN 4.0 is the first in Baidu’s series of full-rack AI computing products to introduce OAM-compliant accelerators, and accelerator resources can be specified in a software-defined manner. Together, these features enable flexible, multi-vendor support for AI applications with different workloads: UBB boards with various topologies can be configured directly and flexibly according to application requirements.
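To make the "software-defined" idea concrete, here is a minimal Python sketch, not Inspur's actual management interface, that models a rack as four boxes of eight OAM accelerators (32 per rack, as described above) and reserves accelerators for a job box by box. The Box class, allocate function and the four-box layout are illustrative assumptions.

```python
# Illustrative sketch (assumptions, not Inspur's API): model an X-MAN 4.0 style
# rack as boxes of OAM accelerators and allocate accelerators to a job.

from dataclasses import dataclass, field

ACCELERATORS_PER_BOX = 8   # eight AI accelerators per box (from the article)
BOXES_PER_RACK = 4         # assumed layout: 4 x 8 = 32 accelerators per rack

@dataclass
class Box:
    box_id: int
    free: set = field(default_factory=lambda: set(range(ACCELERATORS_PER_BOX)))

def allocate(rack: list, count: int) -> list:
    """Reserve `count` accelerators, filling each box before spilling over
    to the next one so that most traffic stays on a single baseboard."""
    grant = []
    for box in rack:
        while box.free and len(grant) < count:
            grant.append((box.box_id, box.free.pop()))
    if len(grant) < count:
        raise RuntimeError("not enough free accelerators in the rack")
    return grant

if __name__ == "__main__":
    rack = [Box(i) for i in range(BOXES_PER_RACK)]
    job = allocate(rack, 12)   # spans two boxes: 8 in box 0, 4 in box 1
    print(job)
```

Packing a job into as few boxes as possible keeps most communication on one baseboard, using the QSFP-DD links between boxes only when a job needs more than eight accelerators.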

OAI UBB system delivers breakthrough efficiency, flexibility and management


The newly launched OAI UBB system also delivers breakthrough efficiency, flexibility and management. The 21-inch full-rack OAI solution provides simplified inter-module communication for scale-up and input/output bandwidth for scale-out, supporting disparate network architectures through direct OAM connections. Inspur offers two of the three available OCP interconnect topologies: Hybrid Cube Mesh (HCM) and Fully Connected (FC).
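As a rough illustration of how the two topologies differ, the following Python sketch (an assumption-laden model, not the OAI specification) represents an eight-OAM baseboard as adjacency lists. Fully Connected links every module to every other module; the Hybrid Cube Mesh here is assumed to wire two fully meshed quads plus four "cube" links between corresponding modules.

```python
# Illustrative sketch (not from the OAI spec): two 8-OAM interconnect
# topologies as adjacency lists, with a count of baseboard links.

from itertools import combinations

NUM_OAM = 8  # modules per baseboard in this sketch

def fully_connected(n=NUM_OAM):
    """Fully Connected (FC): every OAM links directly to every other OAM."""
    return {i: [j for j in range(n) if j != i] for i in range(n)}

def hybrid_cube_mesh(n=NUM_OAM):
    """Hybrid Cube Mesh (HCM), assumed wiring: two fully meshed quads
    (modules 0-3 and 4-7) plus 'cube' edges pairing module i with i + 4."""
    assert n == 8, "this sketch only models the 8-module case"
    adj = {i: set() for i in range(n)}
    for quad in ([0, 1, 2, 3], [4, 5, 6, 7]):
        for a, b in combinations(quad, 2):
            adj[a].add(b); adj[b].add(a)
    for i in range(4):
        adj[i].add(i + 4); adj[i + 4].add(i)
    return {i: sorted(peers) for i, peers in adj.items()}

if __name__ == "__main__":
    for name, topo in (("FC", fully_connected()), ("HCM", hybrid_cube_mesh())):
        links = sum(len(p) for p in topo.values()) // 2
        print(f"{name}: {links} baseboard links, "
              f"{len(topo[0])} direct peers for module 0")
```

Under these assumptions, FC uses 28 baseboard links with every module one hop from every other, while HCM uses 16 links with four direct peers per module; the trade-off is link count and per-link bandwidth versus hop count.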

“Inspur continues to pioneer the technologies that enable next-generation AI applications,” said Peter Peng, Inspur Group senior vice president. “As a member of the OCP, Open19 and ODCC global open computing standards organizations, Inspur has always been an active promoter of open source technology to help users build open data centers. We work to facilitate cooperation among all key open standards to help our customers accelerate business innovation and more efficiently and effectively bring about next-generation AI applications.”
