Inspur Launches MX1 Server with support for Multiple AI Chips

At SC19 in Denver, Inspur launched the MX1 AI System. With support for a variety of OAM (OCP Accelerator Module)-compliant AI chips, the MX1 is the first OAM AI system that supports different types of AI chips from multiple manufacturers on a single server.

As data center users demand ever more AI computing performance, hundreds of companies around the world have invested in the R&D and production of new AI chips, and AI chip diversification has become an increasingly prevalent trend. However, because manufacturers adopt different ASIC solutions for AI development, their accelerators are mutually incompatible in interfaces, interconnects, and protocols. As a result, data center users face major obstacles juggling disparate hardware and toolkits across their AI infrastructure.

Inspur is committed to promoting the establishment of specifications in the AI industry, and hopes to advance AI chips and technologies through an open, common specification for AI infrastructure. This vision aligns closely with that of OCP, the global open computing community. As a cornerstone of next-generation hyperscale accelerated computing platforms, the OAM standard established by the OCP community defines a unified interface for AI accelerators across multiple architectures such as ASIC, GPU, and FPGA, with innovative designs covering physical form factor, power delivery, connectors, pin definitions, and system architecture.

Inspur actively participates in the development of the OAM specification and took the lead in designing and developing the MX1, the world’s first OAM-compliant open AI acceleration system. The MX1 combines high-bandwidth interconnects with a dual power-supply design and is compatible with a wide variety of OAM-compliant AI accelerators. It offers a total interconnect bandwidth of up to 224Gbps and provides two interconnect topologies — fully-connected and Hybrid Cube Mesh (HCM) — so users can flexibly choose an inter-chip interconnect scheme to match the communication patterns of different neural network models. The MX1 offers two independent power-supply schemes, 12V and 54V, delivering up to 300W and 450W-500W per module respectively, which supports AI accelerators with high power consumption. A single MX1 node hosts eight AI accelerators and scales up to 32 accelerators via high-speed interconnect extensions, accommodating the computing needs of ultra-large-scale deep neural network models.
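To make the topology trade-off concrete, the sketch below models the two interconnect options as link sets among eight accelerators. This is purely illustrative: it uses a plain 3D hypercube as a simplified stand-in for HCM (real HCM wiring, and the MX1's actual link layout, may differ), and the node numbering is an assumption.

```python
from itertools import combinations

def fully_connected(n=8):
    """All-to-all topology: every accelerator pair gets a direct link."""
    return {frozenset(p) for p in combinations(range(n), 2)}

def hypercube(n=8):
    """Simplified cube-mesh stand-in: link nodes whose 3-bit IDs
    differ in exactly one bit (illustrative, not the MX1 wiring)."""
    links = set()
    for a in range(n):
        for bit in (1, 2, 4):
            links.add(frozenset((a, a ^ bit)))
    return links

# Fully-connected uses 28 links (single-hop everywhere, costly in ports);
# the cube mesh uses only 12 links, trading hops for fewer links per chip.
print(len(fully_connected()))  # 28
print(len(hypercube()))        # 12
```

The point of offering both schemes is exactly this trade-off: all-reduce-heavy models benefit from the fully-connected option's single-hop paths, while the mesh frees up link budget per accelerator.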

Inspur is a leading AI computing solutions vendor and the world’s largest GPU AI server supplier, with more than 50% market share of AI servers in China. Working closely with leading AI companies on systems and applications, Inspur helps them achieve significant performance gains in NLP, image recognition, video analysis, search and recommendation algorithms, intelligent networking, and more. Inspur also shares AI computing resources and algorithms with industry partners to accelerate their adoption of AI.

See our complete coverage of SC19

Sign up for our insideHPC Newsletter

 
