
Inspur Unveils GX4 AI Accelerator

Today at ISC 2017, Inspur unveiled the GX4, a new flexible and highly scalable AI acceleration box. The GX4 decouples coprocessor resources, including GPU, Xeon Phi, and FPGA, from the CPU, expands computing power on demand, and provides highly flexible support for a range of AI applications in GPU-accelerated computing. The launch follows last month's release of the ultra-high-density AI supercomputer AGX-2 at GTC 2017 in California.

According to Jay Zhang from Inspur, the GX4 addresses the major differences among AI deep-learning training models, using a flexible expansion method to support training models at different scales while effectively lowering energy consumption and latency. The GX4 provides a flexible, innovative AI computing solution for companies and research organizations engaged in artificial intelligence around the world.

The GX4 makes it possible to decouple and restructure coprocessor and CPU computing resources. It supports coprocessors with different architectures, such as GPU, Xeon Phi, and FPGA, to meet the needs of various AI application scenarios, such as AI cloud, deep-learning model training, and online inference. More importantly, the GX4 expands computational capacity by connecting standard rack servers to GPU computing expansion modules, overcoming the limitation that conventional GPU servers require redesigning the entire system and motherboard to change computing topologies. The GX4's independent computing acceleration module design significantly increases deployment flexibility, scales from 2 to 16 cards, and allows the topology to be changed simply by rewiring the connection between the server and the expansion module. This better matches the computing infrastructure to upper-level applications and helps AI computing clusters achieve their best performance.

The GX4 overcomes the 8-card expansion limit typical of general AI computing equipment and delivers better single-node computing performance. Each 2U GX4 supports 4 accelerator cards, and one head node can connect up to 4 GX4s, yielding 16 accelerator cards in a single acceleration computing pool.
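The scaling described above can be sketched in a few lines of Python. This is purely an illustrative model of the card-pooling arithmetic; the constants and function names are assumptions based on the figures in the article, not an Inspur API.

```python
# Illustrative sketch of GX4 expansion arithmetic (assumed constants,
# taken from the figures quoted in the article, not from any Inspur API).

CARDS_PER_GX4 = 4       # each 2U GX4 box holds 4 accelerator cards
MAX_GX4_PER_HEAD = 4    # one head node connects up to 4 GX4 boxes

def pool_size(num_boxes: int) -> int:
    """Total accelerator cards in the pool visible to one head node."""
    if not 1 <= num_boxes <= MAX_GX4_PER_HEAD:
        raise ValueError(f"a head node supports 1..{MAX_GX4_PER_HEAD} GX4 boxes")
    return num_boxes * CARDS_PER_GX4

# Scaling from a minimal to a fully populated pool:
for boxes in range(1, MAX_GX4_PER_HEAD + 1):
    print(f"{boxes} GX4 box(es) -> {pool_size(boxes)} accelerator cards")
```

With a fully populated head node (4 boxes), the pool reaches the 16-card figure quoted above; a single box gives the 4-card minimum.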

Inspur says it is dedicated to developing its intelligent computing business, which focuses on cloud computing, big data, and deep learning, and which the company regards as its most important business direction for the next decade. In recent years, Inspur has become the largest AI computing platform provider in China. Inspur's AI solutions hold a 60% market share in China and an 80% share among China's Big 3 IT companies, Baidu, Alibaba, and Tencent, and they are widely used in smart-voice, smart-image, and other applications by companies such as iFlytek and Face++.

