New Inspur AI Server Supports Eight NVIDIA V100 Tensor Core GPUs


Today Inspur announced that their new NF5488M5 high-density AI server supports eight NVIDIA V100 Tensor Core GPUs in a 4U form factor.

“Inspur has been committed to providing world-class AI computing products and solutions to AI users worldwide through innovative design,” said Jun Liu, Inspur general manager of AI and HPC. “The rapid development of AI keeps increasing the requirements for computing performance and flexibility of AI infrastructure. The NF5488M5 helps users shorten AI model development cycles and accelerate AI technology innovation and application development.”

The Inspur NF5488M5 is designed to facilitate a variety of deep-learning and high-performance computing applications, including voice recognition, video analysis and intelligent customer service.

  • Extreme Performance. Eight NVIDIA Tesla V100 Tensor Core 32GB GPUs, with a combined 5,120 Tensor Cores, provide up to 1 PFlops of AI computing performance. Two optional 28-core CPUs deliver top-level general-purpose computing performance, and 6 TB of persistent memory enables high-speed data access.
  • Flexible & Ergonomic. The server design accommodates a broad range of data center power and space constraints, particularly power-constrained racks, and supports flexible GPU cluster expansion over a PCIe fabric.
  • Power Efficiency. The server is designed to operate on 54VDC, a more power-efficient voltage for GPUs.
  • Thermal Management. A multi-layer heat dissipation design and intelligent PID fan control provide industry-leading thermal management.

“NVIDIA’s GPU-accelerated computing has transformed AI and HPC,” said Paresh Kharya, director of Product Marketing at NVIDIA. “Inspur has efficiently innovated computing systems based on the latest NVIDIA Tensor Core GPUs, and the new NF5488M5 will help AI and HPC users worldwide break through their computational bottlenecks.”

Visit Inspur at Booth #1111 at NVIDIA’s GPU Technology Conference.
