Inspur Information Introduces 3 AI Servers to Support NVIDIA A30 and NVIDIA A10 GPUs


SAN JOSE – April 12, 2021 – At NVIDIA GTC 2021, Inspur Information, an IT infrastructure solutions provider, introduced three GPU servers (NF5468M6, NF5468A5 and NF5280M6), which support the latest NVIDIA A30, NVIDIA A10 and NVIDIA A100 Tensor Core GPUs to meet the demand for AI computing power in multiple computing scenarios, including running virtualized infrastructure for the NVIDIA AI Enterprise software suite.

Inspur AI servers are among the first to support NVIDIA Ampere architecture-based GPUs and have obtained NVIDIA-Certified System status supporting the NVIDIA EGX platform for next-generation AI.

“Inspur is committed to delivering users leading AI solutions and innovations by continuously adapting to evolving AI computing scenarios,” said Liu Jun, VP and GM, AI and HPC, Inspur Information. “Through our agile product design and development capabilities, Inspur is among the first to launch and mass-produce AI servers supporting NVIDIA A30 and NVIDIA A10 GPUs, providing more diverse product options to meet the needs of different industries, scales and scenarios.”

“Whether enterprises are running AI, data science or simulation applications, performance and efficiency are critical to these advanced data center workloads,” said Justin Boitano, vice president and general manager, Enterprise and Edge Computing at NVIDIA. “The new Inspur AI servers with NVIDIA A100, A30 and A10 GPUs provide customers with powerful new NVIDIA-Certified Systems to meet the demands of today’s demanding AI and graphics applications.”

An Inspur server powered by A100 GPUs is in use at Northwestern University Feinberg School of Medicine, where the institution is pilot testing high-performance data pipelines to enable deep learning experiments without having to maintain separate, costly copies of legacy health system enterprise data. With the state-of-the-art performance of Inspur’s A100-based training platform, the pilot program has delivered significant performance improvements not just in model training but in overall project delivery. With a 10x improvement in training speed and a 100x improvement in data preparation, Northwestern Medicine can rapidly prototype, iterate, and ultimately deploy deep learning models directly into the healthcare environment.

“With our high-speed data pipes and the Inspur GPU server (with 8x NVIDIA HGX A100 GPUs), we can quickly iterate and use the state-of-the-art in AI to help our patients,” said Dr. Mozziyar Etemadi, anesthesiologist and Chief Data Engineer of the Northwestern Institute for Augmented Intelligence in Medicine (I-AIM).

Furthermore, Inspur servers supporting the NVIDIA Ampere architecture are also widely used in deep learning, image recognition, natural language understanding, intelligent recommendation and other intelligent scenarios in various industries, helping enterprise users accelerate AI innovation.

Inspur’s GPU Servers Supporting A30, A10 and A100:

NF5468M6: ultra-flexible for AI workloads, supports 2x 3rd Gen Intel Xeon Scalable processors and 8x NVIDIA A100/A40/A30 GPUs, 16x NVIDIA A10 GPUs, or 20x NVIDIA T4 GPUs; supports up to 12x 3.5-inch hard drives for large local storage in a 4U chassis; flexibly adapts to the latest AI accelerators and SmartNICs and offers the unique ability to switch topologies with one click for various AI applications including AI cloud, IVA (Intelligent Video Analysis), video processing, etc.

NF5468A5: versatile AI server featuring 2x AMD Rome/Milan CPUs and 8x NVIDIA A100/A40/A30 GPUs; an N+N redundancy design enables 8x 350W AI accelerators to run at full speed for superior reliability; the non-blocking CPU-to-GPU design allows direct interconnection without routing through a PCIe switch, achieving faster communication.

NF5280M6: purpose-built for all scenarios, with 2x 3rd Gen Intel Xeon Scalable processors and 4x NVIDIA A100/A40/A30/A10 GPUs or 8x NVIDIA T4 Tensor Core GPUs in a 2U chassis, capable of long-term stable operation at 45°C. The NF5280M6 is equipped with the latest PFR/SGX technology and a trusted security module design, making it suitable for demanding AI applications.

Inspur also announced that its M6 AI servers support NVIDIA BlueField-2 DPUs. Moving forward, Inspur plans to integrate NVIDIA BlueField-2 DPUs into its next-generation AI servers, enabling faster and more efficient management of users and clusters as well as interconnected data access, for scenarios such as AI, big data analysis, cloud computing, and virtualization.

Inspur is a leading AI server vendor with a rich array of AI computing products, and works closely with AI customers to help achieve order-of-magnitude performance improvements for AI applications in speech, semantics, image, video, search, and more.