Inspur Launches Server for Mobile Liquid Cooling Cluster

Data center infrastructure provider Inspur Information has announced a cold-plate liquid-cooled 2U 4-node server, the i24M5-LC, which the company says is optimized for large-scale water-cooled server data centers with a PUE below 1.2. Inspur said the i24M5-LC, built for HPC and cloud data centers, adopts a liquid cooling design based on...
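PUE (Power Usage Effectiveness) is simply total facility power divided by IT equipment power, so a value under 1.2 means cooling, power delivery, and other overheads add less than 20% on top of the IT load. A minimal sketch with purely hypothetical numbers:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Hypothetical example: the whole facility draws 1,150 kW to run a 1,000 kW IT load.
print(pue(1150.0, 1000.0))  # 1.15, which meets the PUE < 1.2 target
```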

Inspur Introduces Leading Designs of NVIDIA A100 Tensor Core GPU Servers for AI and HPC

In this special guest feature, our friends over at Inspur write about how the company is delivering new servers that address the most demanding performance requirements of companies implementing AI and ML in their workflows. Reducing total cost of ownership (TCO) while increasing the productivity of their teams is critical for CIOs and line-of-business leadership.

Inspur Launches 5 New AI Servers with NVIDIA A100 Tensor Core GPUs

Inspur released five new AI servers that fully support the new NVIDIA Ampere architecture. The new servers support up to 8 or 16 NVIDIA A100 Tensor Core GPUs, deliver up to 40 PetaOPS of AI computing performance, and provide non-blocking GPU-to-GPU P2P bandwidth of up to 600 GB/s. “With this upgrade, Inspur offers the most comprehensive AI server portfolio in the industry, better tackling the computing challenges created by data surges and complex modeling. We expect that the upgrade will significantly boost AI technology innovation and applications.”
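The 600 GB/s figure corresponds to NVLink-connected A100s. The announcement does not mention any particular software stack, but as a rough, hedged illustration, peer-to-peer access between two GPUs can be checked and a device-to-device copy timed with standard PyTorch calls (the number you measure depends entirely on the interconnect of the machine at hand):

```python
import time
import torch

# Requires a machine with at least two CUDA GPUs.
assert torch.cuda.device_count() >= 2, "need two GPUs for a P2P copy"

# Report whether GPU 0 can directly address GPU 1's memory (NVLink or PCIe P2P).
print("peer access 0 -> 1:", torch.cuda.can_device_access_peer(0, 1))

# Time a 1 GiB device-to-device copy as a crude bandwidth probe.
x = torch.empty(256 * 1024 * 1024, dtype=torch.float32, device="cuda:0")  # 1 GiB
torch.cuda.synchronize("cuda:0")
t0 = time.perf_counter()
y = x.to("cuda:1")
torch.cuda.synchronize("cuda:1")
elapsed = time.perf_counter() - t0
print(f"~{1.0 / elapsed:.1f} GiB/s device-to-device")
```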

Inspur InCloud OpenStack Sets Records on New SPEC Cloud Tests

Inspur InCloud OpenStack has set new records for four key indicators in the latest SPEC Cloud IaaS benchmark, leading in performance, scalability, application instances, and provisioning time. “The test results show that InCloud OpenStack can efficiently schedule diverse workloads such as I/O and computing, and its performance scales with leading, near-linear scalability. It is therefore fully capable of meeting users' cloud requirements, whether for traditional business workloads or for innovative applications such as big data and artificial intelligence.”

Inspur Takes 3rd Place in Auto Deep Learning Finals

A team from Inspur ranked in the Top 3 at the recent Auto Deep Learning Finals. “The core technology Inspur used in this competition has been applied to Inspur AutoML Suite, an automated machine learning platform. AutoML Suite provides one-stop automatic model generation through visual operation of GPU clusters, built on three automation engines: AutoNAS for model architecture search, AutoTune for hyperparameter tuning, and AutoPrune for model compression.”
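AutoML Suite's engines are proprietary, so the following is only a loose sketch of the kind of search an AutoTune-style hyperparameter engine automates: plain random search over a small space, where `train_and_score` is a hypothetical stand-in for a real training job on the cluster.

```python
import random

SEARCH_SPACE = {
    "learning_rate": [1e-4, 3e-4, 1e-3, 3e-3],
    "batch_size": [32, 64, 128],
    "dropout": [0.0, 0.1, 0.3],
}

def train_and_score(config: dict) -> float:
    """Hypothetical stand-in: train a model with `config` and return a validation score."""
    # A real engine would launch a training job on the GPU cluster here.
    return random.random()

def random_search(n_trials: int = 20):
    """Sample configurations at random and keep the best-scoring one."""
    best_config, best_score = None, float("-inf")
    for _ in range(n_trials):
        config = {name: random.choice(values) for name, values in SEARCH_SPACE.items()}
        score = train_and_score(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

best_config, best_score = random_search()
print("best config:", best_config, "score:", round(best_score, 3))
```

A production engine would typically add smarter search strategies, early stopping, and distributed trial scheduling, but the outer loop has this general shape.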

Inspur Storage System Achieves Record Performance on SPC Benchmark 1

An Inspur storage system achieved world-record performance and a top-two ranking in a recent SPC Benchmark 1 test. SPC Benchmark 1 results are a valuable reference when selecting storage systems for business-critical applications such as OLTP systems, database systems, and server applications. “To achieve these results, the AS5600G2 adopted iTurbo acceleration engine technology from Inspur and used its four core algorithms of intelligent data path acceleration, including intelligent multi-core scheduling, intelligent hot and cold data stream separation, and iMASP random-to-sequential transformation.”
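iMASP itself is not publicly documented, so the following is only a rough sketch of the general log-structured idea behind random-to-sequential transformation: buffer incoming random writes and flush them to the backing store in ascending-offset order. The buffer threshold and in-memory layout here are illustrative assumptions.

```python
class SequentializingWriteBuffer:
    """Illustrative only: coalesce random (offset, data) writes in memory, then
    flush them to the backing store in ascending-offset (sequential) order."""

    def __init__(self, backing_store: dict, flush_threshold: int = 64):
        self.backing_store = backing_store      # stands in for the disk
        self.flush_threshold = flush_threshold  # how many writes to buffer
        self.pending = {}                       # offset -> latest data

    def write(self, offset: int, data: bytes) -> None:
        self.pending[offset] = data             # later writes supersede earlier ones
        if len(self.pending) >= self.flush_threshold:
            self.flush()

    def flush(self) -> None:
        # Emit buffered writes sorted by offset so the device sees a sequential stream.
        for offset in sorted(self.pending):
            self.backing_store[offset] = self.pending[offset]
        self.pending.clear()

# Usage: writes issued in random order land on the "disk" in offset order at flush time.
disk = {}
buf = SequentializingWriteBuffer(disk, flush_threshold=4)
for off in (4096, 0, 12288, 8192):
    buf.write(off, b"x" * 512)
print(list(disk))  # [0, 4096, 8192, 12288]
```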

Fast Track your AI Workflows

In this special guest feature, our friends over at Inspur write that new, highly compute-intensive workloads often require accelerators. Accelerators speed up computation and allow AI and ML algorithms to be used in real time. Inspur is a leading supplier of solutions for HPC and AI/ML workloads.

The Role of Middleware in Optimizing Vector Processing

A new whitepaper from NEC X delves into the world of unstructured data and explores how vector processors and their optimization software can help solve the challenges of wrangling the ever-growing volumes of data generated globally. “In short, vector processing with SX-Aurora TSUBASA will play a key role in changing the way big data is handled while stripping away the barriers to achieving even higher performance in the future.”
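The SX-Aurora TSUBASA stack itself is not shown in this article, but as a hedged illustration of why vector processing matters at all, here is the same reduction written as a plain Python loop and as a NumPy expression that dispatches to optimized, vector-friendly kernels:

```python
import time
import numpy as np

data = np.random.rand(1_000_000)

# Scalar loop: one element at a time, dominated by per-element interpreter overhead.
def loop_sum_of_squares(values) -> float:
    total = 0.0
    for v in values:
        total += v * v
    return total

# Vectorized form: the whole array is handed to an optimized, SIMD/vector-friendly kernel.
def vector_sum_of_squares(values: np.ndarray) -> float:
    return float(np.dot(values, values))

for fn in (loop_sum_of_squares, vector_sum_of_squares):
    t0 = time.perf_counter()
    result = fn(data)
    print(f"{fn.__name__}: {result:.3f} in {time.perf_counter() - t0:.4f}s")
```

The gap between the two timings is the kind of headroom that dedicated vector processors, paired with middleware that keeps data in vector-friendly layouts, are built to exploit.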

The Role of Middleware in Optimizing Vector Processing

This whitepaper delves into the world of unstructured data and describes some of the technologies, especially vector processors and their optimization software, that play key roles in solving the problems that arise as a result of the accelerating volume of data generated globally.

Inspur Re-Elected as Member of SPEC OSSC and Chair of SPEC Machine Learning

The Standard Performance Evaluation Corporation (SPEC) has finalized the election of new Open System Steering Committee (OSSC) executive members, which include Inspur, Intel, AMD, IBM, Oracle, and three other companies. “It is worth noting that Inspur, a re-elected OSSC member, was also re-elected as chair of the SPEC Machine Learning (SPEC ML) working group. The ML benchmark development plan proposed by Inspur, which aims to provide users with a standard for evaluating machine learning computing performance, has been approved by the members.”