Inspur Launches 5 New AI Servers with NVIDIA A100 Tensor Core GPUs

Inspur released five new AI servers that fully support the new NVIDIA Ampere architecture. The new servers support up to 8 or 16 NVIDIA A100 Tensor Core GPUs, delivering up to 40 PetaOPS of AI computing performance and non-blocking GPU-to-GPU P2P bandwidth of up to 600 GB/s. “With this upgrade, Inspur offers the most comprehensive AI server portfolio in the industry, better tackling the computing challenges created by data surges and complex modeling. We expect that the upgrade will significantly boost AI technology innovation and applications.”
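
For readers curious how applications actually exploit this kind of GPU-to-GPU P2P connectivity, the sketch below uses standard CUDA runtime calls to check and enable peer access between two devices and then issue a direct device-to-device copy. It is a minimal, generic example rather than Inspur-specific code; the device indices and buffer size are arbitrary assumptions.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    // Hypothetical pair of devices; on an 8- or 16-GPU A100 server any two
    // GPUs connected over NVLink/NVSwitch could be used here.
    const int dev0 = 0, dev1 = 1;

    int canAccess = 0;
    cudaDeviceCanAccessPeer(&canAccess, dev0, dev1);
    if (!canAccess) {
        printf("P2P not supported between device %d and %d\n", dev0, dev1);
        return 1;
    }

    // Enable peer access in both directions.
    cudaSetDevice(dev0);
    cudaDeviceEnablePeerAccess(dev1, 0);
    cudaSetDevice(dev1);
    cudaDeviceEnablePeerAccess(dev0, 0);

    // Allocate a buffer on each GPU and copy GPU-to-GPU without staging
    // through host memory.
    const size_t bytes = 256 << 20;  // 256 MiB, arbitrary size
    void *buf0 = nullptr, *buf1 = nullptr;
    cudaSetDevice(dev0);
    cudaMalloc(&buf0, bytes);
    cudaSetDevice(dev1);
    cudaMalloc(&buf1, bytes);

    cudaMemcpyPeer(buf1, dev1, buf0, dev0, bytes);
    cudaDeviceSynchronize();

    cudaFree(buf1);
    cudaSetDevice(dev0);
    cudaFree(buf0);
    printf("Peer-to-peer copy completed\n");
    return 0;
}
```

Timing a loop of such peer copies is one simple way to see how close a given system gets to its advertised GPU-to-GPU bandwidth.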

Perlmutter supercomputer to include more than 6,000 NVIDIA A100 GPUs

NERSC is among the early adopters of the new NVIDIA A100 Tensor Core GPU announced by NVIDIA this week. More than 6,000 of the A100 chips will be included in NERSC’s next-generation Perlmutter system, an HPE Cray Shasta supercomputer that will be deployed at Lawrence Berkeley National Laboratory later this year. “Nearly half of the workload running at NERSC is poised to take advantage of GPU acceleration, and NERSC, HPE, and NVIDIA have been working together over the last two years to help the scientific community prepare to leverage GPUs for a broad range of research workloads.”

NVIDIA A100 Tensor Core GPUs come to Oracle Cloud

Oracle is bringing the newly announced NVIDIA A100 Tensor Core GPU to its Oracle Gen 2 Cloud regions. “Oracle is enhancing what NVIDIA GPUs can do in the cloud,” said Vinay Kumar, vice president, product management, Oracle Cloud Infrastructure. “The combination of NVIDIA’s powerful GPU computing platform with Oracle’s bare metal compute infrastructure and low latency RDMA clustered network is extremely compelling for enterprises. Oracle Cloud Infrastructure’s high-performance file server solutions supply data to the A100 Tensor Core GPUs at unprecedented rates, enabling researchers to find cures for diseases faster and engineers to build safer cars.”

Atos Launches First Supercomputer Equipped with NVIDIA A100 GPU

Today Atos announced its new BullSequana X2415, the first supercomputer in Europe to integrate NVIDIA’s next-generation Ampere GPU architecture with the NVIDIA A100 Tensor Core GPU. The new supercomputer blade will deliver unprecedented computing power to boost application performance for HPC and AI workloads, tackling the challenges of the exascale era. The BullSequana X2415 blade will increase computing power by more than 2X while optimizing energy consumption, thanks to Atos’ patented, highly efficient 100% water-cooled DLC (Direct Liquid Cooling) solution, which uses warm water to cool the machine.

Supermicro steps up with NVIDIA A100 GPU-Powered Systems

Today Supermicro announced two new AI systems based on NVIDIA A100 GPUs. NVIDIA A100 is the first elastic, multi-instance GPU that unifies training, inference, HPC, and analytics. “Optimized for AI and machine learning, Supermicro’s new 4U system supports eight A100 Tensor Core GPUs. The 4U form factor with eight GPUs is ideal for customers that want to scale their deployment as their processing requirements expand. The new 4U system will have one NVIDIA HGX A100 8 GPU board with eight A100 GPUs all-to-all connected with NVIDIA NVSwitch for up to 600GB per second GPU-to-GPU bandwidth and eight expansion slots for GPUDirect RDMA high-speed network cards.”
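
As a rough sketch of what the A100’s Multi-Instance GPU (MIG) capability looks like from an application’s point of view: once an administrator has partitioned the GPUs (for example with nvidia-smi), each partition is exposed to CUDA as its own device, so ordinary device enumeration picks it up. The snippet below simply lists the visible devices with their memory and SM counts; it assumes MIG has already been configured and which instances a process sees is controlled through the environment (e.g. CUDA_VISIBLE_DEVICES). It is generic CUDA code, not tied to Supermicro’s systems.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("Visible CUDA devices: %d\n", count);

    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // On a MIG-enabled system, a visible device may be a MIG slice
        // rather than a full A100; its memory size and SM count reflect
        // the size of that slice.
        printf("Device %d: %s, %.1f GiB, %d SMs\n",
               i, prop.name,
               prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0),
               prop.multiProcessorCount);
    }
    return 0;
}
```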

Video: NVIDIA Launches Ampere Data Center GPU

In this video, NVIDIA CEO Jensen Huang announces the first GPU based on the NVIDIA Ampere architecture, the NVIDIA A100. The company’s fastest GPU ever is now in full production and shipping to customers worldwide. “NVIDIA A100 GPU is a 20X AI performance leap and an end-to-end machine learning accelerator – from data analytics to training to inference. For the first time, scale-up and scale-out workloads can be accelerated on one platform. NVIDIA A100 will simultaneously boost throughput and drive down the cost of data centers.”

New NVIDIA DGX A100 Packs Record 5 Petaflops of AI Performance for Training, Inference, and Data Analytics

Today NVIDIA unveiled the NVIDIA DGX A100 AI system, delivering 5 petaflops of AI performance and consolidating the power and capabilities of an entire data center into a single flexible platform. “DGX A100 systems integrate eight of the new NVIDIA A100 Tensor Core GPUs, providing 320GB of memory for training the largest AI datasets, and the latest high-speed NVIDIA Mellanox HDR 200Gbps interconnects.”