Atos Launches First Supercomputer Equipped with NVIDIA A100 GPU

Today Atos announced its new BullSequana X2415, the first supercomputer in Europe to integrate NVIDIA’s next-generation Ampere GPU architecture in the form of the NVIDIA A100 Tensor Core GPU. The new supercomputer blade is built to boost application performance for HPC and AI workloads and to tackle the challenges of the exascale era. According to Atos, the BullSequana X2415 blade increases computing power by more than 2X while optimizing energy consumption, thanks to the company’s patented, fully water-cooled Direct Liquid Cooling (DLC) solution, which uses warm water to cool the machine with high energy efficiency.

Supermicro Steps Up with NVIDIA A100 GPU-Powered Systems

Today Supermicro announced two new AI systems based on NVIDIA A100 GPUs. The NVIDIA A100 is the first elastic, multi-instance GPU, unifying training, inference, HPC, and analytics on a single part. “Optimized for AI and machine learning, Supermicro’s new 4U system supports eight A100 Tensor Core GPUs. The 4U form factor with eight GPUs is ideal for customers who want to scale their deployment as their processing requirements expand. The new 4U system will have one NVIDIA HGX A100 8-GPU board with eight A100 GPUs all-to-all connected with NVIDIA NVSwitch for up to 600 GB per second of GPU-to-GPU bandwidth, and eight expansion slots for GPUDirect RDMA high-speed network cards.”
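
The all-to-all NVSwitch topology means every GPU on the board can reach every other GPU’s memory directly. As a rough, hedged illustration (not from Supermicro’s announcement), the following CUDA C sketch probes that peer-to-peer connectivity at runtime; on an HGX A100 8-GPU board one would expect every pair to report peer access:

```c
// Hypothetical sketch: enumerate GPUs and check peer-to-peer access between
// every pair, which is what an all-to-all NVSwitch fabric should report.
#include <cuda_runtime.h>
#include <stdio.h>

int main(void) {
    int n = 0;
    cudaGetDeviceCount(&n);               // number of visible GPUs
    printf("Found %d GPU(s)\n", n);
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j < n; ++j) {
            if (i == j) continue;
            int can = 0;
            // Ask the driver whether GPU i can directly access GPU j's memory.
            cudaDeviceCanAccessPeer(&can, i, j);
            printf("GPU %d -> GPU %d: peer access %s\n",
                   i, j, can ? "yes" : "no");
        }
    }
    return 0;
}
```

Compiled with `nvcc p2p_check.cu -o p2p_check` (the file name is arbitrary), this prints one line per GPU pair; peer access can then be enabled per pair with `cudaDeviceEnablePeerAccess` before issuing direct GPU-to-GPU copies.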

Video: NVIDIA Launches Ampere Data Center GPU

In this video, NVIDIA CEO Jensen Huang announces the first GPU based on the NVIDIA Ampere architecture, the NVIDIA A100. The company’s fastest GPU ever is now in full production and shipping to customers worldwide. “NVIDIA A100 GPU is a 20X AI performance leap and an end-to-end machine learning accelerator – from data analytics to training to inference. For the first time, scale-up and scale-out workloads can be accelerated on one platform. NVIDIA A100 will simultaneously boost throughput and drive down the cost of data centers.”
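
Much of that headline speedup on Ampere comes from TF32 Tensor Core math, which accelerates FP32 matrix workloads without changing the user-visible precision of the API. As a hedged sketch (assuming CUDA 11+ with cuBLAS; the matrix size and contents are arbitrary and not from the announcement), a single-precision GEMM can be opted into TF32 Tensor Core execution like this:

```c
// Hypothetical sketch: route a single-precision GEMM through A100 Tensor
// Cores by enabling TF32 math mode in cuBLAS (available since CUDA 11).
#include <cublas_v2.h>
#include <cuda_runtime.h>
#include <stdio.h>

int main(void) {
    const int n = 1024;                               // arbitrary square size
    float *A, *B, *C;
    cudaMalloc(&A, (size_t)n * n * sizeof(float));
    cudaMalloc(&B, (size_t)n * n * sizeof(float));
    cudaMalloc(&C, (size_t)n * n * sizeof(float));
    cudaMemset(A, 0, (size_t)n * n * sizeof(float));  // placeholder data
    cudaMemset(B, 0, (size_t)n * n * sizeof(float));

    cublasHandle_t handle;
    cublasCreate(&handle);
    // Opt this handle into TF32: FP32 GEMMs now use Tensor Cores on Ampere.
    cublasSetMathMode(handle, CUBLAS_TF32_TENSOR_OP_MATH);

    const float alpha = 1.0f, beta = 0.0f;
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                n, n, n, &alpha, A, n, B, n, &beta, C, n);
    cudaDeviceSynchronize();
    printf("TF32 SGEMM complete\n");

    cublasDestroy(handle);
    cudaFree(A); cudaFree(B); cudaFree(C);
    return 0;
}
```

Link with `-lcublas`. On pre-Ampere GPUs the same code should simply execute as standard FP32, which is why TF32 is positioned as a transparent acceleration path rather than a porting effort.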