

Google Unveils 1st Public Cloud VMs using Nvidia Ampere A100 Tensor GPUs

Google today introduced the Accelerator-Optimized VM (A2) instance family on Google Compute Engine, based on the NVIDIA Ampere A100 Tensor Core GPU launched in mid-May. Available in alpha with up to 16 GPUs per instance, A2 VMs are the first A100-based offering in a public cloud, according to Google. At the A100's launch, Nvidia said the GPU, built on the company's new Ampere architecture, delivers "the greatest generational leap ever," boosting training and inference performance by up to 20x over its predecessors.

DDN Data Storage in 7th-ranked Nvidia Supercomputer

DDN announced that its data infrastructure is used in the NVIDIA supercomputer that took seventh place on the latest TOP500 list, released last week during the ISC 2020 Digital conference. DDN AI400X all-flash systems complement the high-performance capabilities of the NVIDIA DGX A100 cluster, dubbed Selene, the largest industrial supercomputer in the United States. The […]

New, Open DPC++ Extensions Complement SYCL and C++

In this guest article, our friends at Intel discuss how accelerated computing has diversified over the past several years given advances in CPU, GPU, FPGA, and AI technologies. This innovation drives the need for an open and cross-platform language that allows developers to realize the potential of new hardware, minimizes development cost and complexity, and maximizes reuse of their software investments.

Car as ‘Computing Device’: Mercedes-Benz and Nvidia Team to Build Software-defined Vehicles for 2024

Nvidia and Mercedes-Benz today said they plan to create an in-vehicle computing system and AI infrastructure for 2024 Mercedes-Benz vehicles equipped with “upgradable automated driving functions.” The resulting cars and trucks will be capable of automated address-to-address driving of regular routes, such as commutes and repeat deliveries, according to the companies.

Purdue’s ‘Anvil’ to Be Driven by Dell, AMD ‘Milan’ CPUs, Nvidia A100 Tensor Core GPUs

Another in a series of National Science Foundation supercomputing awards has been announced, this one $10 million in funding for a system to be housed at Purdue University, supporting HPC and AI workloads and scheduled to enter production next year. The system, dubbed Anvil, will be built in partnership with Dell and AMD and […]

NCSA’s Upcoming $10M Delta System to Expand Use of GPUs in Scientific Workloads

Delta, a new supercomputer to be deployed before the end of 2021 at the National Center for Supercomputing Applications (NCSA), has as part of its mission the expanded adoption of GPU-accelerated scientific computing. NCSA Director Bill Gropp told us that while NCSA is not new to GPUs (some years ago, staffers there configured a […]

Inspur Launches 5 New AI Servers with NVIDIA A100 Tensor Core GPUs

Inspur released five new AI servers that fully support the new NVIDIA Ampere architecture. The new servers support up to 8 or 16 NVIDIA A100 Tensor Core GPUs, delivering AI computing performance of up to 40 PetaOPS along with non-blocking GPU-to-GPU peer-to-peer bandwidth of up to 600 GB/s. "With this upgrade, Inspur offers the most comprehensive AI server portfolio in the industry, better tackling the computing challenges created by data surges and complex modeling. We expect that the upgrade will significantly boost AI technology innovation and applications."

NVIDIA EGX Platform Brings Real-Time AI to the Edge

NVIDIA announced two powerful products for its EGX Edge AI platform — the EGX A100 for larger commercial off-the-shelf servers and the tiny EGX Jetson Xavier NX for micro-edge servers — delivering high-performance, secure AI processing at the edge. “Large industries can now offer intelligent connected products and services like the phone industry has with the smartphone. NVIDIA’s EGX Edge AI platform transforms a standard server into a mini, cloud-native, secure, AI data center. With our AI application frameworks, companies can build AI services ranging from smart retail to robotic factories to automated call centers.”

NVIDIA Mellanox ConnectX-6 Lx SmartNIC Accelerates Cloud and Enterprise Workloads

Today NVIDIA launched the NVIDIA Mellanox ConnectX-6 Lx SmartNIC — a highly secure and efficient 25/50 gigabit per second (Gb/s) Ethernet smart network interface controller (SmartNIC) — to meet surging growth in enterprise and cloud scale-out workloads. “ConnectX-6 Lx, the 11th generation product in the ConnectX family, is designed to meet the needs of modern data centers, where 25Gb/s connections are becoming standard for handling demanding workflows, such as enterprise applications, AI and real-time analytics.”

Paperspace Joins NVIDIA DGX-Ready Software Program

AI cloud computing company Paperspace announced that Paperspace Gradient is certified under the new NVIDIA DGX-Ready Software program. The program offers proven solutions that complement NVIDIA DGX systems, including the new NVIDIA DGX A100, with certified software that supports the full lifecycle of AI model development. "We developed our NVIDIA DGX-Ready Software program to accelerate AI development in the enterprise," said John Barco, senior director of DGX software product management at NVIDIA. "Paperspace has developed a unique CI/CD approach to building machine learning models that simplifies the process and takes advantage of the power of NVIDIA DGX systems."