

Intel, NSF Name Winners of Wireless Machine Learning Research Funding

Intel and the National Science Foundation (NSF), joint funders of the Machine Learning for Wireless Networking Systems (MLWiNS) program, today announced the recipients of awards for research projects on ultra-dense wireless systems that deliver the throughput, latency and reliability requirements of future applications – including distributed machine learning computations over wireless edge networks. Here are the […]

Car as ‘Computing Device’: Mercedes-Benz and Nvidia Team to Build Software-defined Vehicles for 2024

Nvidia and Mercedes-Benz today said they plan to create an in-vehicle computing system and AI infrastructure for 2024 Mercedes-Benz vehicles equipped with “upgradable automated driving functions.” The resulting cars and trucks will be capable of automated address-to-address driving of regular routes, such as commutes and repeat deliveries, according to the companies.

Lenovo Launches ThinkSystem Servers with GPU Support, Increased NVMe Storage

Lenovo this morning launched two new ThinkSystem servers, the SR860 V2 and SR850 V2, utilizing 3rd Gen Intel Xeon Scalable processors with Intel Deep Learning Boost, and introducing GPU support on the SR860 V2 (four double-wide 300W GPUs or eight single-wide GPUs). The servers also offer increased NVMe storage capacity for handling AI workloads, high-end VDI deployments and data analytics.

Purdue’s ‘Anvil’ to Be Driven by Dell, AMD ‘Milan’ CPUs, Nvidia A100 Tensor Core GPUs

Another in a series of National Science Foundation supercomputing awards has been announced, this one $10 million in funding for a system to be housed at Purdue University to support HPC and AI workloads and scheduled to enter production next year. The system, dubbed Anvil, will be built in partnership with Dell and AMD and […]

NCSA’s Upcoming $10M Delta System to Expand Use of GPUs in Scientific Workloads

Delta, a new supercomputer to be deployed before the end of 2021 at the National Center for Supercomputing Applications (NCSA), has as part of its mission the expanded adoption of GPU-accelerated scientific computing. NCSA Director Bill Gropp told us that while NCSA is not new to GPUs (some years ago, staffers there configured a […]

From Forty Days to Sixty-five Minutes without Blowing Your Budget Thanks to GigaIO FabreX

In this sponsored post, Alan Benjamin, President and CEO of GigaIO, discusses how the ability to attach a group of resources to one server, run the job(s), and then reallocate the same resources to other servers solves a growing problem: the accelerating pace of change in AI and HPC applications is driving the need for ever-faster GPUs and FPGAs to take advantage of new software updates and newly developed applications.

Inspur Launches 5 New AI Servers with NVIDIA A100 Tensor Core GPUs

Inspur released five new AI servers that fully support the new NVIDIA Ampere architecture. The new servers support up to 8 or 16 NVIDIA A100 Tensor Core GPUs, delivering AI computing performance of up to 40 PetaOPS and non-blocking GPU-to-GPU P2P bandwidth of up to 600 GB/s. “With this upgrade, Inspur offers the most comprehensive AI server portfolio in the industry, better tackling the computing challenges created by data surges and complex modeling. We expect that the upgrade will significantly boost AI technology innovation and applications.”

NVIDIA EGX Platform Brings Real-Time AI to the Edge

NVIDIA announced two powerful products for its EGX Edge AI platform — the EGX A100 for larger commercial off-the-shelf servers and the tiny EGX Jetson Xavier NX for micro-edge servers — delivering high-performance, secure AI processing at the edge. “Large industries can now offer intelligent connected products and services like the phone industry has with the smartphone. NVIDIA’s EGX Edge AI platform transforms a standard server into a mini, cloud-native, secure, AI data center. With our AI application frameworks, companies can build AI services ranging from smart retail to robotic factories to automated call centers.”

Paperspace Joins NVIDIA DGX-Ready Software Program

AI cloud computing company Paperspace announced that Paperspace Gradient is certified under the new NVIDIA DGX-Ready Software program. The program offers proven solutions that complement NVIDIA DGX systems, including the new NVIDIA DGX A100, with certified software that supports the full lifecycle of AI model development. “We developed our NVIDIA DGX-Ready Software program to accelerate AI development in the enterprise,” said John Barco, senior director of DGX software product management at NVIDIA. “Paperspace has developed a unique CI/CD approach to building machine learning models that simplifies the process and takes advantage of the power of NVIDIA DGX systems.”

Video: Evolving Cyberinfrastructure, Democratizing Data, and Scaling AI to Catalyze Research Breakthroughs

Nick Nystrom from the Pittsburgh Supercomputing Center gave this talk at the Stanford HPC Conference. “The Artificial Intelligence and Big Data group at Pittsburgh Supercomputing Center converges artificial intelligence and high-performance computing capabilities, empowering research to grow beyond prevailing constraints. The Bridges supercomputer is a uniquely capable resource for empowering research by bringing together HPC, AI and Big Data.”