Improving AI Inference Performance with GPU Acceleration in Aerospace and Defense

The aerospace/defense industry often must solve mission-critical problems as they arise while also planning and designing for the rigors of future workloads. Technology advancements let aerospace/defense agencies gain the benefits of AI, but it’s essential to understand these advancements and the infrastructure requirements for AI training and inference.

Ansys and AMD Team on Simulation of Large Structural Mechanical Models

PITTSBURGH — August 24, 2022 — Ansys (NASDAQ: ANSS) said today that Ansys Mechanical is one of the first commercial finite element analysis (FEA) programs to support AMD Instinct accelerators, AMD's newest GPUs designed for data centers and supercomputers to help solve complex problems. To support the AMD Instinct accelerators, Ansys […]

Verge.io Unveils Virtualized GPU Computing

ANN ARBOR, Mich. — August 16, 2022 — Verge.io, with a mission to offer a simpler way to virtualize data centers, has added new features to its Verge-OS software that are designed to give users the performance of GPUs as virtualized, shared resources. The intent is to create a cost-effective, simple and flexible way to perform GPU-based machine learning, remote […]

Argonne’s Polaris Supercomputer Deployed for Scientific Research

Argonne National Laboratory announced that the Polaris supercomputer, a 44-petaflops HPE system powered by AMD CPUs and NVIDIA GPUs, is now open to the research community. Researchers can apply for computing time through the ALCF's Director's Discretionary allocation program. The system, housed at the Argonne Leadership Computing Facility […]

Lenovo Brings a Decade of Liquid Cooling Experience to the Faster, Denser, Hotter HPC Systems of the Future

[SPONSORED CONTENT] Customers (and vendors) of HPC systems are in constant pursuit of more compute power at equal or greater node density. But with that comes more power consumption, greater heat generation and rising cooling costs. Because of this, the IT business – with a boost from the HPC and hyperscale segments – is spiraling up […]

NVIDIA Announces GA of AI Enterprise 2.1

NVIDIA today announced the general availability of NVIDIA AI Enterprise 2.1, an updated version of its AI and data analytics software suite designed to help enterprises deploy and scale AI applications across bare metal, virtual, container, and cloud environments. NVIDIA said AI Enterprise 2.1 offers advanced data science with the latest NVIDIA RAPIDS and low […]
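For readers unfamiliar with RAPIDS, the sketch below shows what GPU-accelerated data science with the cuDF library typically looks like. It is a minimal illustration only; the file name and column names are hypothetical and not taken from NVIDIA's announcement.

    # Minimal cuDF sketch: pandas-style dataframe work executed on the GPU.
    # "telemetry.csv" and its columns are hypothetical placeholders.
    import cudf

    df = cudf.read_csv("telemetry.csv")            # load straight into GPU memory
    df = df[df["sensor_ok"] == 1]                  # filter rows on the GPU
    summary = df.groupby("aircraft_id").agg(       # aggregate without leaving the GPU
        {"altitude_m": "mean", "speed_mps": "max"}
    )
    print(summary.to_pandas())                     # copy the small result back to the host

The appeal is that the same dataframe idioms used on CPUs run on the GPU with little code change, which is the kind of workflow the AI Enterprise suite packages for enterprise deployment.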

Overcome Form Factor and Field Limitations with AI/HPC Workloads on the Edge

In this sponsored post, our friends over at Silicon Mechanics discuss how form factor, latency, and power can all be key limitations at the edge, and how advancements in technology are enabling higher performance there. For this discussion, the edge means any compute workload taking place outside of both cloud and traditional on-prem data centers.

MLPerf: Latest Results Highlight ‘More Capable ML Training’

Open engineering consortium MLCommons has released new results from MLPerf Training v2.0, which measures how fast various platforms train machine learning models. The organization said the latest MLPerf Training results “demonstrate broad industry participation and up to 1.8X greater performance ultimately paving the way for more capable intelligent systems….” As it has done with previous […]

Scalable Inferencing for Autonomous Trucking

In this sponsored post, Tim Miller, Vice President of Product Marketing at One Stop Systems, discusses autonomous trucking and explains that achieving AI Level 4 (no driver) in these vehicles requires powerful AI inference hardware that supports many different inferencing engines operating and coordinating simultaneously.
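As a rough illustration of the "many engines at once" requirement, the sketch below runs two independent inference sessions concurrently on one GPU using ONNX Runtime's CUDA execution provider. The model files, input shapes, and task names are hypothetical placeholders, not One Stop Systems' actual stack.

    # Hypothetical sketch: two inference engines served concurrently on one GPU.
    # Model files, shapes, and task names are placeholders, not a real AV stack.
    from concurrent.futures import ThreadPoolExecutor
    import numpy as np
    import onnxruntime as ort

    MODELS = {
        "lane_detection": ("lane_detection.onnx", (1, 3, 224, 224)),
        "object_detection": ("object_detection.onnx", (1, 3, 640, 640)),
    }

    # One session per engine; ONNX Runtime releases the GIL during run(),
    # so the threads can overlap their work on the GPU.
    sessions = {
        name: ort.InferenceSession(path, providers=["CUDAExecutionProvider"])
        for name, (path, _shape) in MODELS.items()
    }

    def infer(name):
        path, shape = MODELS[name]
        sess = sessions[name]
        frame = np.random.rand(*shape).astype(np.float32)   # stand-in for a camera frame
        input_name = sess.get_inputs()[0].name
        outputs = sess.run(None, {input_name: frame})
        return name, [o.shape for o in outputs]

    with ThreadPoolExecutor(max_workers=len(MODELS)) as pool:
        for name, out_shapes in pool.map(infer, MODELS):
            print(name, out_shapes)

In a production vehicle the coordination layer would be far more involved (shared sensor pipelines, scheduling, and safety monitoring), but the basic pattern of many concurrent engines sharing accelerator hardware is the same.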

Cerebras Claims Record for Largest AI Models Trained on a Single Device

SUNNYVALE, Calif., June 22, 2022 — AI computing company Cerebras Systems today announced that a single Cerebras CS-2 system is able to train models with up to 20 billion parameters – something not possible on any other single device, according to the company. By enabling a single CS-2 to train these models, Cerebras said […]