Federated GPU Infrastructure for AI Workflows

[Sponsored Guest Article] With the explosion of use cases such as Generative AI and ML Ops driving tremendous demand for the most advanced GPUs and accelerated computing platforms, there’s never been a better time to explore the “as-a-service” model to help get started quickly. What could take months of shipping delays and massive CapEx investments can be yours on demand….

HPC and AI Workloads Drive Storage System Design

Many organizations are tied to outdated storage systems that cannot meet HPC and AI workload needs. Designing high-throughput, highly scalable HPC storage systems requires expert planning and configuration. The Dell Validated Designs for HPC Storage solution offers a way to quickly upgrade antiquated storage….

Improving Product Quality with AI-based Video Analytics: HPE, NVIDIA and Relimetrics Automate Quality Control in European Manufacturing Facility

Manufacturers are using the power of AI and video analytics to enable better quality control and traceability of quality issues, bringing them one step closer to achieving zero defects and reducing the downstream impacts of poor….

PNY Now Offers NVIDIA RTX 6000 Ada Generation for High Performance Computing (HPC) Workloads

The latest generation of graphics processing units (GPUs) from NVIDIA, based on its Ada Lovelace architecture, is optimized for high performance computing (HPC) workloads. The NVIDIA RTX™ 6000 Ada Generation, available from PNY, is designed….

Intel Launches Intel Agilex 7 FPGAs

March 6, 2023 — Intel has launched new FPGAs, the Agilex 7 FPGAs with F-Tile, equipped with what the company said are the fastest FPGA transceivers on the market. Designed to address bandwidth-intensive environments such as data centers and high-speed networks, the FPGAs deliver transceiver data rates of up to 116 gigabits per second (Gbps) and hardened 400 gigabit Ethernet […]

ClearML Certified to Run NVIDIA AI Enterprise Software Suite

Tel Aviv — March 7, 2023 — ClearML, an open-source MLOps platform, today announced it has been certified to run NVIDIA AI Enterprise, an end-to-end platform for building accelerated production AI. ClearML said the certification makes its MLOps platform more efficient across workflows by enabling better utilization of NVIDIA GPUs. It also ensures that ClearML is compatible with and optimized for NVIDIA DGX […]

NVIDIA Announces Market Adoption of H100 GPUs and Quantum-2 InfiniBand, including by Microsoft Azure

SC22, Dallas — NVIDIA today announced broad adoption of its next-generation H100 Tensor Core GPUs and Quantum-2 InfiniBand, including new offerings on Microsoft Azure cloud and more than 50 new partner systems for accelerating scientific discovery. NVIDIA partners described the new offerings at SC22, where the company released updates to its cuQuantum, CUDA and BlueField DOCA acceleration libraries, […]

Recent Results Show HBM Can Make CPUs the Desired Platform for AI and HPC

Third-party performance benchmarks show CPUs with HBM2e memory now have sufficient memory bandwidth and computational capability to match GPU performance on many HPC and AI workloads. Recent Intel and third-party benchmarks provide hard evidence that the upcoming Intel® Xeon® processors codenamed Sapphire Rapids, equipped with high-bandwidth HBM2e memory and Intel® Advanced Matrix Extensions, can match the performance of GPUs for many AI and HPC workloads.
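The bandwidth half of that claim is usually quantified with a STREAM-style kernel. The sketch below is a minimal, illustrative triad microbenchmark, not the official STREAM code or any of the benchmarks cited above; the array size, repetition count, and optional OpenMP pragma are arbitrary choices.

```cpp
// Minimal STREAM-style triad microbenchmark (illustrative sketch only).
// Uses ~1.5 GiB of memory with the size below; adjust n to taste.
#include <chrono>
#include <cstdio>
#include <vector>

int main() {
    const std::size_t n = std::size_t{1} << 26;  // 64M doubles per array
    const int reps = 10;
    std::vector<double> a(n, 0.0), b(n, 2.0), c(n, 1.0);
    const double scalar = 3.0;

    auto t0 = std::chrono::steady_clock::now();
    for (int r = 0; r < reps; ++r) {
        #pragma omp parallel for                 // ignored if OpenMP is not enabled
        for (std::size_t i = 0; i < n; ++i)
            a[i] = b[i] + scalar * c[i];         // triad: 2 loads + 1 store per element
    }
    auto t1 = std::chrono::steady_clock::now();

    const double secs  = std::chrono::duration<double>(t1 - t0).count();
    const double bytes = 3.0 * sizeof(double) * static_cast<double>(n) * reps;
    std::printf("a[0] = %.1f, triad bandwidth: %.1f GB/s\n", a[0], bytes / secs / 1e9);
    return 0;
}
```

Compiled with optimizations (and, ideally, OpenMP enabled), the figure reported on an HBM2e-equipped processor would be expected to land well above that of a conventional DDR-only socket, which is the gap the cited benchmarks build on for bandwidth-bound HPC and AI kernels.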

Successfully Deploy Composable Infrastructure on the Edge to Improve HPC and AI Outside of Traditional Data Centers

Emerging composable disaggregated infrastructure (CDI) technologies allow you to achieve the cost and availability benefits of cloud computing using on-premises networking equipment. You also benefit from extreme flexibility, being able to dynamically recompose systems and support nearly any workload. Thanks to innovative engineering, these benefits are now available on the edge.

Advancing HPC through oneAPI Heterogeneous Programming in Academia & Research

oneAPI is an open, unified, cross-architecture programming model for CPUs and accelerator architectures (GPUs, FPGAs, and others), backed by an open industry effort supported by over 100 organizations. Based on standards, the programming model simplifies software development and delivers uncompromised performance for accelerated compute without proprietary lock-in, while enabling the integration of existing code.
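At the source level, oneAPI centers on SYCL, the standards-based language behind Data Parallel C++. The following is a minimal, illustrative SYCL 2020 vector addition rather than code from any of the programs mentioned above; the problem size and the use of the default device selector are arbitrary choices, and it assumes a oneAPI-compatible compiler (for example, icpx -fsycl).

```cpp
// Minimal SYCL 2020 vector addition (illustrative sketch).
// Build with a oneAPI-compatible compiler, e.g.: icpx -fsycl vadd.cpp
#include <sycl/sycl.hpp>
#include <cstdio>
#include <vector>

int main() {
    constexpr size_t n = 1024;                // arbitrary problem size
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    sycl::queue q;                            // default device: CPU, GPU, or other accelerator
    {
        // Buffers hand the host vectors to the runtime; results are copied
        // back into c when the buffers go out of scope at the end of this block.
        sycl::buffer bufA(a), bufB(b), bufC(c);
        q.submit([&](sycl::handler& h) {
            sycl::accessor A(bufA, h, sycl::read_only);
            sycl::accessor B(bufB, h, sycl::read_only);
            sycl::accessor C(bufC, h, sycl::write_only, sycl::no_init);
            h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
                C[i] = A[i] + B[i];           // same kernel source runs on any supported device
            });
        });
    }   // implicit wait and copy-back here

    std::printf("device: %s, c[0] = %.1f\n",
                q.get_device().get_info<sycl::info::device::name>().c_str(), c[0]);
    return 0;
}
```

The same source can be retargeted at a CPU, GPU, or other supported accelerator simply by changing which device the queue binds to, which is the cross-architecture portability the paragraph above describes.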