NVIDIA Partners With Azure to Build Massive Cloud AI Supercomputer

NVIDIA today announced a multi-year collaboration with Microsoft to build what the companies said will be one of the most powerful AI supercomputers in the world, powered by Microsoft Azure’s supercomputing infrastructure combined with NVIDIA GPUs, networking and its full stack of AI software to help enterprises train, deploy and scale AI. Azure’s cloud-based AI supercomputer includes […]

NVIDIA Announces Market Adoption of H100 GPUs and Quantum-2 InfiniBand, Including by Microsoft Azure

SC22, Dallas — NVIDIA today announced broad adoption of its next-generation H100 Tensor Core GPUs and Quantum-2 InfiniBand, including new offerings on Microsoft Azure cloud and more than 50 new partner systems for accelerating scientific discovery. NVIDIA partners described the new offerings at SC22, where the company released updates to its cuQuantum, CUDA and BlueField DOCA acceleration libraries, […]

TOP500: Frontier Maintains Big Lead, Europe at Nos. 3 and 4, China Quiet

The new TOP500 list of the world’s most powerful supercomputers, released today at the SC22 conference in Dallas, is short on surprises but underlines several significant HPC trends. First, the headline: the HPE-built, AMD-powered Frontier system, which was crowned the world’s first exascale-class system when the previous TOP500 list was released last spring, remains at the top of the list, delivering nearly three times the performance of its nearest rival. Frontier remains at 1.102 exaFLOPS…

SiPearl and AMD in Partnership for Exascale Supercomputing in Europe

Maisons-Laffitte (France), 14th November 2022 – SiPearl, the HPC microprocessor designer for European supercomputers, and AMD announced a joint offering for exascale supercomputing in Europe combining SiPearl’s HPC microprocessor, Rhea, with AMD Instinct accelerators. Those accelerators, along with AMD EPYC CPUs, power Frontier, the world’s first exascale-class system, at Oak Ridge National Laboratory. Initially, […]

At SC22: New Offerings from NetApp-NVIDIA Partnership for Scalable, Flexible AI Deployments

Managers from two AI powerhouses, NVIDIA and NetApp, sat down with us to talk about the companies’ multi-year partnership and their latest joint solutions, which are considerable. From NetApp we have Firmware Engineer Chris Weber, and from NVIDIA we have Shawn Kaiser, Senior Product Manager. They discuss a slew of advancements across AI-related hardware and software capabilities, including joint activity around NVIDIA’s DGX SuperPOD along with BasePOD, an expansion […]

Overcoming Challenges to Deep Learning Infrastructure

With use cases like computer vision, natural language processing, predictive modeling, and much more, deep learning (DL) provides the kinds of far-reaching applications that change the way technology can impact human existence. The possibilities are limitless, and we’ve just scratched the surface of its potential. There are three significant obstacles to be aware of when designing a deep learning infrastructure: scalability, customization for each workload, and workload performance optimization.

Recent Results Show HBM Can Make CPUs the Desired Platform for AI and HPC

Third-party performance benchmarks show that CPUs with HBM2e memory now have sufficient memory bandwidth and computational capability to match GPU performance on many HPC and AI workloads. Recent Intel and third-party benchmarks provide hard evidence that the upcoming Intel® Xeon® processors codenamed Sapphire Rapids, with high bandwidth memory (HBM2e) and Intel® Advanced Matrix Extensions, can match the performance of GPUs for many AI and HPC workloads.
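
To make the bandwidth argument concrete, below is a minimal STREAM-style triad sketch in C++ (an illustrative example of ours, not one of the cited benchmarks). Loops of this shape move far more bytes than they compute, so sustained memory bandwidth, not peak arithmetic throughput, sets the performance ceiling; that is exactly where HBM2e-equipped CPUs narrow the gap with GPUs.

```cpp
// STREAM-style triad: a bandwidth-bound kernel (illustrative sketch only).
#include <chrono>
#include <cstdio>
#include <vector>

int main() {
  const size_t n = 1 << 26;                  // ~64M elements per array
  std::vector<double> a(n), b(n, 1.0), c(n, 2.0);
  const double scalar = 3.0;

  auto t0 = std::chrono::steady_clock::now();
  #pragma omp parallel for                   // compile with -fopenmp; otherwise runs serially
  for (size_t i = 0; i < n; ++i)
    a[i] = b[i] + scalar * c[i];             // 2 loads + 1 store per iteration, 2 flops
  auto t1 = std::chrono::steady_clock::now();

  double secs   = std::chrono::duration<double>(t1 - t0).count();
  double gbytes = 3.0 * n * sizeof(double) / 1e9;   // bytes moved, in GB
  std::printf("Triad bandwidth: %.1f GB/s\n", gbytes / secs);
  return 0;
}
```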

HPE Introduces ProLiant Gen11 Servers for On-prem or via GreenLake As-a-Service

Hewlett Packard Enterprise (NYSE: HPE) today announced new ProLiant Gen11 servers available for on-premises infrastructures or through HPE’s GreenLake as-a-service platform. The new servers are designed for compute- and data-intensive workloads, such as AI, machine learning, analytics, rendering, Virtual Desktop Infrastructure (VDI) and virtualization. The servers support several architectures, including 4th Generation AMD EPYC processors, […]

Successfully Deploy Composable Infrastructure on the Edge to Improve HPC and AI Outside of Traditional Data Centers

Emerging composable disaggregated infrastructure (CDI) technologies allow you to achieve the cost and availability benefits of cloud computing using on-premises networking equipment. You also benefit from extreme flexibility: systems can be dynamically recomposed to support nearly any workload. Thanks to innovative engineering, these benefits are now available on the edge.

Advancing HPC through oneAPI Heterogeneous Programming in Academia & Research

oneAPI is an open industry effort supported by more than 100 organizations. It is an open, unified, cross-architecture programming model for CPUs and accelerator architectures (GPUs, FPGAs, and others). Based on standards, the programming model simplifies software development and delivers uncompromised performance for accelerated computing without proprietary lock-in, while enabling the integration of existing code.
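
As a flavor of what the programming model looks like in practice, here is a minimal SYCL/C++ vector-add sketch (our own illustrative example, not code from the article). The same source can run on a CPU, GPU, or FPGA depending on which device the runtime selects; with Intel’s oneAPI toolkit it can be built with, for example, icpx -fsycl.

```cpp
// Minimal SYCL vector add: one source, any supported device.
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
  constexpr size_t N = 1024;
  std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);

  sycl::queue q;  // default selector picks the best available device (GPU, CPU, ...)
  std::cout << "Running on: "
            << q.get_device().get_info<sycl::info::device::name>() << "\n";

  {
    // Buffers manage data movement between host and device automatically.
    sycl::buffer<float> ab(a), bb(b), cb(c);
    q.submit([&](sycl::handler& h) {
      sycl::accessor A(ab, h, sycl::read_only);
      sycl::accessor B(bb, h, sycl::read_only);
      sycl::accessor C(cb, h, sycl::write_only, sycl::no_init);
      h.parallel_for(sycl::range<1>(N), [=](sycl::id<1> i) {
        C[i] = A[i] + B[i];  // each work-item computes one element
      });
    });
  }  // buffer destructors wait for the kernel and copy results back to c

  std::cout << "c[0] = " << c[0] << "\n";  // expect 3
  return 0;
}
```

The buffer/accessor model shown here is one of SYCL 2020’s two memory models; unified shared memory (USM) pointers are the other, and existing C++ code can typically adopt either incrementally.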