PCIe 5.0 will supercharge AI at the Edge if it’s done right

PCIe Gen 5 is a key technology for driving transportable or edge AI systems to higher performance, especially those with demanding space, environmental or cooling needs. But AI program managers should evaluate their technology suppliers’ Gen 5 implementations to ensure they fully realize the technology’s benefits.

NVIDIA Partners With Azure to Build Massive Cloud AI Supercomputer

NVIDIA today announced a multi-year collaboration with Microsoft to build what the companies said will be one of the most powerful AI supercomputers in the world, powered by Microsoft Azure’s supercomputing infrastructure combined with NVIDIA GPUs, networking and AI software stack to help enterprises train, deploy and scale AI. Azure’s cloud-based AI supercomputer includes […]

650 Group Research: Data Center Interconnect Semiconductor Market to Approach $25 Billion in 2027

INCLINE VILLAGE, Nevada, November 15, 2022 – According to a new report published by 650 Group, “Interconnect Semiconductor Market 2022-2027,” the worldwide market for semiconductors used for interconnect in data centers will approach $25 billion in revenue by 2027. Interconnect functionality is the critical infrastructure that allows systems and semiconductors to talk […]

@HPCpodcast at SC22: An Analysis of the New TOP500 List

This special SC22 edition looks at the new TOP500 list of the world’s most powerful supercomputers, released today. It marks the 60th edition of the list, representing 30 years of systematic data on the highest-performing computer architectures and configurations. While this TOP500 is not full of surprises, there’s a new no. 1 at the top of the GREEN500, and across all the categories of the list there’s always important historical data and valuable tea leaves pointing to future trends….

At SC22: New Offerings from NetApp-NVIDIA Partnership for Scalable, Flexible AI Deployments

Managers from two AI powerhouses, NVIDIA and NetApp, sat down with us to talk about the multi-year partnership between the two companies and their latest joint solutions, which are considerable. From NetApp we have Firmware Engineer Chris Weber, and from NVIDIA we have Shawn Kaiser, Senior Product Manager. They discuss a slew of advancements across AI-related hardware and software capabilities, including joint activity around NVIDIA’s DGX SuperPOD along with BasePOD, an expansion

Overcoming Challenges to Deep Learning Infrastructure

With use cases like computer vision, natural language processing, predictive modeling, and much more, deep learning (DL) provides the kinds of far-reaching applications that change the way technology can impact human existence. The possibilities are limitless, and we’ve just scratched the surface of its potential. There are three significant obstacles for you to be aware of when designing a deep learning infrastructure: scalability, customizing for each workload, and optimizing workload performance.

Recent Results Show HBM Can Make CPUs the Desired Platform for AI and HPC

Third-party performance benchmarks show that CPUs with HBM2e memory now have sufficient memory bandwidth and computational capability to match GPU performance on many HPC and AI workloads. Recent Intel and third-party benchmarks provide hard evidence that the upcoming Intel® Xeon® processors codenamed Sapphire Rapids, equipped with fast, high-bandwidth HBM2e memory and Intel® Advanced Matrix Extensions, can match the performance of GPUs for many AI and HPC workloads.

Immersion Cooling for Transportable HPC

In this sponsored post from our friends over at One Stop Systems, Product Marketing Manager Braden Cooper discusses how the latest high-performance computing systems for AI applications generate more heat than ever before. Data centers have begun adopting immersion cooling solutions that submerge temperature-sensitive electronics in a non-conductive fluid that efficiently dissipates the heat.

Advancing HPC through oneAPI Heterogeneous Programming in Academia & Research

oneAPI is an open industry effort, supported by more than 100 organizations, that provides a unified, cross-architecture programming model for CPUs and accelerator architectures (GPUs, FPGAs, and others). Based on standards, the programming model simplifies software development and delivers uncompromised performance for accelerated compute without proprietary lock-in, while enabling the integration of existing code.
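
As a rough illustration of what cross-architecture code under this model can look like, here is a minimal vector-add sketch in SYCL, the C++ abstraction on which oneAPI’s DPC++ compiler is based. The device selection, problem size, and kernel are illustrative assumptions, not details taken from the article above.

#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
  constexpr size_t N = 1024;                        // illustrative problem size
  std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);

  // Default queue: the runtime picks an available device (CPU, GPU, or FPGA).
  sycl::queue q;

  {
    // Buffers wrap host memory; the runtime manages transfers to the device.
    sycl::buffer<float> bufA(a.data(), sycl::range<1>(N));
    sycl::buffer<float> bufB(b.data(), sycl::range<1>(N));
    sycl::buffer<float> bufC(c.data(), sycl::range<1>(N));

    q.submit([&](sycl::handler &h) {
      sycl::accessor A(bufA, h, sycl::read_only);
      sycl::accessor B(bufB, h, sycl::read_only);
      sycl::accessor C(bufC, h, sycl::write_only, sycl::no_init);
      // The same kernel source runs on any supported architecture.
      h.parallel_for(sycl::range<1>(N), [=](sycl::id<1> i) {
        C[i] = A[i] + B[i];
      });
    });
  } // Buffers go out of scope here, so results are copied back to the host.

  std::cout << "c[0] = " << c[0] << std::endl;      // expect 3
  return 0;
}

The same source can typically be compiled with a SYCL-capable toolchain such as oneAPI’s DPC++ and retargeted across devices without rewriting the kernel, which is the portability the paragraph above describes.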

A Look Inside the AMD-HPE Blade that Drives Frontier, the World’s First Exascale Supercomputer

[SPONSORED CONTENT] The new number 1 supercomputer in the world, the AMD-powered and HPE-built Frontier, is celebrated today, Exascale Day, as the world’s first exascale (a billion billion calculations per second) HPC system. Frontier was recognized at last spring’s ISC conference in Hamburg for having exceeded the exascale barrier, and a display of the Frontier blade in HPE’s ISC booth was a focus of attention on the conference floor. We thought it would be interesting to sit down with two senior officials from AMD and HPE to talk about the Frontier blade, what’s in it, its design innovations and the anticipated, long-term impacts of the blade on leadership supercomputing and on systems used by the broader HPC industry.