Advancing HPC through oneAPI Heterogeneous Programming in Academia & Research

oneAPI is an open, standards-based, unified programming model for CPUs and accelerator architectures (GPUs, FPGAs, and others), backed by an open industry effort supported by more than 100 organizations. The programming model simplifies software development and delivers performance for accelerated compute without proprietary lock-in, while enabling the integration of existing code.
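For readers new to the model, below is a minimal, illustrative sketch of a oneAPI-style SYCL kernel; it is not taken from the program above, and the vector-add workload, sizes, and variable names are our own hypothetical choices. With the oneAPI DPC++/C++ compiler, a file like this would typically be built with something like `icpx -fsycl`, and the same source can target a CPU, GPU, or FPGA device.

```cpp
// Minimal SYCL vector-add sketch (illustrative only; assumes a SYCL 2020 toolchain such as oneAPI DPC++).
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    constexpr size_t N = 1024;
    std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);

    // The default selector picks whatever device is available: CPU, GPU, or FPGA (emulator).
    sycl::queue q{sycl::default_selector_v};

    {
        // Buffers hand the data to the SYCL runtime for the lifetime of this scope.
        sycl::buffer<float> buf_a(a.data(), sycl::range<1>(N));
        sycl::buffer<float> buf_b(b.data(), sycl::range<1>(N));
        sycl::buffer<float> buf_c(c.data(), sycl::range<1>(N));

        q.submit([&](sycl::handler& h) {
            sycl::accessor acc_a(buf_a, h, sycl::read_only);
            sycl::accessor acc_b(buf_b, h, sycl::read_only);
            sycl::accessor acc_c(buf_c, h, sycl::write_only, sycl::no_init);
            // The same kernel source runs on any device the queue targets.
            h.parallel_for(sycl::range<1>(N), [=](sycl::id<1> i) {
                acc_c[i] = acc_a[i] + acc_b[i];
            });
        });
    } // Buffer destruction copies results back to the host vector c.

    std::cout << "c[0] = " << c[0] << "\n"; // Expected: 3
    return 0;
}
```

The cross-architecture claim rests on exactly this pattern: the device selection is a runtime decision on the queue, while the kernel body itself is ordinary standard C++.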

A Look Inside the AMD-HPE Blade that Drives Frontier, the World’s First Exascale Supercomputer

[SPONSORED CONTENT]  The new number one supercomputer in the world, the AMD-powered, HPE-built Frontier, is celebrated today, Exascale Day, as the world’s first exascale (a billion billion calculations per second) HPC system. After the system was recognized at last spring’s ISC conference in Hamburg for exceeding the exascale barrier, a display of the Frontier blade in HPE’s booth drew attention on the conference floor. We sat down with two senior officials from AMD and HPE to talk about the Frontier blade: what’s in it, its design innovations, and its anticipated long-term impact on leadership supercomputing and on systems used by the broader HPC industry.

AWS Announces GA of EC2 Trn1 Instances for ML Model Training 

SEATTLE — Oct. 10, 2022 — Amazon Web Services today announced the general availability of Amazon Elastic Compute Cloud (Amazon EC2) Trn1 instances powered by AWS-designed Trainium chips. Trn1 instances are built for high-performance training of machine learning models in the cloud. AWS said the offering delivers up to 50 percent cost-to-train savings over comparable […]

Supercharging Modern Data Centers with NVIDIA Networking Solutions

In this webinar sponsored by PNY Technologies, you will learn the benefits, functionalities, and key features of NVIDIA Spectrum Accelerated Ethernet technology and how it optimizes modern applications end to end, from core to cloud to edge.

‘Shaheen III’: KAUST Selects HPE Cray EX HPC-AI Supercomputer with NVIDIA and AMD Chips

Hewlett Packard Enterprise has announced that King Abdullah University of Science and Technology (KAUST) selected HPE to build its next-generation supercomputer, “Shaheen III,” using the HPE Cray EX supercomputer platform. The company said the system will be fully operational in 2023. Seven HPE Cray EX4000 cabinets will include 704 GPU compute nodes, and each node will […]

Accelerating the Modern Data Center – Gear Up for AI

Modern applications are transforming every business. From AI for better customer engagement, to data analytics for forecasting, to advanced visualization for product innovation, the need for accelerated computing is rapidly increasing. But enterprises face challenges with using existing infrastructure to power these applications.

Improving AI Inference Performance with GPU Acceleration in Aerospace and Defense

The aerospace/defense industry often must solve mission-critical problems as they arise while also planning and designing for the rigors of future workloads. Technology advancements let aerospace/defense agencies gain the benefits of AI, but it’s essential to understand these advancements and the infrastructure requirements for AI training and inference.

Photonics Company Lightmatter Names Google TPU Engineer Richard Ho VP of Hardware Engineering

BOSTON — Photonics company Lightmatter has named Richard Ho its new Vice President of Hardware Engineering. Ho spent nearly nine years at Google leading the Cloud Tensor Processing Unit (TPU) project. At Lightmatter, he will spearhead the company’s chip engineering division, with a focus on developing and deploying Lightmatter’s photonic AI accelerator and wafer-scale interconnect, designed for […]

Overcome Form Factor and Field Limitations with AI/HPC Workloads on the Edge

In this sponsored post, our friends over at Silicon Mechanics discuss how form factor, latency, and power can all be key limitations at the edge, and how advancements in technology now allow higher performance there. For this discussion, the edge means any compute workload taking place outside of both cloud and traditional on-prem data centers.

HPE and Cerebras to Install AI Supercomputer at Leibniz Supercomputing Centre

The Leibniz Supercomputing Centre (LRZ), Cerebras Systems, and Hewlett Packard Enterprise (HPE) today announced the joint development of a system designed to accelerate scientific research and innovation in AI at LRZ, an institute of the Bavarian Academy of Sciences and Humanities (BAdW). The system is purpose-built for scientific research and is comprised […]