This week Lawrence Livermore National Laboratory broke ground on a modular and sustainable supercomputing facility that will provide a flexible infrastructure able to accommodate the Laboratory’s growing demand for HPC.
[SPONSORED GUEST ARTICLE] How exascale systems have been stood up has been recounted in detail, as has the dramatic moment when Frontier achieved exascale status. Now the focus has shifted to the work research organizations are doing with exascale and how it's actually changing the world …
Today, virtually every high-performance computing (HPC) workload running anywhere in the world faces the same crippling issue: congestion in the network.
Congestion can delay workload completion times for crucial scientific and enterprise workloads, making HPC systems unpredictable and leaving high-cost cluster resources waiting for delayed data to arrive. Despite various brute-force attempts to resolve the congestion issue, the problem has persisted. Until now.
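To make the effect concrete, below is a minimal, hypothetical sketch (in Python, not taken from the paper) of a bulk-synchronous job in which every iteration waits on the slowest of its N per-rank messages. The rank counts, latencies, and congestion probability are illustrative assumptions only, chosen to show how a small chance of a heavily delayed message can dominate completion time.

```python
# Minimal sketch (illustrative assumptions, not measured values): a toy model of
# a bulk-synchronous HPC job in which every iteration must wait for the slowest
# of N per-rank message transfers before the next step can begin.
import random

def iteration_time(n_ranks: int, congested: bool) -> float:
    """Time (ms) for one compute + exchange step, gated by the slowest message."""
    compute = 1.0  # assumed local compute time per rank, in ms
    delays = []
    for _ in range(n_ranks):
        latency = random.uniform(0.05, 0.10)  # assumed nominal network latency, ms
        if congested and random.random() < 0.02:
            # Rare congestion event: a message sits queued behind other traffic.
            latency += random.uniform(2.0, 10.0)
        delays.append(latency)
    return compute + max(delays)  # the step ends when the last message lands

def job_time(n_ranks: int, n_iters: int, congested: bool) -> float:
    """Total runtime of the job across all synchronized iterations."""
    return sum(iteration_time(n_ranks, congested) for _ in range(n_iters))

if __name__ == "__main__":
    random.seed(0)
    clean = job_time(n_ranks=1024, n_iters=500, congested=False)
    busy = job_time(n_ranks=1024, n_iters=500, congested=True)
    print(f"uncongested network: {clean:8.1f} ms")
    print(f"congested network:   {busy:8.1f} ms  ({busy / clean:.1f}x slower)")
```

Because each step completes only when the last message arrives, it is the tail of the latency distribution, not the average, that sets the pace of the whole job; that is the tail-latency effect the points below refer to.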
In this paper, Matthew Williams, CTO at Rockport Networks, explains how recent innovations in networking technologies have led to a new network architecture that targets the root causes of HPC network congestion, specifically:
– Why today’s network architectures are not a sustainable approach to HPC workloads
– How HPC workload congestion and latency issues are directly tied to the network architecture
– Why a direct interconnect network architecture minimizes congestion and tail latency