In this video from the Exascale Computing in Astrophysics Conference, Tom Quinn from the University of Washington presents: Pathways to Exascale N-body Simulations.
See more talks from the Exascale Computing in Astrophysics Conference.
[SPONSORED GUEST CONTENT] Scheduled Ethernet is emerging as a viable alternative to InfiniBand for AI networking. Why? Because it offers comparable performance with greater flexibility and cost-effectiveness.
Today, every high-performance computing (HPC) workload running globally faces the same crippling issue: congestion in the network.
Congestion can delay workload completion times for crucial scientific and enterprise workloads, making HPC systems unpredictable and leaving high-cost cluster resources waiting for delayed data to arrive. Despite various brute-force attempts to resolve the congestion issue, the problem has persisted. Until now.
In this paper, Matthew Williams, CTO at Rockport Networks, explains how recent innovations in networking technologies have led to a new network architecture that targets the root causes of HPC network congestion, specifically:
– Why today’s network architectures are not a sustainable approach to HPC workloads
– How HPC workload congestion and latency issues are directly tied to the network architecture
– Why a direct interconnect network architecture minimizes congestion and tail latency (a brief illustration of tail latency follows below)
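Tail latency is the key metric here. A minimal, hypothetical Python sketch (not Rockport's method; the latency distribution is invented purely for illustration) shows why the 99th percentile, rather than the mean, governs completion time for tightly synchronized HPC workloads:

```python
# Illustrative sketch only. It shows why "tail latency" (here the 99th
# percentile) matters more than the mean in HPC: in a bulk-synchronous
# workload, every rank waits for the slowest message, so a few congested
# messages stall the whole job.
import random
import statistics

random.seed(42)

def message_latency_us() -> float:
    # Typical fabric latency in microseconds, plus occasional
    # congestion-induced stragglers (a heavy tail). Parameters invented.
    base = random.gauss(2.0, 0.2)
    if random.random() < 0.02:              # 2% of messages hit congestion
        base += random.expovariate(1 / 50)  # long queuing delay
    return max(base, 0.5)

samples = sorted(message_latency_us() for _ in range(100_000))
mean = statistics.fmean(samples)
p99 = samples[int(0.99 * len(samples))]

print(f"mean latency: {mean:.1f} us")  # dominated by the fast common case
print(f"p99  latency: {p99:.1f} us")   # dominated by congested stragglers
```

The gap between the mean and the p99 is the point: in a real fabric the tail comes from queuing at congested links, and spreading traffic across the many paths of a direct interconnect is aimed precisely at keeping that tail short.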
If you follow Taiwan-based technology analyst Dan Nystedt on social media, then you’re familiar with his insightful commentary and ground-level perspectives on the global tech scene. Dan is vice president of research at TriOrient Investments, a private institutional investor active in Asian markets ….
