2020 Predictions from Radio Free HPC

In this podcast, the Radio Free HPC team lays out their tech predictions for 2020. “Henry predicts that we’ll see a RISC-V based supercomputer on the TOP500 list by the end of 2020 – a gutsy call. This is a double-down on a bet that Dan and Henry already have, so he’s reinforcing his position. Dan also sees 2020 as the ‘Year of the FPGA.’”

Radio Free HPC Recaps SC19

In this podcast, the Radio Free HPC team looks back at the “State Fair for Nerds” that was SC19. “At this year’s conference, we not only learned the latest discoveries in our evolving field – but also celebrated the countless ways in which HPC is improving our lives … our communities … our world. So many people worked together to make SC19 possible: more than 780 volunteers, 370 exhibitors, 1,150 presenters, and a record 13,950 attendees.”

HDR 200Gb/s InfiniBand Sees Major Growth on Latest TOP500 List

Today the InfiniBand Trade Association (IBTA) reported the latest TOP500 List results show that HDR 200Gb/s InfiniBand accelerates 31 percent of new InfiniBand-based systems on the List, including the fastest TOP500 supercomputer built this year. The results also highlight InfiniBand’s continued position in the top three supercomputers in the world and acceleration of six of the top 10 systems. Since the TOP500 List release in June 2019, InfiniBand’s presence has increased by 12 percent, now accelerating 141 supercomputers on the List.

Mellanox Rocks the TOP500 with HDR InfiniBand at SC19

In this video, Gilad Shainer from Mellanox describes the company’s dominant results on the TOP500 list of the world’s fastest supercomputers. “At SC19, Mellanox announced that 200 gigabit per second HDR InfiniBand accelerates 31% of the new 2019 InfiniBand systems on November’s TOP500 supercomputing list, demonstrating market demand for faster data speeds and smart interconnect technologies. Moreover, HDR InfiniBand connects the fastest TOP500 supercomputer built in 2019.”

Mellanox LongReach Appliance Extends InfiniBand Connectivity up to 40 Kilometers

Today Mellanox introduced the Mellanox Quantum LongReach series of long-distance InfiniBand switches. Mellanox Quantum LongReach systems provide the ability to seamlessly connect remote InfiniBand data centers together, or to provide high-speed and full RDMA (remote direct memory access) connectivity between remote compute and storage infrastructures. Based on the 200 gigabit HDR Mellanox Quantum InfiniBand switch, the LongReach solution provides up to two long-reach InfiniBand ports and eight local InfiniBand ports. The long reach ports can deliver up to 100Gb/s data throughput for distances of 10 and 40 kilometers.
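At those distances, propagation delay in the fiber itself becomes a meaningful part of end-to-end latency. As a rough illustration (not from the announcement, and assuming the common rule of thumb that light travels at about two-thirds of its vacuum speed in silica fiber):

```python
# Estimate one-way propagation delay over a long-haul fiber link.
# Assumption: signal speed in fiber is ~2/3 the vacuum speed of light.
C_VACUUM = 299_792_458            # m/s
FIBER_SPEED = C_VACUUM * 2 / 3    # ~2e8 m/s, typical for silica fiber

def one_way_delay_us(distance_km: float) -> float:
    """Return the one-way propagation delay in microseconds."""
    return distance_km * 1000 / FIBER_SPEED * 1e6

for km in (10, 40):
    print(f"{km} km: ~{one_way_delay_us(km):.0f} us one-way")  # ~50 us and ~200 us
```

So even with full HDR-class bandwidth available, a 40 km LongReach link adds on the order of 200 microseconds of one-way latency from physics alone.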

Call for Sessions: OpenFabrics Alliance Workshop in March

The OpenFabrics Alliance (OFA) has published a Call for Sessions for its 16th annual OFA Workshop. “The OFA Workshop 2020 Call for Sessions encourages industry experts and thought leaders to help shape this year’s discussions by presenting or leading discussions on critical high-performance networking issues. Session proposals are being solicited in any area related to high performance networks and networking software, with a special emphasis on the topics for this year’s Workshop. In keeping with the Workshop’s emphasis on collaboration, proposals for Birds of a Feather sessions and panels are particularly encouraged.”

Building Oracle Cloud Infrastructure with Bare-Metal

In this video, Taylor Newill from Oracle describes how the Oracle Cloud Infrastructure delivers high performance for HPC applications. “From the beginning, Oracle built their bare-metal cloud with a simple goal in mind: deliver the same performance in the cloud that clients are seeing on-prem.”

Video: InfiniBand In-Network Computing Technology and Roadmap

Rich Graham from Mellanox gave this talk at the UK HPC Conference. “In-Network Computing transforms the data center interconnect into a ‘distributed CPU’ and ‘distributed memory,’ making it possible to overcome performance barriers and enable faster, more scalable data analysis. HDR 200G InfiniBand In-Network Computing technology includes several elements – Scalable Hierarchical Aggregation and Reduction Protocol (SHARP), smart Tag Matching, the rendezvous protocol, and more. This session will discuss the InfiniBand In-Network Computing technology and performance results, as well as a view of the future roadmap.”
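The core idea behind SHARP-style hierarchical aggregation can be sketched in a few lines: partial results are combined level by level in a tree of switches, so each upstream link carries one aggregated value rather than one message per endpoint. The sketch below is purely conceptual (it is not Mellanox’s implementation, and the fanout parameter is an illustrative assumption):

```python
# Conceptual sketch of a hierarchical (tree) reduction, the idea behind
# in-network aggregation protocols such as SHARP: each "switch" at a given
# level combines the partial sums of up to `fanout` children, so the root
# receives one value per subtree instead of one message per endpoint.

def tree_reduce(values, fanout=2):
    """Reduce a list of values level by level with the given switch fanout."""
    level = list(values)
    while len(level) > 1:
        # One aggregated value per group of `fanout` children.
        level = [sum(level[i:i + fanout]) for i in range(0, len(level), fanout)]
    return level[0]

print(tree_reduce(range(8)))  # same result as sum(range(8)), i.e. 28
```

The result is identical to a flat reduction; the win is in where the work happens – aggregation in the switch tree cuts the data crossing each link and offloads the CPU, which is why collectives like MPI_Allreduce benefit.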

Designing Scalable HPC, Deep Learning, Big Data, and Cloud Middleware for Exascale Systems

DK Panda from Ohio State University gave this talk at the UK HPC Conference. “This talk will focus on challenges in designing HPC, Deep Learning, Big Data and HPC Cloud middleware for Exascale systems with millions of processors and accelerators. For the HPC domain, we will discuss the challenges in designing runtime environments for MPI+X (PGAS – OpenSHMEM/UPC/CAF/UPC++, OpenMP, and CUDA) programming models, taking into account support for multi-core systems (Xeon, ARM, and OpenPOWER), high-performance networks, and GPGPUs (including GPUDirect RDMA).”

Harvard Names New Lenovo HPC Cluster after Astronomer Annie Jump Cannon

Harvard has deployed a liquid-cooled supercomputer from Lenovo at its FASRC computing center. The system, named “Cannon” in honor of astronomer Annie Jump Cannon, is a large-scale HPC cluster supporting scientific modeling and simulation for thousands of Harvard researchers. “This new cluster will have 30,000 cores of Intel 8268 “Cascade Lake” processors. Each node will have 48 cores and 192 GB of RAM.”
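Taking the quoted figures at face value, the node count and aggregate memory follow directly (a back-of-the-envelope check, not from the announcement):

```python
# Back-of-the-envelope sizing for the Cannon cluster,
# using only the figures quoted above.
total_cores = 30_000       # Intel 8268 "Cascade Lake" cores
cores_per_node = 48        # cores per node
ram_gb_per_node = 192      # GB of RAM per node

nodes = total_cores // cores_per_node
total_ram_gb = nodes * ram_gb_per_node
print(f"{nodes} nodes, {total_ram_gb:,} GB total RAM")  # 625 nodes, 120,000 GB
```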