AMD Showcases 1 Petaflop “Project 47” Rack at SIGGRAPH

“Project 47 boasts 1 PetaFLOPS of compute power at full 32-bit precision delivering a stunning 30 GigaFLOPS/W, demonstrating dramatic compute efficiency. It boasts more cores, threads, compute units, IO lanes and memory channels in use at one time than in any other similarly configured system ever before. The incredible performance-per-dollar and performance-per-watt of Project 47 makes supercomputing a more affordable reality than ever before, whether for machine learning, virtualization or rendering.”
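
The two headline figures are mutually consistent, and dividing peak throughput by efficiency gives the rack's implied power envelope. This is a back-of-the-envelope check from the quoted numbers, not an AMD-published specification:

\[
\frac{1~\text{PFLOPS}}{30~\text{GFLOPS/W}} = \frac{10^{6}~\text{GFLOPS}}{30~\text{GFLOPS/W}} \approx 3.3\times 10^{4}~\text{W} \approx 33~\text{kW}
\]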

Interview: Hot Interconnects Conference to Focus on Next-Generation Networks

The Hot Interconnects Conference is coming up Aug. 28-30 in Santa Clara. To learn more, we caught up with Program Chairs Ryan Grant and Jitu (Jitendra) Padhye. “Hot Interconnects brings together members of the industrial, academic and broader research community to unveil the very latest advances in network technologies as well as to discuss ideas for future-generation interconnects. Unlike other conferences, Hot Interconnects focuses only on the latest, most topical subjects and concentrates on technologies that will be available for deployment in the near future.”

Agenda Posted: August MVAPICH User Group Meeting in Ohio

The MVAPICH User Group Meeting (MUG) has posted its meeting agenda. The event takes place August 14-16, 2017 in Columbus, Ohio. “As the annual gathering of MVAPICH2 users, researchers, developers, and system administrators, the MUG event includes Keynote Talks, Invited Tutorials, Invited Talks, Contributed Presentations, Open MIC session, and hands-on sessions.”
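
For readers new to the project, MVAPICH2 is an open-source MPI implementation tuned for InfiniBand and other high-performance fabrics, and the hands-on sessions typically start from a program like the minimal sketch below. It uses only standard MPI calls, nothing MVAPICH-specific; the hostfile name is illustrative:

```c
/* hello_mpi.c - minimal MPI program.
 * Build with MVAPICH2's compiler wrapper:  mpicc hello_mpi.c -o hello_mpi
 * Launch across nodes (MVAPICH2's launcher): mpirun_rsh -np 4 -hostfile hosts ./hello_mpi */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank in the job */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of ranks */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}
```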

Agenda Posted: Hot Interconnects Conference in Santa Clara

The Hot Interconnects conference has posted its program agenda. The event takes place Aug. 28-30 in Santa Clara, California. “Join us for our 25th year of an information-packed three-day Symposium about the latest in High Performance Interconnects. IEEE Hot Interconnects is the premier international forum for researchers and developers of state-of-the-art hardware and software architectures and implementations for interconnection networks of all scales, ranging from multi-core on-chip interconnects to those within systems, clusters, and data centers. Leaders in industry and academia attend the conference to interact with individuals at the forefront of this field.”

Dell Builds Bracewell Supercomputer for Bionic Vision Research at CSIRO in Australia

Today CSIRO, Australia’s top science agency, announced the deployment of a new Dell EMC supercomputer, kicking off a new generation of research in artificial intelligence. “This new system will provide the greater scale and processing power we need to build our computer vision systems by optimizing processing over broader scenarios, represented by much larger sets of images, to help train the software to understand and represent the world. We’ll be able to take our computer vision research to the next level, solving problems by leveraging large-scale image data in ways that most labs around the world can’t,” said Associate Professor Barnes.

Intersect360 Research Site Census Looks at HPC Interconnects

Intersect360 Research has posted an Executive Summary of its most recent HPC User Site Census covering interconnect suppliers. “The report provides an examination of the server interconnects and network characteristics found at a sample of HPC user sites. Intersect360 Research surveyed a broad range of users about their current computer system installations, storage systems, networks, middleware, and software supporting these computer installations.”

InfiniBand Continues Momentum on Latest TOP500

Today the InfiniBand Trade Association (IBTA) announced that InfiniBand remains the most widely used HPC interconnect on the TOP500, and that the majority of newly listed TOP500 supercomputers are connected by InfiniBand. These results reflect continued industry demand for InfiniBand’s combination of high network bandwidth, low latency, scalability and efficiency.

“As demonstrated on the June 2017 TOP500 supercomputer list, InfiniBand is the high-performance interconnect of choice for HPC and Deep Learning platforms,” said Bill Lee, IBTA Marketing Working Group Co-Chair. “The key capabilities of RDMA, software-defined architecture, and the smart accelerations that InfiniBand providers have brought to their offerings have enabled world-leading performance and scalability for InfiniBand-connected supercomputers.”
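
The RDMA capability Lee mentions is exposed to applications through the verbs API (libibverbs). As a rough illustration of what kernel-bypass means in practice, the sketch below registers a buffer with the RDMA NIC so that remote peers could later read or write it directly; queue-pair setup and the out-of-band exchange of keys and addresses are omitted for brevity:

```c
/* rdma_reg.c - register a buffer for RDMA access via libibverbs.
 * Build: gcc rdma_reg.c -o rdma_reg -libverbs */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void) {
    int num_devices = 0;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);
    if (!devs || num_devices == 0) {
        fprintf(stderr, "no RDMA-capable devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);   /* protection domain */

    /* Register 4 KiB so the NIC can DMA into/out of it directly,
     * bypassing the kernel on the data path. */
    size_t len = 4096;
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) { perror("ibv_reg_mr"); return 1; }

    /* A remote peer needs this rkey (plus the buffer address) to issue
     * one-sided RDMA reads/writes against the buffer. */
    printf("registered %zu bytes, lkey=0x%x rkey=0x%x\n",
           len, mr->lkey, mr->rkey);

    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```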

Video: New Mellanox SHIELD Technology Enables Self-Healing Networks

SHIELD is an interconnect technology that accelerates data center fault recovery by a factor of 5,000 by giving the interconnect autonomous self-healing capabilities. “The CPU-centric data center architecture has come to an end and new data centers are now built on a data-centric architecture. These systems require an intelligent interconnect that can deliver In-Network Computing and self-healing capabilities to ensure the highest performance, scalability and resiliency.”

Video: InfiniBand Accelerates Majority of New Systems on TOP500 List

In this video from ISC 2017, Gilad Shainer of Mellanox discusses the company’s newest announcements for HPC: the new SHIELD technology that brings self-healing to networks, InfiniBand’s continued growth on the TOP500, and how RDMA is enabling machine learning at scale.

Kyushu University Orders Fujitsu Supercomputer

Today Fujitsu announced an order from the Research Institute for Information Technology at Kyushu University for a new supercomputer system slated for deployment in October 2017. “This system will consist of over 2,000 servers, including the Fujitsu Server PRIMERGY CX400, the next-generation model of Fujitsu’s x86 server. It is expected to offer top-class performance in Japan, providing a theoretical peak performance of about 10 petaflops. This will also be Japan’s first supercomputer system featuring a large-scale private cloud environment constructed on a front-end subsystem, linked with the computational servers of a back-end subsystem through a high-speed file system.”
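
Taking the announcement's round numbers at face value, the average per-server contribution to peak performance is easy to estimate (rough arithmetic, not a Fujitsu specification):

\[
\frac{10~\text{PFLOPS}}{2{,}000~\text{servers}} \approx 5~\text{TFLOPS per server}
\]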