PSSC Labs Launches Eco Blades for HPC

The Eco Blade is a server platform engineered specifically for high-performance, high-density computing environments, simultaneously increasing compute density while decreasing power use. Eco Blade offers two complete, independent servers within 1U of rack space. Each independent server supports up to 64 Intel Xeon processor cores and 1.0 TB of enterprise memory, for a total of up to 128 cores and 2 TB of memory per 1U.

A Recap of the 2017 OpenFabrics Workshop

The 13th Annual OpenFabrics Alliance (OFA) Workshop wrapped at the end of March with a look toward the future. The annual gathering, held this year in Austin, Texas, was devoted to advancing cutting-edge networking technology through the ongoing collaborative efforts of OpenFabrics Software (OFS) producers and users. With a record 130+ attendees, the 2017 Workshop expanded on the OFA’s commitment to being an open organization by hosting an engaging Town Hall discussion and an At-Large Board election, filling two newly added director seats for current members.

InfiniBand Roadmap Foretells a World Where Server Connectivity Is at 1000 Gb/sec

The InfiniBand Trade Association (IBTA) has updated its InfiniBand Roadmap. With HDR 200 Gb/sec technologies shipping this year, the roadmap looks out to an XDR world where server connectivity reaches 1000 Gb/sec. “The IBTA’s InfiniBand roadmap is continuously developed as a collaborative effort from the various IBTA working groups. Members of the IBTA working groups include leading enterprise IT vendors who are actively contributing to the advancement of InfiniBand. The roadmap details 1x, 4x, and 12x port widths with bandwidths reaching 600 Gb/s data rate HDR in 2017. The roadmap is intended to keep the rate of InfiniBand performance increases in line with systems-level performance gains.”
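
The lane arithmetic behind those roadmap figures is straightforward: port bandwidth is the per-lane signaling rate multiplied by the port width. Here is a minimal sketch in C, assuming the published per-lane rates of 25 Gb/s for EDR and 50 Gb/s for HDR:

    #include <stdio.h>

    /* Per-lane signaling rates (Gb/s) for two recent InfiniBand
     * generations; port bandwidth is simply the per-lane rate
     * times the port width (1x, 4x, or 12x). */
    int main(void) {
        const char  *gen[]       = { "EDR", "HDR" };
        const double lane_rate[] = { 25.0, 50.0 };
        const int    widths[]    = { 1, 4, 12 };

        for (int g = 0; g < 2; g++)
            for (int w = 0; w < 3; w++)
                printf("%s %2dx: %4.0f Gb/s\n",
                       gen[g], widths[w], lane_rate[g] * widths[w]);
        return 0;
    }

Running it reproduces the figures quoted above: 100 Gb/s for 4x EDR, 200 Gb/s for 4x HDR, and 600 Gb/s for 12x HDR.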

Rock Stars of HPC: DK Panda

As our newest Rock Star of HPC, DK Panda sat down with us to discuss his passion for teaching High Performance Computing. “During the last several years, HPC systems have been going through rapid changes to incorporate accelerators. The main software challenges for such systems have been to provide efficient support for programming models with high performance and high productivity. For NVIDIA-GPU based systems, my team introduced the novel ‘CUDA-aware MPI’ concept seven years back. This paradigm gives application developers complete freedom from using explicit CUDA calls to perform data movement.”
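
To make the concept concrete, here is a minimal sketch of the idea, assuming a CUDA-aware MPI build such as MVAPICH2-GDR; the two-rank exchange and the buffer size are hypothetical. The point is that a GPU device pointer is handed straight to MPI, and the library performs the data movement:

    /* Sketch: sending directly from GPU memory with a CUDA-aware MPI
     * library. Requires a CUDA-aware build (e.g., MVAPICH2-GDR). */
    #include <mpi.h>
    #include <cuda_runtime.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int n = 1 << 20;
        double *d_buf;                              /* device memory */
        cudaMalloc((void **)&d_buf, n * sizeof(double));

        if (rank == 0)
            MPI_Send(d_buf, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(d_buf, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        /* Without CUDA awareness the application would have to stage
         * the transfer itself: cudaMemcpy device-to-host, MPI_Send,
         * then cudaMemcpy host-to-device on the receiving side. */

        cudaFree(d_buf);
        MPI_Finalize();
        return 0;
    }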

Jülich to Build 5 Petaflop Supercomputing Booster with Dell

Today Intel and the Jülich Supercomputing Centre, together with ParTec and Dell, announced plans to develop and deploy a next-generation modular supercomputing system. Leveraging the experience and results gained in the EU-funded DEEP and DEEP-ER projects, in which three of the partners have been strongly engaged, the group will develop the mechanisms required to augment JSC’s JURECA cluster with a highly scalable component named “Booster,” based on Intel’s Scalable Systems Framework (Intel SSF).

Mellanox InfiniBand Delivers up to 250 Percent Higher ROI for HPC

Today Mellanox announced that EDR 100 Gb/s InfiniBand solutions have demonstrated 30 to 250 percent higher HPC application performance versus Omni-Path. These performance tests were conducted at end-user installations and at the Mellanox benchmarking and research center, and covered a variety of HPC application segments including automotive, climate research, chemistry, bioscience, genomics and more.

High Performance Interconnects – Assessments, Rankings and Landscape

Dan Olds from OrionX.net presented this talk at the Switzerland HPC Conference. “Dan Olds will present recent research into the history of High Performance Interconnects (HPI), the current state of the HPI market, where HPIs are going in the future, and how customers should evaluate HPI options today. This will be a highly informative and interactive session.”

High-Performance and Scalable Designs of Programming Models for Exascale Systems

“This talk will focus on challenges in designing programming models and runtime environments for Exascale systems with millions of processors and accelerators to support various programming models. We will focus on MPI+X (PGAS – OpenSHMEM/UPC/CAF/UPC++, OpenMP, and CUDA) programming models by taking into account support for multi-core systems (KNL and OpenPower), high-performance networks, GPGPUs (including GPUDirect RDMA), and energy-awareness.”
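
As a concrete illustration of the MPI+X pattern the talk describes, here is a minimal hybrid MPI+OpenMP sketch; the FUNNELED thread level and the per-thread print are illustrative choices, not anything prescribed by the talk:

    /* MPI+X sketch with X = OpenMP: MPI ranks across nodes, threads
     * within each rank. MPI_THREAD_FUNNELED tells the MPI library
     * that only the main thread will make MPI calls. */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int provided, rank;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        #pragma omp parallel
        {
            /* Each thread works on its share of the rank's data. */
            printf("rank %d, thread %d of %d\n",
                   rank, omp_get_thread_num(), omp_get_num_threads());
        }
        /* The main thread would then handle inter-node communication. */

        MPI_Finalize();
        return 0;
    }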

Cowboy Supercomputer Powers Research at Oklahoma State

In this video, Dana Brunson from Oklahoma State describes the mission of the Oklahoma High Performance Computing Center. Formed in 2007, the HPCC facilitates computational and data-intensive research across a wide variety of disciplines by providing students, faculty and staff with cyberinfrastructure resources, cloud services, education and training, bioinformatics assistance, proposal support and collaboration.

Managing Node Configuration with 1000s of Nodes

Ira Weiny from Intel presented this talk at the OpenFabrics Workshop. “Individual node configuration when managing thousands or tens of thousands of nodes in a cluster can be a daunting challenge. Two key daemons that aid the management of individual nodes in a large fabric are now part of the rdma-core package: IBACM and rdma-ndd.”
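
For a sense of what one of these daemons automates, the following conceptual C sketch mimics the core job of rdma-ndd: keeping a device’s Node Description in sync with the hostname by writing the node_desc sysfs attribute. The device name mlx5_0 is an assumption for illustration; the real daemon discovers devices itself, runs continuously, and requires root:

    /* Conceptual sketch only: a one-shot version of what rdma-ndd does
     * on a hostname change. Writing node_desc makes fabric tools show
     * a meaningful name for this node instead of a default string. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        char host[64] = {0};
        gethostname(host, sizeof(host) - 1);

        FILE *f = fopen("/sys/class/infiniband/mlx5_0/node_desc", "w");
        if (!f) { perror("node_desc"); return 1; }
        fprintf(f, "%s", host);
        fclose(f);
        return 0;
    }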