Search Results for: mpi

Podcast: How Community Collaboration Drives Compiler Technology at the LLVM Project

In this Let’s Talk Exascale podcast, Hal Finkel of Argonne National Laboratory describes how community collaboration is driving compiler infrastructure at the LLVM project. “LLVM is important to a wide swath of technology professionals. Contributions shaping its development have come from individuals, academia, DOE and other government entities, and industry, including some of the most prominent tech companies in the world, both inside and outside of the traditional high-performance computing space.”

Azure HBv2 Virtual Machines eclipse 80,000 cores for MPI HPC

Today Microsoft announced general availability of Azure HBv2-series Virtual Machines. "HBv2 VMs deliver supercomputer-class performance, message passing interface (MPI) scalability, and cost efficiency for a variety of real-world high performance computing (HPC) workloads, such as CFD, explicit finite element analysis, seismic processing, reservoir modeling, rendering, and weather simulation. Azure HBv2 VMs are the first in the public cloud to feature 200 gigabit per second HDR InfiniBand from Mellanox."

A Performance Comparison of Different MPI Implementations on an ARM HPC System

Nicholas Brown from EPCC gave this talk at the MVAPICH User Group. “In this talk I will describe work we have done in exploring the performance properties of MVAPICH, OpenMPI and MPT on one of these systems, Fulhame, which is an HPE Apollo 70-based system with 64 nodes of Cavium ThunderX2 ARM processors and Mellanox InfiniBand interconnect. In order to take advantage of these systems most effectively, it is very important to understand the performance that different MPI implementations can provide and any further opportunities to optimize these.”
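The benchmarks from the talk are not reproduced here, but comparisons of this kind usually start from simple point-to-point microbenchmarks that build unchanged against MVAPICH, OpenMPI, or MPT. A minimal ping-pong latency sketch in C (illustrative only, not the suite used on Fulhame):

```c
/* ping_pong.c -- minimal two-rank latency sketch (illustrative only).
 * Build: mpicc ping_pong.c -o ping_pong    Run: mpirun -np 2 ./ping_pong */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int iters = 10000;
    char buf = 0;

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();

    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(&buf, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&buf, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&buf, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&buf, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    double t1 = MPI_Wtime();
    if (rank == 0)
        printf("half round-trip latency: %.2f us\n",
               (t1 - t0) / (2.0 * iters) * 1e6);

    MPI_Finalize();
    return 0;
}
```

Run under each MPI implementation in turn, the same binary pattern gives a first, crude view of the small-message latency differences a fuller study would then examine in depth.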

EuroMPI

The EuroMPI conference is the preeminent meeting for users, developers and researchers to interact and discuss new developments and applications of message-passing parallel computing, in particular in and related to the Message Passing Interface (MPI). This includes parallel programming interfaces, libraries and languages, architectures, networks, algorithms, tools, applications, and High Performance Computing with particular focus […]

XSEDE Campus Champions to Focus on Research Collaboration

XSEDE has selected five Campus Champions Fellows for the 2019-2020 academic year. These exceptional researchers will have the opportunity to work side by side with XSEDE project staff on real-world science and engineering projects. "The five Fellows selected for this year will work on projects spanning from hydrology gateways to undergraduate data science curriculum development under the overarching goal of increasing cyberinfrastructure expertise on campuses by including Campus Champions as partners in XSEDE's projects."

Multiple Endpoints in the Latest Intel MPI Library Boosts Hybrid Performance

The performance of distributed memory MPI applications on the latest highly parallel multi-core processors often turns out to be lower than expected, which is why hybrid applications, using OpenMP multithreading on each node and MPI across the nodes of a cluster, are becoming more common. This sponsored post from Intel, written by Richard Friedman, describes how to boost performance for hybrid applications with multiple endpoints in the Intel MPI Library.
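The multi-endpoint mechanism itself is specific to the Intel MPI Library and is not reproduced here; it builds on the standard hybrid pattern in which MPI is initialized with full thread support and each OpenMP thread issues its own MPI calls. A minimal sketch of that generic pattern (illustrative only, not Intel's sample code):

```c
/* hybrid.c -- minimal MPI + OpenMP sketch where each thread communicates
 * independently (the pattern multi-endpoint support accelerates).
 * Assumes every rank runs the same number of OpenMP threads.
 * Build: mpicc -fopenmp hybrid.c -o hybrid   Run: mpirun -np 2 ./hybrid */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;
    /* Request full thread support so any OpenMP thread may call MPI. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE) {
        fprintf(stderr, "MPI_THREAD_MULTIPLE not available\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    #pragma omp parallel
    {
        int tid = omp_get_thread_num();
        int dst = (rank + 1) % size;           /* ring neighbours */
        int src = (rank - 1 + size) % size;
        int sendbuf = rank * 100 + tid, recvbuf = -1;

        /* Each thread exchanges with the matching thread on its neighbours,
         * using the thread id as the tag to keep the streams separate. */
        MPI_Sendrecv(&sendbuf, 1, MPI_INT, dst, tid,
                     &recvbuf, 1, MPI_INT, src, tid,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        printf("rank %d thread %d received %d\n", rank, tid, recvbuf);
    }

    MPI_Finalize();
    return 0;
}
```

With a conventional single-endpoint library, these per-thread calls contend for shared communication state; as the post describes, the multi-endpoint feature is intended to let such per-thread traffic proceed with far less contention.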

RDMA, Scalable MPI-3 RMA, and Next-Generation Post-RDMA Interconnects

Torsten Hoefler from ETH Zurich gave this talk at the Swiss HPC Conference. “Network cards contain rather powerful processors optimized for data movement and limiting the functionality to remote direct memory access seems unnecessarily constraining. We develop sPIN, a portable programming model to offload simple packet processing functions to the network card.”
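sPIN itself is not sketched here, but the MPI-3 remote memory access (RMA) interface named in the talk title can be illustrated with a minimal one-sided put between two ranks (generic MPI-3 code, unrelated to the talk's implementation):

```c
/* rma_put.c -- minimal MPI-3 one-sided (RMA) sketch: rank 0 puts a value
 * into a window exposed by rank 1. Illustrative only.
 * Build: mpicc rma_put.c -o rma_put   Run: mpirun -np 2 ./rma_put */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int *win_buf;
    MPI_Win win;
    /* Every rank exposes one integer through an RMA window. */
    MPI_Win_allocate(sizeof(int), sizeof(int), MPI_INFO_NULL,
                     MPI_COMM_WORLD, &win_buf, &win);
    *win_buf = -1;

    MPI_Win_fence(0, win);              /* open the access epoch */
    if (rank == 0) {
        int value = 42;
        /* Write directly into rank 1's window; no receive on the target. */
        MPI_Put(&value, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
    }
    MPI_Win_fence(0, win);              /* close the epoch; data is visible */

    if (rank == 1)
        printf("rank 1 window now holds %d\n", *win_buf);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```

Such puts and gets map naturally onto RDMA hardware, which is exactly the class of operation the talk argues can be extended by offloading simple packet-processing functions to the NIC.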

Benchmarking MPI Applications in Singularity Containers on Traditional HPC and Cloud Infrastructures

Andrei Plamada from ETH Zurich gave this talk at the hpc-ch forum on Cloud and Containers. “Singularity is a container solution that promises to both integrate MPI applications seamlessly and run containers without privilege escalation. These benefits make Singularity a potentially good candidate for the scientific high-performance computing community. However, the performance overhead introduced by Singularity is unclear. In this work we will analyze the overhead and the user experience on both traditional HPC and cloud infrastructures.”
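The talk's benchmarks are not reproduced here; one common way to get a first data point on container overhead is to run the same timed MPI kernel natively and then inside the container (typically with mpirun launching something like `singularity exec image.sif ./app`, where the image name is hypothetical). A minimal allreduce timing loop in C (illustrative only):

```c
/* allreduce_timing.c -- times a repeated MPI_Allreduce; running the same
 * binary natively and via singularity exec gives one crude data point on
 * container overhead. Illustrative only, not the talk's benchmark.
 * Build: mpicc allreduce_timing.c -o allreduce_timing */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int iters = 1000, n = 1 << 16;      /* 64 Ki doubles per rank */
    static double sendbuf[1 << 16], recvbuf[1 << 16];
    for (int i = 0; i < n; i++) sendbuf[i] = rank + i;

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++)
        MPI_Allreduce(sendbuf, recvbuf, n, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("mean allreduce time: %.3f ms\n", (t1 - t0) / iters * 1e3);

    MPI_Finalize();
    return 0;
}
```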

Call for Papers: EuroMPI Conference in Zurich

The EuroMPI conference has issued its Call for Papers. The event takes place September 10-13 in Zurich, Switzerland. "The EuroMPI conference has, since 1994, been the preeminent meeting for users, developers and researchers to interact and discuss new developments and applications of message-passing parallel computing, in particular in and related to the Message Passing Interface (MPI). This includes parallel programming interfaces, libraries and languages, architectures, networks, algorithms, tools, applications, and High Performance Computing with particular focus on quality, portability, performance and scalability."

Supercomputing Aerodynamics in Paralympic Cycling

A project carried out at the National University of Ireland Galway, Eindhoven University of Technology (TU/e), and KU Leuven has been exploring the role of aerodynamic science in Paralympic cycling. "This work also opens the door for world-class Paralympic athletes to have the same expertise and equipment available to them as other professional athletes. At the world championships and Paralympics, where tenths of a second can decide medals, this work can unlock that vital time!"