High-Performance and Scalable Designs of Programming Models for Exascale Systems

“This talk will focus on challenges in designing programming models and runtime environments for Exascale systems with millions of processors and accelerators. We will focus on MPI+X (PGAS – OpenSHMEM/UPC/CAF/UPC++, OpenMP, and CUDA) programming models, taking into account support for multi-core systems (KNL and OpenPOWER), high-performance networks, GPGPUs (including GPUDirect RDMA), and energy awareness.”
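
As a rough illustration of the MPI+X pattern the abstract describes (a sketch under stated assumptions, not code from the talk), the following C program pairs MPI ranks across nodes with OpenMP threads within a node. It assumes an MPI library such as MVAPICH2 built with MPI_THREAD_FUNNELED support:

```c
/* Minimal MPI+OpenMP hybrid sketch: MPI ranks across nodes, OpenMP
 * threads within a node. Illustrative only.
 * Build: mpicc -fopenmp hybrid.c -o hybrid
 * Run:   mpirun -np 4 ./hybrid */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int provided, rank, nranks;

    /* Request threaded MPI; only the main thread makes MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    #pragma omp parallel
    {
        /* Each rank fans out into OpenMP threads for on-node parallelism. */
        printf("rank %d of %d, thread %d of %d\n",
               rank, nranks, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}
```

Real MPI+X applications divide work along the same lines: ranks communicate between nodes while threads (or CUDA kernels, or PGAS operations) provide the parallelism within them.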

Building Efficient HPC Clouds with MVAPICH2 and RDMA-Hadoop over SR-IOV IB Clusters

Xiaoyi Lu from Ohio State University presented this talk at the OpenFabrics Workshop. “Single Root I/O Virtualization (SR-IOV) technology has been steadily gaining momentum for high-performance interconnects such as InfiniBand. SR-IOV can deliver near-native performance but lacks locality-aware communication support. This talk presents an efficient approach to building HPC clouds based on MVAPICH2 and RDMA-Hadoop with SR-IOV.”
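
One common way to sanity-check a near-native-performance claim is to run the same point-to-point latency test on bare metal and inside SR-IOV virtual machines and compare the results. The sketch below is a simplified two-rank ping-pong in the spirit of the OSU Micro-Benchmarks' osu_latency (illustrative only, not code from the talk):

```c
/* Simplified ping-pong latency test between two MPI ranks.
 * Run:  mpirun -np 2 ./pingpong */
#include <mpi.h>
#include <stdio.h>

#define ITERS 1000
#define MSG_SIZE 8  /* bytes per message */

int main(int argc, char **argv) {
    char buf[MSG_SIZE] = {0};
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Barrier(MPI_COMM_WORLD);

    double t0 = MPI_Wtime();
    for (int i = 0; i < ITERS; i++) {
        if (rank == 0) {            /* send, then wait for the echo */
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {     /* echo every message back */
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)  /* each iteration is a round trip = two one-way hops */
        printf("avg one-way latency: %.2f us\n",
               (t1 - t0) * 1e6 / (2.0 * ITERS));

    MPI_Finalize();
    return 0;
}
```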

Overview of the MVAPICH Project and Future Roadmap

In this video from the 4th Annual MVAPICH User Group, DK Panda from Ohio State University presents: Overview of the MVAPICH Project and Future Roadmap. “This talk will provide an overview of the MVAPICH project (past, present and future). Future roadmap and features for upcoming releases of the MVAPICH2 software family (including MVAPICH2-X, MVAPICH2-GDR, MVAPICH2-Virt, MVAPICH2-EA and MVAPICH2-MIC) will be presented. Current status and future plans for OSU INAM, OEMT and OMB will also be presented.”
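
Of the variants listed, MVAPICH2-GDR adds CUDA-aware communication: device pointers can be passed directly to MPI calls, with GPUDirect RDMA used where the hardware supports it. A minimal sketch of that usage, assuming two ranks with a GPU each (illustrative, not from the talk):

```c
/* CUDA-aware MPI sketch: a GPU buffer goes straight into MPI_Send/Recv,
 * avoiding an explicit host staging copy. Requires a CUDA-aware MPI
 * build such as MVAPICH2-GDR.  Run:  mpirun -np 2 ./gdr */
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>

int main(int argc, char **argv) {
    const int n = 1024;
    float *d_buf;
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    cudaMalloc((void **)&d_buf, n * sizeof(float));

    if (rank == 0) {
        /* Device pointer passed directly; the library moves the data,
         * via GPUDirect RDMA where available. */
        MPI_Send(d_buf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(d_buf, n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d floats into GPU memory\n", n);
    }

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}
```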

Call for Participation & Student Travel Support: MVAPICH User Group

The MVAPICH User Group (MUG) meeting has issued its Call for Participation and Student Travel Support. The event takes place August 15-17 in Columbus, Ohio. “Student travel grant support is available for all students (Ph.D./M.S./Undergrad) from U.S. academic institutions to attend MUG ’16 through funding from the U.S. National Science Foundation (NSF).”

Call for Presentations: MVAPICH User Group (MUG) Meeting

The MVAPICH User Group (MUG) meeting has issued its Call for Presentations. The event takes place August 15-17 in Columbus, Ohio.

MVAPICH User Group Returns to Ohio Aug. 19-21

The MVAPICH team has posted the agenda for the 3rd annual MVAPICH User Group (MUG) meeting. Sponsored by Mellanox and the Ohio Supercomputer Center, MUG will take place August 19-21 in Columbus, Ohio, USA.

Overview of the MVAPICH Project: Status and Roadmap

“Dr. Panda and his research group members have been doing extensive research on modern networking technologies including InfiniBand, High-Speed Ethernet and RDMA over Converged Enhanced Ethernet (RoCE). The MVAPICH2 (High Performance MPI over InfiniBand, iWARP and RoCE) and MVAPICH2-X software libraries, developed by his research group, are currently being used by more than 2,150 organizations worldwide (in 72 countries).”

MVAPICH: Still Saving the World – Now Even Faster

“MPI is in the national interest. The U.S. government tasks Lawrence Livermore National Laboratory with solving the nation’s and the world’s most difficult problems, ranging from global security, disaster response and planning, and drug discovery to energy production and climate change. To meet this challenge, LLNL scientists run large-scale computer simulations on Linux clusters with InfiniBand networks, and MVAPICH serves a critical role in this effort. In this talk, I will highlight some of the recent work that MVAPICH has enabled.”

MVAPICH at Petascale: Experiences in Production on Stampede

Dan Stanzione from TACC presented this keynote at the recent MVAPICH User Group. “The Stampede system began production operations in January 2013. The system was one of the largest deployments of MVAPICH ever, with a 6,400-node FDR InfiniBand fabric connecting more than 2 PF of Intel Xeon processors. The system was also the first large-scale installation of Intel’s many-core Xeon Phi coprocessors, which also used MVAPICH for communications. This talk will discuss the experiences from the first 1.5 years of production with MVAPICH and Stampede.”