9th Annual MVAPICH User Group (MUG) to Meet Aug. 23-25

The 9th Annual MVAPICH User Group (MUG) meeting will take place August 23-25, 2021 in Columbus, OH. The organizers said the MUG meeting is an open forum for users, system administrators, researchers, engineers and students to share their knowledge on using MVAPICH2 libraries (including MVAPICH2-X, MVAPICH2-GDR, MVAPICH2-X-Azure, and MVAPICH2-X-AWS), OSU Micro-Benchmarks (OMB), and OSU INAM on large-scale […]

Video: Managing HPC Software Complexity with Spack

Greg Becker from LLNL gave this talk at the MVAPICH User Group. “Spack is an open-source package manager for HPC. This presentation will give an overview of Spack, including recent developments and a number of items on the near-term roadmap. We will focus on Spack features relevant to the MVAPICH community; these include Spack’s virtual package abstraction, which is used for API-compatible libraries including MPI implementations, package-level compiler wrappers, and packages which modify other packages’ build environments.”
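To give a flavor of the virtual package abstraction the talk describes: in Spack, a package depends on the virtual `mpi` package, and any concrete MPI implementation (such as MVAPICH2) can satisfy it. A minimal sketch, with HDF5 as an illustrative example package:

```console
# List which installed or known packages can provide the virtual "mpi" package
$ spack providers mpi

# Build HDF5 against MVAPICH2 by pinning the MPI provider with the "^" spec syntax
$ spack install hdf5 ^mvapich2

# Build the same package against OpenMPI instead; no recipe changes are needed
$ spack install hdf5 ^openmpi
```

Because the HDF5 recipe only declares `depends_on("mpi")`, the choice of implementation is deferred to the user's spec at install time, which is what makes API-compatible libraries interchangeable.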

The Confluence of HPC and AI – Intel Customer Use Cases

Vikram Saletore from Intel gave this talk at the MVAPICH User Group. “Intel collaborates with customers and partners worldwide to build, accelerate, scale and deploy their AI applications on Intel-based HPC platforms. We share with you our insights on several customer AI use cases we have enabled, the orders-of-magnitude performance acceleration we have delivered via popular open-source software framework optimizations, and the best-known methods to advance the convergence of AI and HPC on Intel Xeon Scalable Processor-based servers. We will also demonstrate how large-memory systems help real-world AI applications run efficiently.”

A Performance Comparison of Different MPI Implementations on an ARM HPC System

Nicholas Brown from EPCC gave this talk at the MVAPICH User Group. “In this talk I will describe work we have done in exploring the performance properties of MVAPICH, OpenMPI and MPT on one of these systems, Fulhame, which is an HPE Apollo 70-based system with 64 nodes of Cavium ThunderX2 ARM processors and Mellanox InfiniBand interconnect. In order to take advantage of these systems most effectively, it is very important to understand the performance that different MPI implementations can provide and any further opportunities to optimize these.”

Video: InfiniBand In-Network Computing Technology and Roadmap

Gilad Shainer from Mellanox gave this talk at the MVAPICH User Group. “In-Network Computing transforms the data center interconnect into a ‘distributed CPU’ and ‘distributed memory,’ enabling it to overcome performance barriers and to deliver faster, more scalable data analysis. These technologies are in use at some of the largest recent supercomputers around the world, including top TOP500 platforms. The session will discuss InfiniBand In-Network Computing technology and performance results, as well as a view of the future roadmap.”

Video: Three Perspectives on Message Passing

Robert Harrison from Brookhaven gave this talk at the MVAPICH User Group. “MADNESS, TESSE/EPEXA, and MolSSI are three quite different large and long-lived projects that provide different perspectives and driving needs for the future of message passing. All three of these projects employ MPI and have a vested interest in computation at all scales, spanning the classroom to future exascale systems.”

Frontera: The Next Generation NSF HPC Resource, and Why HPC Still isn’t the Cloud

Dan Stanzione from TACC gave this talk at the MVAPICH User Group. “In this talk, I will describe the main components of the award: the Phase 1 system, “Frontera”, the plans for facility operations and scientific support for the next five years, and the plans to design a Phase 2 system in the mid-2020s to be the NSF Leadership system for the latter half of the decade, with capabilities 10x beyond Frontera. The talk will also discuss the key role MVAPICH and InfiniBand play in the project, and why the workload for HPC still can’t fit effectively on the cloud without advanced networking support.”

Agenda Posted: MVAPICH User Group (MUG) Meeting in Ohio

The MVAPICH User Group Meeting (MUG 2019) has published its Speaker Agenda. The event will take place from August 19-21 in Columbus, Ohio. “MUG aims to bring together MVAPICH2 users, researchers, developers, and system administrators to share their experience and knowledge and learn from each other. The event includes Keynote Talks, Invited Tutorials, Invited Talks, Contributed Presentations, an Open MIC session, hands-on sessions with MVAPICH developers, etc.”

Call for Presentations: MVAPICH User Group in August

The 7th annual MVAPICH User Group (MUG) meeting has issued its Call for Presentations. MUG will take place from August 19-21, 2019 in Columbus, Ohio. “MUG aims to bring together MVAPICH2 users, researchers, developers, and system administrators to share their experience and knowledge and learn from each other. The event includes keynote talks, invited tutorials, invited talks, contributed presentations, open MIC session, hands-on sessions with MVAPICH developers, etc.”

OSC Hosts MVAPICH Users Group this week

A broad array of HPC enthusiasts have gathered at the Ohio Supercomputer Center this week for the sixth meeting of the MVAPICH Users Group (MUG). The Network-Based Computing Research Group, led by DK Panda, a professor and university distinguished scholar of computer science at The Ohio State University, created and continues to enhance the popular MVAPICH HPC system software package.