Overview of the MVAPICH Project and Future Roadmap

“This talk will provide an overview of the MVAPICH project (past, present and future). Future roadmap and features for upcoming releases of the MVAPICH2 software family (including MVAPICH2-X, MVAPICH2-GDR, MVAPICH2-Virt, MVAPICH2-EA and MVAPICH2-MIC) will be presented. Current status and future plans for OSU INAM, OEMT and OMB will also be presented.”

Challenges and Opportunities for HPC Interconnects and MPI

“This talk will reflect on prior analysis of the challenges facing high-performance interconnect technologies intended to support extreme-scale scientific computing systems, how some of these challenges have been addressed, and what new challenges lie ahead. Many of these challenges can be attributed to the complexity created by hardware diversity, which has a direct impact on interconnect technology, but new challenges are also arising indirectly as reactions to other aspects of high-performance computing, such as alternative parallel programming models and more complex system usage models.”

Internode Programming With MPI and Intel Xeon Phi Processor

“While MPI was originally developed for general-purpose CPUs and is widely used in the HPC space in this capacity, MPI applications can also be developed and then deployed on the Intel Xeon Phi processor. With an understanding of the algorithms used by a specific application, tremendous performance can be achieved by combining OpenMP and MPI.”
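
As a rough illustration of that hybrid approach (a sketch of our own, not material from the talk), the following C program combines MPI ranks with OpenMP threads: each rank computes a partial sum in a thread-parallel loop, and MPI_Reduce combines the per-rank results. The problem size and build command are illustrative assumptions.

    /* Hybrid MPI+OpenMP sketch: thread-level parallelism inside each rank,
     * message passing between ranks.
     * Example build: mpicc -fopenmp hybrid.c -o hybrid
     * Example run:   mpirun -n 4 ./hybrid
     */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank, size;
        /* Request a threading level that permits OpenMP regions. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const long N = 1000000;   /* elements per rank (arbitrary choice) */
        double local = 0.0;

        /* OpenMP threads share the work within this rank. */
        #pragma omp parallel for reduction(+:local)
        for (long i = 0; i < N; i++)
            local += 1.0 / (double)(rank * N + i + 1);

        double total = 0.0;
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("ranks=%d, threads per rank=%d, total=%f\n",
                   size, omp_get_max_threads(), total);

        MPI_Finalize();
        return 0;
    }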

OSC Hosts Fifth MVAPICH Users Group

A broad array of system administrators, developers, researchers and students who share an interest in the MVAPICH open-source library for high-performance computing will gather this week for the fifth meeting of the MVAPICH Users Group (MUG). “Dr. Panda’s library is a cornerstone for HPC machines around the world, including OSC’s systems and many of the Top 500,” said Dave Hudak, Ph.D., interim executive director of OSC. “We’ve gained a lot of insight and expertise from partnering with DK and his research group throughout the years.”

Test Your Knowledge with the MPI Quiz

In this video, David Henty of EPCC conducts a quiz on MPI. “The multiple-choice questions are partly designed for fun to test attendees’ knowledge, but are mainly aimed at promoting discussion about MPI and its usage in real applications. All that is assumed is a working knowledge of basic MPI functionality: send, receive, collectives, derived datatypes and non-blocking communications.”
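
For readers who want to brush up before trying the quiz, here is a minimal sketch (ours, not one of the quiz questions) of the kind of non-blocking point-to-point communication the quiz assumes: rank 0 posts an MPI_Isend to rank 1, rank 1 posts a matching MPI_Irecv, and both complete the operation with MPI_Wait before touching the data.

    /* Non-blocking send/receive between two ranks (illustrative sketch). */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, value = 0;
        MPI_Request req;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            value = 42;
            MPI_Isend(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
        } else if (rank == 1) {
            MPI_Irecv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
        }

        if (rank < 2) {
            /* The buffer must not be reused (or read, for the receive)
             * until the request has completed. */
            MPI_Wait(&req, MPI_STATUS_IGNORE);
            printf("rank %d has value %d\n", rank, value);
        }

        MPI_Finalize();
        return 0;
    }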

Register Now for the SDSC Summer Institute on HPC

“This year’s workshop continues SDSC’s strategy of bringing high-performance computing to what is known as the ‘long tail’ of science, i.e. providing resources to a larger and more diverse number of modest-sized computational research projects that represent, in aggregate, a tremendous amount of scientific research and discovery. SDSC has developed and hosted Summer Institute workshops for well over a decade.”

Job of the Week: HPC System Administrator at Embry-Riddle Aeronautical University

Embry-Riddle Aeronautical University in Daytona Beach is seeking a High Performance Computing System Administrator in our Job of the Week. “The HPC Specialist is responsible for technical systems management, administration, and support for the high-performance computing (HPC) cluster environments. This includes all configuration, authentication, networking, storage, interconnect, and software usage & installation of HPC Clusters. The position is highly technical and directly impacts the daily operational functions of the above environments.”

GTC to Feature 90 Sessions on HPC and Supercomputing

Accelerated computing continues to gain momentum. This year, the GPU Technology Conference will feature 90 sessions on HPC and supercomputing. “Sessions will focus on how computational and data science are used to solve traditional HPC problems in healthcare, weather, astronomy, and other domains. GPU developers can also connect with innovators and researchers as they share their groundbreaking work using GPU computing.”

Intel MPI Library 2017 Focuses on Intel Multi-Core/Many-Core Clusters

With the release of Intel Parallel Studio XE 2017, the focus is on making applications perform better on Intel architecture-based clusters. Intel MPI Library 2017, a fully integrated component of Intel Parallel Studio XE 2017, implements the high-performance MPI-3.1 specification on multiple fabrics. It enables programmers to quickly deliver the best parallel performance, even when they change or upgrade to new interconnects, without requiring changes to the software or operating environment.
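
The portability claim is easiest to see in code: an MPI program contains no fabric-specific calls, so switching interconnects is a launch-time decision rather than a source change. The sketch below is illustrative; the ring-exchange pattern and the use of Intel MPI's I_MPI_FABRICS environment variable in the run commands are our assumptions, not an excerpt from Intel's material.

    /* Interconnect-agnostic MPI ring exchange (illustrative sketch).
     * Example build: mpicc ring.c -o ring
     * Example runs with Intel MPI (fabric chosen at launch, no rebuild):
     *   mpirun -n 4 ./ring
     *   I_MPI_FABRICS=shm:tcp mpirun -n 4 ./ring
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, token;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int next = (rank + 1) % size;
        int prev = (rank + size - 1) % size;

        /* Each rank sends its rank number to the next rank and receives
         * from the previous one, replacing the buffer in place. */
        token = rank;
        MPI_Sendrecv_replace(&token, 1, MPI_INT, next, 0, prev, 0,
                             MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        printf("rank %d received token %d from rank %d\n", rank, token, prev);

        MPI_Finalize();
        return 0;
    }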

Intel DAAL Accelerates Data Analytics and Machine Learning

Intel DAAL is a high-performance library specifically optimized for big data analysis on the latest Intel platforms, including Intel Xeon® and Intel Xeon Phi™. It provides the algorithmic building blocks for all stages of data analysis in offline, batch, streaming, and distributed processing environments. It was designed for efficient use with all the popular data platforms and APIs in use today, including MPI, Hadoop, Spark, R, MATLAB, Python, C++, and Java.