Search Results for: mpi

Using the Intel C++ Compiler’s Optimization Features to Improve MySQL Performance

IT operations and maintenance developers have found that simply compiling the MySQL source code with the Intel C++ Compiler and turning on its Interprocedural Optimization feature can improve database performance by 5 to 35 percent compared with other compilers. “While there may be many factors affecting MySQL performance, such as hardware and software configuration, having a thoroughly optimized MySQL package is a good place to start.”
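
As a rough illustration of what interprocedural optimization buys, consider a hot loop that calls a helper defined in another source file. The snippet below is a minimal sketch, not MySQL code; the function names are hypothetical, and icc -ipo is the Intel C++ Compiler option that enables cross-file interprocedural optimization.

/* Sketch only: imagine row_checksum() and scan_table() live in separate
 * source files of a large project such as MySQL. Building with
 *   icc -O3 -ipo file1.c file2.c
 * lets the compiler inline the helper into the hot loop across
 * translation units, removing per-row call overhead. */
#include <stddef.h>
#include <stdint.h>

uint32_t row_checksum(const uint8_t *row, size_t len)   /* hypothetical helper */
{
    uint32_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum += row[i];
    return sum;
}

uint64_t scan_table(const uint8_t *rows, size_t nrows, size_t row_len)
{
    uint64_t total = 0;
    for (size_t r = 0; r < nrows; r++)                   /* hot loop */
        total += row_checksum(rows + r * row_len, row_len);
    return total;
}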

Univa Grid Engine adds UberCloud Parallel MPI to Docker Containers

Today Univa announced the integration of UberCloud parallel application containers with Univa Grid Engine. In May of last year, Univa, a leading innovator of workload management products, announced the availability of Docker software container support with its Grid Engine 8.4.0 product, enabling enterprises to automatically dispatch and run jobs in Docker containers, from a user-specified Docker image, on a Univa Grid Engine cluster.

Intel Compilers 18.0 Tune for AVX-512 ISA Extensions

Intel Compilers 18.0 and Intel Parallel Studio XE 2018 tuning software fully support the AVX-512 instructions. By widening the vector registers and adding more of them, the new instructions and enhancements let the compiler squeeze more vector parallelism out of applications than before. Compiling an application with the -xCORE-AVX512 option generates an executable that takes advantage of these new high-performance instructions.
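
For a concrete sense of what the compiler vectorizes, the loop below is a minimal sketch (the saxpy function is illustrative, not from the article); with -xCORE-AVX512 the Intel compiler can map it onto 512-bit vector instructions.

/* Sketch: compile with, e.g., icc -O3 -xCORE-AVX512 saxpy.c
 * With 512-bit registers the compiler can process 16 single-precision
 * elements per loop iteration. */
#include <stddef.h>

void saxpy(float *restrict y, const float *restrict x, float a, size_t n)
{
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}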

EuroMPI 2018

EuroMPI is the preeminent meeting for users, developers and researchers to interact and discuss new developments and applications of message-passing parallel computing, in particular in, and related to, the Message Passing Interface (MPI). The annual meeting has a long, rich tradition and has been held in countries across Europe. The EuroMPI 2018 edition will continue to focus […]

Video: How MVAPICH & MPI Power Scientific Research

Adam Moody from LLNL presented this talk at the MVAPICH User Group. “High-performance computing is being applied to solve the world’s most daunting problems, including researching climate change, studying fusion physics, and curing cancer. MPI is a key component in this work, and as such, the MVAPICH team plays a critical role in these efforts. In this talk, I will discuss recent science that MVAPICH has enabled and describe future research that is planned. I will detail how the MVAPICH team has responded to address past problems and list the requirements that future work will demand.”

Challenges and Opportunities for HPC Interconnects and MPI

“This talk will reflect on prior analysis of the challenges facing high-performance interconnect technologies intended to support extreme-scale scientific computing systems, how some of these challenges have been addressed, and what new challenges lie ahead. Many of these challenges can be attributed to the complexity created by hardware diversity, which has a direct impact on interconnect technology, but new challenges are also arising indirectly as reactions to other aspects of high-performance computing, such as alternative parallel programming models and more complex system usage models.”

Internode Programming With MPI and Intel Xeon Phi Processor

“While MPI was originally developed for general-purpose CPUs and is widely used in the HPC space in this capacity, MPI applications can also be developed and then deployed with the Intel Xeon Phi Processor. With an understanding of the algorithms used in a specific application, tremendous performance can be achieved by using a combination of OpenMP and MPI.”
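
The hybrid approach mentioned here typically pairs MPI across nodes with OpenMP threads within each node. The sketch below is illustrative only (the workload and numbers are made up); it shows the usual pattern of MPI_Init_thread plus an OpenMP parallel region, built with an MPI compiler wrapper and the OpenMP flag, e.g. mpiicc -qopenmp.

/* Minimal hybrid MPI+OpenMP sketch: one MPI rank per node or processor,
 * OpenMP threads across the cores within it. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank, nranks;

    /* request threaded MPI: only the master thread makes MPI calls */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    double local = 0.0;
    /* threads share this rank's portion of the work */
    #pragma omp parallel for reduction(+:local)
    for (int i = 0; i < 1000000; i++)
        local += 1.0 / (i + 1.0 + rank);

    double global = 0.0;
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("result from %d ranks: %f\n", nranks, global);
    MPI_Finalize();
    return 0;
}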

Test Your Knowledge with the MPI Quiz

In this video, David Henty from EPCC conducts a video-based quiz on MPI. “The multiple-choice questions are partly designed for fun to test attendees’ knowledge, but are mainly aimed at promoting discussion about MPI and its usage in real applications. All that is assumed is a working knowledge of basic MPI functionality: send, receive, collectives, derived datatypes and non-blocking communications.”
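
For readers who want to check that baseline before watching, the short program below is a sketch of the assumed functionality (it is not one of the quiz questions): a non-blocking ring exchange followed by a collective.

/* Sketch of basic MPI usage: non-blocking point-to-point plus a collective. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int right = (rank + 1) % size;
    int left  = (rank - 1 + size) % size;
    int sendval = rank, recvval = -1;
    MPI_Request reqs[2];

    /* non-blocking send/receive around a ring */
    MPI_Irecv(&recvval, 1, MPI_INT, left, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(&sendval, 1, MPI_INT, right, 0, MPI_COMM_WORLD, &reqs[1]);
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

    /* collective: sum the received values across all ranks */
    int sum = 0;
    MPI_Allreduce(&recvval, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of ranks = %d\n", sum);
    MPI_Finalize();
    return 0;
}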

SPEC High-Performance Group Seeking Applications for New MPI Accelerator Benchmark

The Standard Performance Evaluation Corp.’s High-Performance Group (SPEC/HPG) is offering rewards of up to $5,000 and a free benchmark license for application code and datasets accepted under its new SPEC MPI Accelerator Benchmark Search Program. “Our goal is to develop a benchmark that contains real-world scientific applications and scales from a single node of a supercomputer to thousands of nodes,” says Robert Henschel, SPEC/HPG chair. “The broader the base of contributors, the better the chance that we can cover a wide range of scientific disciplines and parallel-programming paradigms.”

Intel MPI Library 2017 Focuses on Intel Multi-core/Many-Core Clusters

With the release of Intel Parallel Studio XE 2017, the focus is on making applications perform better on Intel architecture-based clusters. Intel MPI Library 2017, a fully integrated component of Intel Parallel Studio XE 2017, implements the high-performance MPI-3.1 specification on multiple fabrics. It enables programmers to quickly deliver the best parallel performance, even when they change or upgrade to new interconnects, without requiring changes to the software or operating environment.
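
As a small illustration of the MPI-3 level features such a library implements, the sketch below overlaps a non-blocking collective with local work. It is a generic MPI program, not Intel sample code; because fabric selection happens at launch time rather than in the source, the same program can run unchanged when the interconnect changes.

/* Sketch: a non-blocking collective (introduced in MPI-3 and carried
 * forward in MPI-3.1) overlapped with local computation. Build with an
 * MPI compiler wrapper such as mpiicc; no source changes are needed to
 * move between fabrics. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    double local = 1.0, global = 0.0;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* start the reduction without blocking ... */
    MPI_Iallreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD, &req);

    /* ... do unrelated local work while it progresses ... */
    double busy = 0.0;
    for (int i = 0; i < 100000; i++)
        busy += i * 1e-9;

    /* ... then wait for the collective to complete */
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    if (rank == 0)
        printf("sum = %f (local work: %f)\n", global, busy);
    MPI_Finalize();
    return 0;
}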