Eadline on Comparing MPI and OpenMP


Dr. Doug Eadline has written a great article comparing MPI and OpenMP for those new to the computational arts.  With the advent of multi-core processors, you face a myriad of decisions about how to natively express algorithmic parallelism in your application.  It's quite feasible to consider buying a workstation with more than twenty-four cores, and a system such as this might be ‘good enough’ for your specific application.  Does an architecture like this require MPI, or will OpenMP suffice for writing parallel code?

MPI is often talked about as though it were a computer language in its own right. In reality, MPI is an API (Application Programming Interface): a programming library that allows Fortran and C (and sometimes C++) programs to send messages to each other.
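As a quick sketch (mine, not code from Doug's article), a minimal MPI "hello world" in C shows the library flavor of the model: every process the launcher starts runs the same program and learns its own rank from the MPI calls.

    /* Minimal MPI example: each process prints its rank. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size;
        MPI_Init(&argc, &argv);                /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total process count */
        printf("Hello from rank %d of %d\n", rank, size);
        MPI_Finalize();                        /* shut down the runtime */
        return 0;
    }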

Another method to express parallelism is OpenMP. Unlike MPI, OpenMP is not a library, but an extension to the compiler. To use OpenMP, the programmer adds directives to the program ("pragmas" in C/C++, structured comments in Fortran) that are used as hints by the compiler. The resulting program uses operating system threads to run in parallel.
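By contrast, the OpenMP version of the same idea (again, just an illustrative sketch) needs only a pragma; the compiler generates all the thread management:

    /* Minimal OpenMP example: each thread prints its id. */
    #include <omp.h>
    #include <stdio.h>

    int main(void)
    {
        #pragma omp parallel  /* hint: run this block on multiple threads */
        {
            printf("Hello from thread %d of %d\n",
                   omp_get_thread_num(), omp_get_num_threads());
        }
        return 0;
    }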

Doug presents some great background on both parallel programming methodologies.  He also provides some basic examples of compiling and running both MPI and OpenMP.  Ultimately, you need to consider your platform and runtime expectations in order to realistically decide which methodology (or both) will work best for you.  Kudos to the good doctor for another great article.
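Doug's article has his own compile-and-run examples; as a rough guide, the typical workflow looks something like this (the file names here are hypothetical, and the exact wrapper names vary by MPI distribution):

    mpicc hello_mpi.c -o hello_mpi         # compile the MPI version
    mpirun -np 4 ./hello_mpi               # launch four MPI processes

    gcc -fopenmp hello_omp.c -o hello_omp  # compile with OpenMP enabled
    OMP_NUM_THREADS=4 ./hello_omp          # run with four threads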

For more info, read Doug’s full article here.