Parallel programming isn't ever going to be easy

Michael Wolfe, a compiler engineer at The Portland Group, writing at HPCwire this week about his lack of enthusiasm for efforts to make parallel programming “easy”:

Tim Mattson (Intel) points out that in “our quest to find that perfect language to make parallel programming easy,” we have come up with an alarming array of parallel programming choices: MPI, OpenMP, Ct, HPF, TBB, Erlang, Shmem, Portals, ZPL, BSP, CHARM++, Cilk, Co-array Fortran, PVM, Pthreads, Windows threads, Tstreams, GA, Java, UPC, Titanium, Parlog, NESL, Split-C, and on and on.
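
To see how far apart these models sit, here is the same trivial reduction (summing an array) sketched in two entries from that list, OpenMP and Pthreads. A minimal sketch, not production code: the array size, thread count, and compile line (`gcc -fopenmp -pthread sum.c`) are my own choices for illustration.

```c
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

#define N 1000000
#define NTHREADS 4

/* OpenMP: one directive on an ordinary sequential loop. */
double sum_openmp(const double *a, int n) {
    double sum = 0.0;
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; i++)
        sum += a[i];
    return sum;
}

/* Pthreads: explicit thread creation, work partitioning, and joining. */
struct chunk { const double *a; int lo, hi; double partial; };

static void *worker(void *arg) {
    struct chunk *c = arg;
    c->partial = 0.0;
    for (int i = c->lo; i < c->hi; i++)
        c->partial += c->a[i];
    return NULL;
}

double sum_pthreads(const double *a, int n) {
    pthread_t tid[NTHREADS];
    struct chunk ch[NTHREADS];
    int step = n / NTHREADS;
    for (int t = 0; t < NTHREADS; t++) {
        ch[t].a = a;
        ch[t].lo = t * step;
        ch[t].hi = (t == NTHREADS - 1) ? n : (t + 1) * step;
        pthread_create(&tid[t], NULL, worker, &ch[t]);
    }
    double sum = 0.0;
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);   /* wait, then combine partial sums */
        sum += ch[t].partial;
    }
    return sum;
}

int main(void) {
    double *a = malloc(N * sizeof *a);
    for (int i = 0; i < N; i++) a[i] = 1.0;
    printf("OpenMP:   %f\n", sum_openmp(a, N));
    printf("Pthreads: %f\n", sum_pthreads(a, N));
    free(a);
    return 0;
}
```

Same computation, same machine, and yet one version is a single pragma while the other is forty lines of bookkeeping. Multiply that divergence across the whole list above and the “alarming array” complaint is easy to sympathize with.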

…Every time I see someone claiming they’ve come up with a method to make parallel programming easy, I can’t take them seriously…. All this is folly. I agree with Andrew Tanenbaum, quoted at the June 2008 Usenix conference: “Sequential programming is really hard, and parallel programming is a step beyond that.”

He does, however, have some constructive suggestions for how we can make real progress.

The current “parallelism crisis” can only be resolved by three things. First, we need to develop and, more importantly, teach a range of parallel algorithms.

…Second, we need to expand algorithm analysis to include different parallelism styles. It’s not enough to focus on just the BSP or SIMD or any other model; we must understand several models and how they map onto the target systems.
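
To make that concrete: in the BSP model, one standard formulation (Valiant’s) of the cost of an algorithm run as S supersteps is

```latex
T \;=\; \sum_{s=1}^{S} \Bigl( \max_i \, w_{s,i} \;+\; g \, h_s \;+\; l \Bigr)
```

where w_{s,i} is the local work of processor i in superstep s, h_s the largest number of words any one processor sends or receives, g the per-word communication cost, and l the barrier latency. An algorithm that looks optimal under SIMD-style analysis (minimize divergent branches) can be a poor BSP algorithm if it inflates h_s or S, which is exactly why no single model suffices.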

…Finally, we need to learn how to analyze and tune actual parallel programs.
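
That last point deserves an example. Here is a minimal sketch of what analyzing an actual parallel program looks like, assuming an OpenMP toolchain (`gcc -fopenmp`); the kernel, problem size, and thread counts are placeholder choices. The idea is simply to measure the same loop at several thread counts and compare against the linear ideal.

```c
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define N 10000000L   /* placeholder problem size */

int main(void) {
    double *a = malloc(N * sizeof *a);
    for (long i = 0; i < N; i++) a[i] = (double)i;

    for (int nthreads = 1; nthreads <= 8; nthreads *= 2) {
        omp_set_num_threads(nthreads);
        double t0 = omp_get_wtime();
        double sum = 0.0;
        #pragma omp parallel for reduction(+:sum)
        for (long i = 0; i < N; i++)
            sum += a[i] * a[i];          /* placeholder kernel */
        double elapsed = omp_get_wtime() - t0;
        /* Ideal speedup is linear in nthreads; the measured gap
           tells you where tuning effort should go. */
        printf("%d thread(s): %.4f s (sum=%g)\n", nthreads, elapsed, sum);
    }
    free(a);
    return 0;
}
```

On typical hardware the step from one thread to two is close to 2x while the step from four to eight often is not, because a loop like this is memory-bound. That measurement, not the abstract model, is what tells you whether more threads will help.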

Comments

  1. The parallelism crisis looks to me a lot like the evolution of object-oriented architecture and design in the 90’s and early 00’s. A really useful new toolbox arrived and we didn’t know how to use it. So we wrote the same functional programs with really deep inheritance and took a few steps backwards. Then a few people who really got software architecture, O-O and compilers started to lead by example, and we built the necessary critical mass of O-O skills.

    The same will happen with parallelism. We’re at the 1992 C++ equivalent of “Oooh, look, virtual functions and inheritance, that’s cool” stage of parallelism. We’re not going to crack parallelism wide open until we have that critical mass of brains on it.

  2. That list doesn’t include Excel. It’s hard to see the forest.

  3. John Leidel says

I would have to agree with Damien. At the end of the day, we really need to converge as an industry and first decide what the real problems are. I believe there are plenty of individuals who understand how to construct parallel algorithms but lack the tools to do so. Why are we constantly re-writing ScaLAPACK routines for new applications?

    There is certainly a light at the end of the tunnel. I see glimmers of great things to come in several infant language [extensions].

  4. LabVIEW has been parallel since its inception in 1986, multi-threaded since 1999, and multi-core now. See: http://www.ni.com/multicore/

  5. I’m not a programmer or much of a mathematician, but I love reading about this stuff anyway.