Michael Wolfe, a compiler engineer at The Portland Group, writing at HPCwire this week about his lack of enthusiasm for efforts to make parallel programming “easy”:
Tim Mattson (Intel) points out that in “our quest to find that perfect language to make parallel programming easy,” we have come up with an alarming array of parallel programming choices: MPI, OpenMP, Ct, HPF, TBB, Erlang, Shmem, Portals, ZPL, BSP, CHARM++, Cilk, Co-array Fortran, PVM, Pthreads, Windows threads, Tstreams, GA, Java, UPC, Titanium, Parlog, NESL, Split-C, and on and on.
…Every time I see someone claiming they’ve come up with a method to make parallel programming easy, I can’t take them seriously…. All this is folly. I agree with Andrew Tanenbaum, quoted at the June 2008 Usenix conference: “Sequential programming is really hard, and parallel programming is a step beyond that.”
He does, however, have some constructive suggestions for how we can make real progress.
The current “parallelism crisis” can only be resolved by three things. First, we need to develop and, more importantly, teach a range of parallel algorithms.
…Second, we need to expand algorithm analysis to include different parallelism styles. It’s not enough to focus on just the BSP or SIMD or any other model; we must understand several models and how they map onto the target systems.
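To make Wolfe’s point concrete, here is a minimal sketch (my own illustration, not from the article) of the same reduction expressed in two different parallelism styles. The algorithm is identical — summing a list — but it maps onto a fork/join model and a BSP model quite differently:

```python
def tree_reduce(xs):
    """Divide-and-conquer sum, as a fork/join-style algorithm.
    Run in parallel, the two halves are independent tasks,
    giving O(log n) depth."""
    if len(xs) == 1:
        return xs[0]
    mid = len(xs) // 2
    return tree_reduce(xs[:mid]) + tree_reduce(xs[mid:])

def bsp_reduce(xs, p=4):
    """BSP-style sum with p hypothetical processors: each reduces
    a local block (superstep 1), then the partial sums are
    combined (superstep 2)."""
    block = (len(xs) + p - 1) // p
    partials = [sum(xs[i * block:(i + 1) * block]) for i in range(p)]
    return sum(partials)
```

Both return the same answer, but their costs differ on real machines: the tree version stresses task-spawning overhead, while the BSP version’s performance hinges on block balance and the cost of the combining step — exactly the kind of per-model analysis Wolfe is asking for.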
…Finally, we need to learn how to analyze and tune actual parallel programs.
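One classic tool for that kind of analysis — again my own illustration, not something from the article — is Amdahl’s law, which bounds the speedup achievable when only a fraction f of a program is parallelizable across p processors:

```python
def amdahl_speedup(f, p):
    """Best-case speedup under Amdahl's law: 1 / ((1 - f) + f / p),
    where f is the parallelizable fraction and p the processor count."""
    return 1.0 / ((1.0 - f) + f / p)

# Even with 95% of the work parallel, 1024 processors
# yield less than a 20x speedup, because the serial 5% dominates.
```

Numbers like these explain why tuning a real parallel program so often means attacking its serial fraction and communication costs rather than adding processors.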