Is Parallel Programming Hard?

Dr. Guy Blelloch of Carnegie Mellon University has written an article for the folks at CilkArts analyzing why parallel programming seems to be more difficult than sequential programming. He quickly notes that, "at the right level of abstraction, it is not and in fact can be as easy as sequential programming." The key point being: the right level of abstraction. He goes on to break the difficulty of parallel programming into three main problems.

.: Parallel Thinking: Most of us have been gracefully taught to program sequentially since the days we were barely tall enough to reach a keyboard [or a punch card]. This permeation of sequential thinking through all things related to code has become so ingrained in us that it's simply difficult to change.

.: Wheat from Chaff: Most existing parallel programming environments don't separate the details of machine- and architecture-specific parallelism from the parallel algorithms themselves. We end up spending an overwhelming amount of time wrestling with parallel constructs rather than with the core algorithms (there's a sketch of this split a little further down).

.: Determinism: …or the lack thereof. We currently lack support for deterministic parallelism; that is, programs "for which the results and partial results are the same independent of how the computation is scheduled or the relative timing of instruction among processors."
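
To make that concrete, here is a minimal sketch (my own illustration in C with OpenMP, not code from Blelloch's article). The first loop races on a shared counter, so its result can vary from run to run; the reduction clause in the second loop gives one schedule-independent answer.

    #include <stdio.h>

    int main(void)
    {
        long count = 0;

        /* Nondeterministic: every thread does an unsynchronized
         * read-modify-write on `count`, so updates get lost and the
         * printed value depends on scheduling and timing. */
        #pragma omp parallel for
        for (long i = 0; i < 1000000; i++)
            count++;                      /* data race */
        printf("racy count:    %ld\n", count);

        count = 0;

        /* Deterministic: the reduction gives each thread a private
         * copy and combines them at the end, so the answer is the
         * same on every run, however the iterations are scheduled. */
        #pragma omp parallel for reduction(+:count)
        for (long i = 0; i < 1000000; i++)
            count++;
        printf("reduced count: %ld\n", count);

        return 0;
    }

Compile with something like gcc -fopenmp and run it a few times; the racy number will likely bounce around while the reduced one stays put.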

Dr. Blelloch goes on to talk about each in great detail. I'll leave it up to you to read his entire article. Before you scurry off and click your mouse, I'd like to add a bit to his comments. First, I'd have to agree with the good doctor. Speaking from experience, one can easily spend an exorbitant amount of time debugging things such as race conditions and communication schemes, rather than scratching your head in front of a nicely worn whiteboard with parallel algorithms thrown about.
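
On the wheat-versus-chaff point, consider this rough sketch (again my own, in C; the function names are made up for illustration): a hand-rolled pthreads sum, where thread creation, index arithmetic, and manual merging of partial results bury the algorithm, next to the same reduction written with OpenMP, where the algorithm is stated once and the scheduling is left to the runtime.

    #include <pthread.h>

    #define N        1000000
    #define NTHREADS 4                 /* a machine detail we must pick */

    static double data[N];             /* assume this is filled elsewhere */

    struct range { int lo, hi; double partial; };

    /* The "chaff": thread plumbing wrapped around a one-line algorithm. */
    static void *sum_range(void *arg)
    {
        struct range *r = arg;
        r->partial = 0.0;
        for (int i = r->lo; i < r->hi; i++)
            r->partial += data[i];
        return NULL;
    }

    double sum_pthreads(void)
    {
        pthread_t tid[NTHREADS];
        struct range r[NTHREADS];
        int chunk = N / NTHREADS;
        double total = 0.0;

        for (int t = 0; t < NTHREADS; t++) {
            r[t].lo = t * chunk;
            r[t].hi = (t == NTHREADS - 1) ? N : (t + 1) * chunk;
            pthread_create(&tid[t], NULL, sum_range, &r[t]);
        }
        for (int t = 0; t < NTHREADS; t++) {
            pthread_join(tid[t], NULL);
            total += r[t].partial;     /* manual merge of partial sums */
        }
        return total;
    }

    /* The "wheat": the same reduction stated once, with decomposition
     * and scheduling left to the runtime. */
    double sum_openmp(void)
    {
        double total = 0.0;
        #pragma omp parallel for reduction(+:total)
        for (int i = 0; i < N; i++)
            total += data[i];
        return total;
    }

Both compute the same sum; the difference is how much of the code is about the machine rather than the algorithm.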

Given the context of the article, do we have a solution? My first thought would be no. I don't believe there is one construct that solves it all. OpenMP has its upsides, MPI is tried and true, UPC can be very cool, CAF is making some headway, and Cilk++ shows a lot of promise. At the end of the day, we should probably sit back, take a deep breath, and analyze what has worked and/or failed for us over the last thirty years of trying to make these hare-brained computers do such wacky things. Until then, you should read Blelloch's full article here.

Comments

  1. Certainly it is time to stand back and take a more global look at what has been done in parallelism for the past 30 years.

    At Carnegie Mellon we have been running a "PROBE" on Parallel Thinking. The idea is to try to identify the core ideas in parallelism (the "wheat") and how they fit together. By core we mean an idea that is likely to still be important 20 or even 50 years from now. The goal is to help guide curriculum development, but hopefully such a study can be more generally useful.