The multicore software crisis: I blame myself


Dan Reed has a very thoughtful post on his blog about the high-end computing community's failure to follow through on the parallelism research it started a decade ago, and the trouble that failure is creating for software developers now facing a multicore future:

We began to address the parallel software problem a decade ago, with research projects in automatic parallelization and data parallel languages, driven by high-end computing. These approaches offered options to express large-scale parallelism while hiding many of the low-level details of message passing. They were immature and incomplete, but promising.

However, we abandoned these research directions when they did not quickly yield commercial quality solutions. We forgot that it took over a decade to develop effective programming idioms and vectorizing compilers, a much simpler and restricted special case of parallel computing. Simply put, we might now be confidently exploiting data parallelism, even for irregular problems, on today’s consumer multicore designs if we had stayed the high-end research course.

This is, I think, a failure of the US research establishment to focus on long-term investments. The retreat started in the mid-90s and is, thankfully, beginning to reverse.
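To make the kind of data parallelism Reed describes a little more concrete, here is a minimal sketch of a loop-level data-parallel computation on a consumer multicore machine, using OpenMP. The array sizes and the operation itself are illustrative choices, not something taken from the post.

/* Illustrative sketch: data parallelism across cores via OpenMP.
 * Compile with, e.g.:  gcc -fopenmp saxpy.c -o saxpy
 */
#include <stdio.h>
#include <stdlib.h>

#define N 10000000L

int main(void) {
    float a = 2.0f;
    float *x = malloc(N * sizeof *x);
    float *y = malloc(N * sizeof *y);

    for (long i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    /* Each iteration is independent, so the runtime can divide the
     * loop across all available cores. */
    #pragma omp parallel for
    for (long i = 0; i < N; i++)
        y[i] = a * x[i] + y[i];

    printf("y[0] = %f\n", y[0]);
    free(x);
    free(y);
    return 0;
}

The regular, independent loop above is exactly the easy case; Reed's point is that the harder, irregular problems are where the abandoned research directions were supposed to take us.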