Do we need new languages for parallel processing?


That’s the question that GCN author Joab Jackson pokes at in an article from last week at the Government Computer News site. The piece is relatively balanced, but aimed at the mass development community rather than at HPC specifically, where I think the answer may actually be different. From the article:

“The challenge is that we have not, in general, designed our applications to express parallelism. We haven’t designed programming languages to make that easy,” said James Reinders, who works in Intel’s software-products division and is the author of a book on parallel programming titled “Intel Threading Building Blocks.”

The article then runs through the two basic approaches, extending current common languages or starting from scratch, with a focus on the PGAS (Partitioned Global Address Space) languages from the DARPA HPCS effort, of which Chapel and X10 are examples. If you aren’t passingly familiar with these projects, this article can get you at least two minutes into your next encounter with a stranger at one of the SC09 vendor parties.

DARPA’s new languages use an architecture called the Partitioned Global Address Space. PGAS does two things: It allows multiple processors to share a global pool of memory, but at the same time it allows the programmer to keep individual threads in specified logical partitions so they will be as close to the data as possible, thereby taking advantage of the speed boost brought about by “locality,” as this is called.

“This is an attempt to get the best out of both worlds,” explained Tarek El-Ghazawi at a PGAS Birds-of-a-Feather session held at the SC08 conference in Austin, Texas, last winter. El-Ghazawi is a George Washington University computer science professor who has helped guide the development of PGAS.

“The idea is to have multiple threads, concurrent threads…all seeing one big flat space. But in addition, the threads would be locality-aware, and you as a programmer would know what parts are local and what parts are not,” he said.
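You don’t need one of the new languages to see the shape of the idea. Here’s a toy C++ sketch — emphatically not Chapel, X10, or UPC — that mimics the PGAS model on a single shared-memory node: one flat array plays the global address space, and each thread owns a logical partition it mostly stays inside. NUM_PLACES and the owner-computes loop are stand-ins of my own, not part of any PGAS runtime.

```cpp
// Toy shared-memory sketch of the PGAS idea: one flat global array,
// logically partitioned so each thread "owns" and mostly touches its
// own slice. NUM_PLACES is a hypothetical stand-in for PGAS "places".
#include <cstdio>
#include <thread>
#include <vector>

constexpr int NUM_PLACES = 4;   // pretend partitions/places
constexpr int N = 1'000'000;    // size of the "global" array

int main() {
    std::vector<double> global(N, 1.0);        // the flat address space
    std::vector<double> partial(NUM_PLACES, 0.0);
    std::vector<std::thread> threads;

    for (int p = 0; p < NUM_PLACES; ++p) {
        threads.emplace_back([&, p] {
            // Locality-aware part: this thread sums only its own
            // partition. It *could* read any element (global address
            // space), but staying local is where the speed comes from.
            int lo = p * (N / NUM_PLACES);
            int hi = (p == NUM_PLACES - 1) ? N : lo + N / NUM_PLACES;
            for (int i = lo; i < hi; ++i)
                partial[p] += global[i];       // each thread writes its own slot
        });
    }
    for (auto& t : threads) t.join();

    double sum = 0.0;
    for (double s : partial) sum += s;
    std::printf("sum = %f\n", sum);            // expect N * 1.0
}
```

In a real PGAS language the partitions would live on separate nodes and the compiler and runtime would turn non-local references into communication; in this single-process sketch a “remote” access is just a cache miss, so it captures the semantics but not the performance story.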

The article’s conclusion is, I think, probably (though unfortunately) valid for most programming, but I’m not so sure it holds for us:

“It is an interesting thought exercise to ask, if we were to start from scratch to build the perfect parallel programming language, what would we do? X10 and Chapel are very interesting projects and are very exciting, but I don’t see them catching on in any big way,” he said. Why? They are too radically different from the programming languages most coders are used to. They would be too difficult to learn.

…”I’m skeptical of people who say we have to throw everything out about computing and start from scratch. We clearly don’t have to do that – it’s very expensive to do,” Goetz said. “I think there is an incremental path to get there, but I do think we need to change the way we think.”

Anyway, it’s worth a read.

Comments

  1. I think the right approach for HPC is incremental if it’s going to play nicely with all the applications we’d like to have on top of it. Maybe for dedicated Top500-style apps a new language is justified that will get you another 10% on the FLOPS, and perhaps that’s how a new language will mature and make its way down to us mere mortals.

    It also depends on your definition of HPC. Is it the Linpack benchmark, or the ability to scale up in size or in speed for whatever problem you’re trying to solve? In the second, more general case, new run-time libraries like Intel TBB combined with existing MPI on C++ will deliver more bang for less buck than a whole new language, platform, and learning curve — roughly the style sketched below.
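For concreteness, here’s a minimal sketch of the incremental, library-based style the comment describes: ordinary C++ with Intel TBB parallelizing a loop, no new language required. The vector-add itself is just an invented example; in a real hybrid code each MPI rank would run a loop like this over its local slice of the data.

```cpp
// Incremental parallelism: plain C++ plus the TBB library.
// Build with something like: g++ -std=c++14 vecadd.cpp -ltbb
#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>
#include <cstdio>
#include <vector>

int main() {
    const size_t n = 1'000'000;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n);

    // TBB splits the iteration range into chunks and schedules them
    // across its worker threads; the loop body is unchanged C++.
    tbb::parallel_for(tbb::blocked_range<size_t>(0, n),
        [&](const tbb::blocked_range<size_t>& r) {
            for (size_t i = r.begin(); i != r.end(); ++i)
                c[i] = a[i] + b[i];
        });

    std::printf("c[0] = %f\n", c[0]);   // expect 3.0
}
```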