
John Shalf Talks Programming Languages

International Science and Grid This Week [online publication] recently posted an interview with John Shalf of NERSC. John is the Team Lead for the Advanced Technologies Group at the National Energy Research Scientific Computing Center [NERSC]. The interview focused on what has lately become quite a heated debate within HPC: programming languages [really, paradigms].

John begins by explaining, quite concisely, why native parallel programming paradigms [say that three times fast!] are important in today's HPC.

Clock frequencies of chips are no longer increasing, so all future improvements in computer speed will come from parallelism. This affects computing devices all the way down to the scale of cell phones. Many programmers still think in terms of converting serial programs to run in parallel, but that approach can be very limiting because the best parallel algorithms are completely different from their serial counterparts. Parallel languages can make it easier to write parallel algorithms and the resulting code will run more efficiently because the compiler will have more information to work with. [John Shalf]

John goes on to describe several current efforts to parallelize sequential languages, including Cilk, Intel's Ct, OpenMP, OpenCL, and NVIDIA's CUDA. He also points out the common deficiencies of these extensions to sequential language constructs.

I worry in general that serial languages do not provide the necessary semantic guarantees about locality of effect that is necessary for efficient parallelism. Ornamenting the language to insert the semantics of such guarantees (as we do with OpenMP scoping of arrays) is arduous, prone to error, and quite frankly not very intuitive. It seems that existing sequential languages underspecify the semantic guarantees that are required to ensure safe parallelization, and overspecify the solution to a problem, which limits opportunities for parallel scheduling. [John Shalf]

I don’t want to steal all of the original article’s thunder. It’s really a well-done interview. Check it out here.
