Douglas Eadline has a piece at Linux Journal about frameworks that allow users to create HPC applications without MPI.
He outlines a spectrum of implementation options that starts with MPI, moves to languages like CUDA and ZPL that attempt to “protect” the user from parallel execution (although this protection can cause its own problems), and then continues:
Just above these implied parallel languages, there is a middle ground that I like to call “Sledgehammer HPC.” These methods are not so much finished applications as they are a framework for representing your program at an abstraction level that can be easily executed in parallel. This type of approach allows you to focus on the problem at hand and not think about the number of cores or nodes. Some of these methods tend to be a bit domain/problem specific and may not fit all problems, or it may be difficult to cast a problem in the framework, but when they do work, they allow you to throw the full weight of a cluster at your problem with minimal programming effort.
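The appeal of this style is easiest to see on a single node. As a rough sketch (not from Doug's article), here is the same idea in miniature using Python's standard multiprocessing module: you express the work as a plain function applied over a collection, and the framework decides how to spread it across cores. The `score` function is a hypothetical stand-in for real per-item work.

```python
from multiprocessing import Pool

def score(candidate):
    # Hypothetical stand-in for real per-item work, e.g. evaluating
    # one candidate solution. No parallelism appears in this code.
    return sum(c * c for c in candidate)

if __name__ == "__main__":
    candidates = [[i, i + 1, i + 2] for i in range(8)]
    # The framework (here, a process pool) handles the distribution;
    # the programmer only states *what* to compute, not *where*.
    with Pool() as pool:
        results = pool.map(score, candidates)
    print(results)
```

A cluster-scale framework plays the same trick, except the "pool" spans nodes rather than local processes.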
In the article, Doug looks at Genetic Algorithms, Cellular Automata, and Map-Reduce.