In brief, tomorrow’s cluster applications will look a lot like today’s PC applications. The market for cluster software is currently so small that most codes are written in-house. Eventually these programs are released publicly under an open source license, but they still retain their home-grown feel. Even commercial software in this space feels home grown. There have been expectations that the government would step in to provide resources for software communities to support scientists and engineers, but such a move is really only a temporary solution.
It seems that a more long-term answer will be to adapt existing codes for parallel or distributed execution. Indeed, MATLAB and Mathematica both offer such a capability through add-on packages. The “standing on the shoulders of giants” motif was explored here in an earlier discussion of commoditization in the HPC community.
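To make the adaptation idea concrete, here is a minimal sketch (not from the article, and using hypothetical function names) of how an existing serial code might be retrofitted for parallel execution using only Python’s standard library, in the same spirit as the add-on packages mentioned above:

```python
# Hypothetical example: the same simulation kernel, run serially
# (the original PC-style loop) and in parallel (the adapted version).
from concurrent.futures import ProcessPoolExecutor

def simulate(case):
    """Stand-in for an existing serial compute kernel (hypothetical workload)."""
    return sum(i * i for i in range(case))

def run_serial(cases):
    # Original style: one case after another on a single processor.
    return [simulate(c) for c in cases]

def run_parallel(cases, workers=4):
    # Adapted style: the unchanged kernel, with cases farmed out across cores.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(simulate, cases))

if __name__ == "__main__":
    cases = [10_000, 20_000, 30_000]
    # Both paths produce identical results; only the execution model changes.
    assert run_serial(cases) == run_parallel(cases)
```

The point of the sketch is that the compute kernel itself is untouched; only the driver loop changes, which is the kind of incremental adaptation that is far cheaper than a rewrite from scratch.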
Thus, there appears to be an opportunity for companies to help independent software vendors move their existing PC codes onto clusters. Indeed, this appears to be Microsoft’s strategy in offering a cluster version of Windows. Perhaps an enterprising Linux entrepreneur could offer similar services in the form of development tools and/or consulting.
In sum, it appears that applications (including domain-specific languages) will be modified to run on parallel or distributed systems, as this is much cheaper than developing software from scratch.