The good thing about standards is if you don’t like one you can always just pick another.
French programming tools company CAPS Entreprise and compiler maker PathScale made a joint announcement this week about their intention to push CAPS’ GPU programming approach as an open standard:
PathScale Inc., an industry leader in delivering high performance AMD64 and Intel64 compilers, today announced its new PathScale ENZO Compiler Suite will support NVIDIA GPUs using the HMPP directive-based programming model originally developed by CAPS. Today also marks the start of CAPS and PathScale jointly working on advancing the HMPP directives as a new open standard. The new ENZO Compiler Suite is available for testing by selected customers and will be generally available later this summer.
HMPP is a directive-based approach to hybrid CPU/GPU programming that CAPS hopes customers will adopt because it keeps code from being tied to a specific vendor accelerator programming library (like CUDA). But CAPS and PathScale aren’t exactly the first to this party. PGI has been pursuing its own approach to platform-agnostic hybrid CPU/GPU programming for some time (although it is not open), Intel has a whole slew of languages, libraries, and frameworks in development (some open), and of course OpenCL is already out there with a lot of community support.
These companies have a lot of community building to do if this is to be a successful effort. I would much rather have seen them push into something like OpenCL and try to make that better.
[UPDATE 06/25/2010] Michael Feldman did some digging with PathScale and has a pretty interesting article on this move that I’d recommend you read. Here’s an excerpt from that article that at least shows some of the thinking:
PathScale opted for HMPP directives, a set of directives invented by CAPS Enterprise for their C and Fortran GPU compilers. In the CAPS products though, the compiler just converts the HMPP C or HMPP Fortran to CUDA, which is subsequently converted into GPU assembly by NVIDIA’s CUDA back-end. PathScale, on the other hand, has attached their own back-end onto the HMPP front-end without losing any information between source-to-source translations.