CAPS and PathScale to create a new standard for hybrid programming [UPDATED]

The good thing about standards is that if you don’t like one, you can always just pick another.

French programming tools company CAPS Entreprise and compiler maker PathScale made a joint announcement this week about their intention to push CAPS’ GPU programming approach as an open standard:

PathScale Inc., an industry leader in delivering high performance AMD64 and Intel64 compilers, today announced its new PathScale ENZO Compiler Suite will support NVIDIA GPUs using the HMPP directive-based programming model originally developed by CAPS. Today also marks the start of CAPS and PathScale jointly working on advancing the HMPP directives as a new open standard. The new ENZO Compiler Suite is available for testing by selected customers and will be generally available later this summer.

HMPP is a directive-based approach to hybrid CPU/GPU programming that CAPS hopes customers will adopt because it keeps code from being tied to a specific vendor accelerator programming library (like CUDA). But CAPS and PathScale aren’t exactly the first to this party. PGI has been pursuing its own approach to platform-agnostic hybrid CPU/GPU programming for some time (although it is not open), Intel has a whole slew of languages, libraries, and frameworks in development (some open), and of course OpenCL is already out there with a lot of community support.
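To give a sense of what the directive-based style looks like, here is a minimal sketch of offloading a loop with HMPP-style codelet/callsite pragmas. The function name, directive label, and clause spellings are illustrative assumptions rather than a definitive HMPP reference; the point is that the annotated code remains plain C that any compiler can still build and run on the CPU.

```c
/* Illustrative sketch of HMPP-style directive offload (syntax approximate).
 * The "saxpy" label, target clause, and argument I/O clause are assumptions
 * for illustration, not authoritative HMPP reference syntax. */
#include <stdio.h>

/* Mark the function as a codelet that may be compiled for a GPU target. */
#pragma hmpp saxpy codelet, target=CUDA, args[y].io=inout
void saxpy(int n, float a, const float *x, float *y)
{
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    enum { N = 1024 };
    float x[N], y[N];
    for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    /* At the call site, the directive asks the compiler/runtime to run the
     * codelet on the accelerator and handle the data transfers; a compiler
     * that ignores the pragmas simply runs the plain C version. */
    #pragma hmpp saxpy callsite
    saxpy(N, 3.0f, x, y);

    printf("y[0] = %f\n", y[0]);
    return 0;
}
```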

These companies have a lot of community building to do if this is to be a successful effort. I would much rather have seen them put their weight behind something like OpenCL and try to make that better.

[UPDATE 06/25/2010] Michael Feldman did some digging with PathScale and has a pretty interesting article on this move that I’d recommend you read. Here’s an excerpt from that article that at least shows some of the thinking:

PathScale opted for HMPP directives, a set of directives invented by CAPS Enterprise for their C and Fortran GPU compilers. In the CAPS products though, the compiler just converts the HMPP C or HMPP Fortran to CUDA, which is subsequently converted into GPU assembly by NVIDIA’s CUDA back-end. PathScale, on the other hand, has attached their own back-end onto the HMPP front-end without losing any information between source-to-source translations.

Comments

  1. PathScale is alive again? I thought they’d been swallowed by QLogic/SiCortex/Cray?

  2. The PathScale compiler division has remained fairly independent of the parent companies even during the acquisitions and spin-offs of the last few years. When SiCortex went belly up last year we took the opportunity to go back into stealth mode and really get some serious engineering done. We had a lot of work to do to catch up with market trends, but I humbly say today marks the turning point when people will stop asking if we’re alive and start seeing us kick *** again.

  3. Excellent! 🙂

    Best of luck with it..

  4. John West says

    Yeah, I was very happy to see PathScale scrabble up out of the rubble of SiCortex (with a little help from Cray, as I recall). Oh, and Christopher, you can say “ass” on insideHPC. You are among friends here 🙂

  5. John – You forget that this could be syndicated… While a quote or two of mine is floating around with profanities, I prefer to keep my posts PG-13.. (and it was *a lot* of help from Cray, which we’re still quite grateful for)

  6. The OpenMP language committee is working on standardizing extensions to the OpenMP language to support programming accelerators, such as GPUs. This directive-based approach is based on a generalization of the initial work done at PGI. The accelerator sub-committee includes representatives from Cray, PGI, IBM, TI, Intel and CAPS. It is expected that the OpenMP 4.0 specification will include support for an accelerator programming model.