Michael Wolfe on what we can (and shouldn't) expect from OpenCL

PGI’s Michael Wolfe has written an interesting article on OpenCL at HPCwire this week that gives his thoughts on what we can — and what we shouldn’t — expect from OpenCL.

So, given all the hype, what can we expect from OpenCL? Is it really simple? Is it portable? Will it replace other parallel programming models? It’s still a little early; we’ve seen a multicore demonstration of OpenCL from AMD, a limited developer release from NVIDIA, and Apple is planning to release its next generation operating system in September, including OpenCL support. Yet we can prognosticate, given what we know about the language and related technologies.

I was glad to see him make this point, since I have the same objection when people gush about OpenCL and ascribe to it attributes that it clearly doesn’t have.

So, is there a metric by which we can claim that OpenCL is simple? I’m going to give a qualified no. In OpenCL, you have a host and one or more compute devices; you have a host program that launches kernels; you have task parallelism and data parallelism, work-items and work-groups; you have contexts and command queues; you have global memory, local memory, and private memory, not to mention constant memory, with different consistency models across the memory types; you have buffer objects, image objects, and sampler objects; and you have language restrictions and language extensions. It’s not simple.
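The amount of host-side setup Wolfe is enumerating is easier to appreciate in code. The sketch below (a fragment, not a complete runnable program; `n`, `src`, `host_c`, and the kernel name `vecadd` are hypothetical, and all error checking is omitted) walks through the objects the quote names — context, command queue, buffer objects, program, kernel — just to launch one trivial kernel:

```c
#include <CL/cl.h>

/* Find a platform and a device on it. */
cl_platform_id platform;
clGetPlatformIDs(1, &platform, NULL);
cl_device_id device;
clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

/* Context and command queue: two of the objects the quote mentions. */
cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
cl_command_queue q = clCreateCommandQueue(ctx, device, 0, NULL);

/* Buffer objects living in the device's global memory. */
cl_mem a = clCreateBuffer(ctx, CL_MEM_READ_ONLY,  n * sizeof(float), NULL, NULL);
cl_mem c = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, n * sizeof(float), NULL, NULL);

/* The kernel source is a string, compiled at run time for this device. */
cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
cl_kernel k = clCreateKernel(prog, "vecadd", NULL);

/* Bind arguments and launch over a 1-D range of work-items. */
clSetKernelArg(k, 0, sizeof(cl_mem), &a);
clSetKernelArg(k, 1, sizeof(cl_mem), &c);
size_t global = n;
clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
clEnqueueReadBuffer(q, c, CL_TRUE, 0, n * sizeof(float), host_c, 0, NULL, NULL);
```

And this is the easy path: it ignores work-group sizing, local memory, events, and the consistency rules across memory types.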

At least for right now, nothing about parallel programming is simple, because writing parallel programs that perform well (where “perform well” means the program runs at least as fast as needed to solve the problem at hand, executes predictably, and gives the right answer) is hard.

On the question of the long-term value of OpenCL:

My opinion: it’s likely to be more useful as a target language for higher level programming languages, tools, and environments, or as a language to implement optimized libraries, than as a language for a more general programming community.

The article is good, and I recommend you read it, especially if you are a manager who doesn’t actually write code anymore.

Comments

  1. Everyone said that MPI would just be for language and library developers too. Remember how that turned out?

  2. Well it didn’t turn out that way for MPI, but it wouldn’t have been terrible if it had.

    I think that a key difference between then and now may be that it is actually relatively easy to get an MPI program that will run. It is slightly harder to get one to run correctly, and very difficult to get one to run correctly, at scale, in a way that is performance portable. But there is a very gentle slope to get started with it. This is not true (at least from my perspective) of OpenCL (or CUDA for that matter). That pool has no shallow end.
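The “gentle slope” the commenter describes is concrete: the program below is roughly the entire MPI on-ramp — a complete, working parallel program (compiled with `mpicc` and launched with `mpirun`, assuming an MPI implementation is installed). There is no OpenCL equivalent of comparable size.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* which process am I?   */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* how many are running? */
    printf("hello from rank %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}
```

Getting from here to a correct, scalable, performance-portable MPI code is the hard part — but the shallow end exists.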
