Cray J. Henry: HPC and commodity computing finally share a common goal via multicore


Cray J. Henry (Director of the Department of Defense High Performance Computing Modernization Program) has an article at GCN discussing how multicore, or more specifically the software implications of multicore, is finally giving supercomputing and commodity computing a shared goal. He concludes:

“As the computing community struggles with this latest transition, we’re finally at a point where HPC and commodity computing have more than shared chips in common. The trick will be working together to take the best of what we know works on a large scale, avoid trying the techniques we already know don’t work, and get a solution faster that benefits us all.”

He argues that this stems from the fact that parallel programming is still hard:

“The problem, of course, is the software. How can software developers create applications that can use all of the cores efficiently on behalf of the user? When the clock speeds were going up, the same old programs ran faster, usually with no effort on the part of the software developer. But as cores are added to processors at the same clock speed, software has to be adjusted to take advantage of the new capability. The challenge of writing parallel software has been the key issue for the computational science and supercomputing community for the last 20 years. There is no easy answer; creating parallel software applications is difficult and time consuming.”
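Henry's point that "software has to be adjusted" can be made concrete with a small sketch (my own illustration, not from the article, assuming a shared-memory C++ program): a serial loop runs no faster when more cores are added, while a parallel version gains only because the programmer explicitly partitions the work across threads and combines the results.

```cpp
// Illustrative sketch: summing an array serially vs. across several threads.
// The serial loop uses one core regardless of how many are available; the
// threaded version requires the programmer to split and recombine the work.
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    std::vector<double> data(10000000, 1.0);

    // Serial version: unchanged code, no benefit from extra cores.
    double serial_sum = std::accumulate(data.begin(), data.end(), 0.0);

    // Parallel version: work must be partitioned explicitly by the programmer.
    unsigned n_threads = std::max(1u, std::thread::hardware_concurrency());
    std::vector<double> partial(n_threads, 0.0);
    std::vector<std::thread> workers;
    std::size_t chunk = data.size() / n_threads;

    for (unsigned t = 0; t < n_threads; ++t) {
        std::size_t begin = t * chunk;
        std::size_t end = (t + 1 == n_threads) ? data.size() : begin + chunk;
        workers.emplace_back([&, begin, end, t] {
            // Each thread sums its own slice into a private slot.
            partial[t] = std::accumulate(data.begin() + begin,
                                         data.begin() + end, 0.0);
        });
    }
    for (auto& w : workers) w.join();

    double parallel_sum = std::accumulate(partial.begin(), partial.end(), 0.0);
    std::cout << serial_sum << " " << parallel_sum << "\n";
    return 0;
}
```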
