AMD Fellow: acceleration makes more sense than manycore


Ashlee Vance has written over at The Register about remarks made by Chuck Moore, Senior Fellow at Advanced Micro Devices and currently the Chief Engineer of AMD’s next-generation processor design, to students at Stanford on June 4. Here’s the quote from Vance’s piece:

Overall, Moore argued that these heterogeneous machines with x86 and GPU processors will make more sense moving forward than the so-called many-cored chips that the likes of Sun and Intel are pursuing where software is spread across tens or even hundreds of similar cores.

I immediately gravitated to this article because Chuck is obviously a smart guy, and his points largely jibe with many of those I made in a presentation late last year about what HPC would look like in 2017 for the DoD. The only guy interacting with me during that talk spent most of his energy quota for the month eviscerating my point of view and generally acting as though I were a moron. Which I might be, but that sort of behavior in public just isn’t cricket (as the English say).

My points centered primarily on the practicality of designing working chips. From Vance’s piece:

Like others, Moore argued that we’ll soon run into a major software issue, as too few applications will be able to deal with many-cored chips. Things look okay with two, four and even eight core chips, but we’re in real trouble after that.
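
Moore’s scaling worry is basically Amdahl’s law at work. As a rough worked example (my numbers, not his): if 90% of an application parallelizes perfectly, the speedup on N cores is

$$ S(N) = \frac{1}{(1 - 0.9) + 0.9/N} $$

which works out to roughly 4.7x on 8 cores, roughly 9.2x on 100 cores, and never more than 10x no matter how many cores you add. Unless applications expose far more parallelism than most do today, simply multiplying identical cores hits diminishing returns quickly.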

Moore’s vision of the future

His “throughput machine” would include a number of Opteron chips up front to handle existing software and to crunch through single-threaded code. Then, you combine the Opterons with “a large number of small, power-efficient, domain optimized compute offload engines.”
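
To make that division of labor concrete, here’s a minimal sketch of the host-plus-offload split. CUDA is used below purely as a stand-in for the offload engines (Moore’s talk doesn’t name a programming API, so this is illustrative rather than anything AMD has announced): the strong x86 core runs the serial setup and control flow, and only the data-parallel loop is shipped to the accelerator.

```cuda
// Illustrative sketch only: CUDA stands in for a "domain optimized
// compute offload engine"; the host CPU plays the role of the Opteron
// running the serial, single-threaded portion of the application.
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

// The data-parallel kernel that gets offloaded to the accelerator.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main(void) {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Serial setup stays on the strong host core.
    float *hx = (float *)malloc(bytes);
    float *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Ship only the data-parallel work out to the offload engine.
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %f\n", hy[0]);   // expect 4.0
    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}
```

The interesting engineering questions in Moore’s vision are exactly the ones a sketch like this glosses over: how data moves between the host and the offload engines, how the engines are scheduled, and how much of that the programming model can hide.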

Moore spent some time specifically addressing what he sees as real problems with the Cell processor, much in the news recently for its role in the Roadrunner petaflops milestone.

IBM’s Cell chip will struggle to woo server customers looking to turbo charge certain applications because the part has a fundamental design flaw, according to…Chuck Moore.

Sure, sure. Cell is a multimedia throughput dynamo and its SPEs (Synergistic Processing Elements) are just lovely. “But something happened on the way to the ranch,” Moore said, speaking this week to a group of Stanford students. “You have to get going first on the PowerPC chip (inside Cell), and the PowerPC core is too weak to act as the central controller.”

The whole article is interesting; I recommend a read.

Comments

  1. I’m not smart enough to predict what the future will be, so I can’t say whether the many-core approach will succeed or not.

    But one point on which I do agree with C. Moore is the Cell: its weakest part is the PPU, which is a nightmare. I think it was good in 2004, but now in 2008 it is a big bottleneck.

    And it is no small detail that IBM’s Roadrunner uses a significant number of Opterons; I think that way they will probably bypass the PPU for most of the computations.

  2. I would also have to agree with John West and Chuck Moore. It simply makes more sense to place a series of operations on the portion of the compute platform that is most efficient for *those* operations [i.e., scalar vs. vector]. I don’t understand the reasoning behind cramming more cores into the same footprint in order to achieve “more scalar flops/square inch.” Granted, this mantra may hold true for the enterprise server market; “I want to run more concurrent copies of X on my server” is driving this to some extent.
    The only issue I have with Moore’s overarching statements is in regard to software. I honestly believe that both multi-core and hybrid compute solutions have, and will continue to have, a software problem if we don’t stop and think critically about the issues. Ultimately, users will not accept the current state of the market in hybrid computing: use my external cross compiler and API to achieve speedup. Nor will they accept the claim that OpenMP will solve all their problems on multi-core platforms.

  3. Joe Quinlan says

    John, this is a good article. Have you thought of trying to get insidehpc.com included as a Google News source? I think more and more people are relying on targeted RSS feeds from Google News, so this might increase your readership. An interesting web page on this subject is at…

    http://www.askdavetaylor.com/become_google_news_gnews_source.html

  4. Joe – thanks! I haven’t looked into it yet, but I’ll explore the link you sent along.