Rattner talks Exascale at ISC in Dresden


Intel’s Justin Rattner, delivering one of the keynotes at the ISC event in Dresden, presented Intel’s vision for Exascale computing and its work towards production-class Petascale computing. Rattner started by highlighting the real-world applications that represent the market for Exascale beyond just scientific computing. Then, building from Intel’s Terascale research program, he headed off towards Exaflops.

Taking the hypothetical example of a 13mm die using a 22nm process (which puts 4 billion transistors on a die), a power budget of 100W, and 48MB of cache, Rattner discussed possible application options with either 12, 48 or 144 cores. He was able to show a broad range of applications deriving considerable value from very large-scale many-core architectures. Rattner even showed Intel’s thinking at 16nm (24, 96, or 288 cores). This was followed by an overview of the experiments around packaging, memory, software stack, and programming under the Terascale program, and the lessons that Intel has learnt for its many-core future.
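
As a rough back-of-envelope illustration (my own arithmetic, not a slide from the talk), here is how that hypothetical 100W / 48MB budget divides up per core at each of the core counts Rattner discussed; the point is simply how quickly the per-core share shrinks.

```cpp
// Back-of-envelope arithmetic (mine, not from the talk): per-core share of the
// hypothetical 100W power budget and 48MB cache at the core counts discussed.
#include <cstdio>

int main() {
    const double power_w  = 100.0;  // total die power budget (W)
    const double cache_mb = 48.0;   // total on-die cache (MB)
    const int core_counts[] = {12, 48, 144};

    for (int cores : core_counts) {
        std::printf("%3d cores: %5.2f W/core, %5.2f MB cache/core\n",
                    cores, power_w / cores, cache_mb / cores);
    }
    return 0;
}
```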

Then Ct (Intel’s new throughput-oriented parallel language) was presented, with notable attention to vector data types – and indeed to vector/SIMD execution units. Rattner also presented this as emerging from the Terascale research program.
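
Ct’s own syntax wasn’t something I can reproduce here, but as a rough illustration of what vector data types buy you – work expressed as operations on whole vector objects rather than explicit element-by-element loops, with the mapping onto SIMD hardware left to the compiler and runtime – here is a sketch in plain standard C++ (std::valarray standing in for Ct’s vector types).

```cpp
// Not Ct code; a standard C++ stand-in for the data-parallel vector style:
// whole-vector expressions instead of explicit per-element loops.
#include <valarray>
#include <iostream>

int main() {
    std::valarray<float> a = {1.0f, 2.0f, 3.0f, 4.0f};
    std::valarray<float> b = {0.5f, 0.5f, 0.5f, 0.5f};

    // One whole-vector expression: no index variable, no loop in the source.
    std::valarray<float> c = a * b + 2.0f;

    for (float x : c) std::cout << x << ' ';   // prints 2.5 3 3.5 4
    std::cout << '\n';
    return 0;
}
```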

Next up was the hot (even controversial, in his own words) topic of Larrabee. It will be composed of many x86-compatible cores, each with a vector unit, tied together by an on-die interconnection network and shared cache, with on-die I/O and memory interfaces and some on-die fixed-function logic (presumably dedicated hardware acceleration for the usual list of things like cryptographic functions). He talked about the collision course between increasingly many-core CPUs and increasingly general-purpose GPUs, though he denied the ‘battle for control’ of the press hype.
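
To make the “many x86 cores, each with a vector unit” idea concrete, here is a conceptual sketch (mine, not Intel’s) of the kind of code such a design invites: ordinary threads, one per core, each running a simple vectorisable loop over its own slice of a shared array. The thread count and workload are purely illustrative.

```cpp
// Conceptual sketch only: many ordinary threads (one per core), each running a
// plain loop the compiler can auto-vectorise for that core's SIMD unit.
#include <algorithm>
#include <iostream>
#include <thread>
#include <vector>

int main() {
    const std::size_t n = 1 << 20;
    const unsigned cores = std::max(1u, std::thread::hardware_concurrency());
    std::vector<float> data(n, 1.0f);

    std::vector<std::thread> workers;
    const std::size_t chunk = n / cores;
    for (unsigned c = 0; c < cores; ++c) {
        workers.emplace_back([&data, c, cores, chunk, n] {
            const std::size_t lo = c * chunk;
            const std::size_t hi = (c + 1 == cores) ? n : lo + chunk;
            // Simple per-element work over this thread's slice of the shared array.
            for (std::size_t i = lo; i < hi; ++i)
                data[i] = data[i] * 2.0f + 1.0f;
        });
    }
    for (auto& t : workers) t.join();

    std::cout << "data[0] = " << data[0] << '\n';   // expect 3
    return 0;
}
```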

Intel’s firm commitment to making parallel computing pervasive was covered, and two already announced HPC news items – the SGI, Intel, NASA collaboration, and the Cray and Intel deal – were mentioned. Rattner then listed CEA, Julich and LRZ as examples of European partners that Intel is talking with on Petascale technologies.

Finally, Rattner extrapolated towards Exascale. One interesting piece of this was a chart showing an outline power budget for an Exaflops machine using predictable technologies – with compute components consuming 70MW, 80MW for memory, 70MW for communications, 10MW for disk – and over a hundred MW more for ‘unknowns’ – leading to the GW range for the complete Exaflops system.

Comments

  1. Science writer says

    You folks need some serious copyediting. Even a simple proofread would offer a great improvement. For example, what exactly does this mean? “…he discussed the applications options with either 12, 48 or 144 cores, and concluding the broader range of applications vale deriving from many-core architectures.”

    First, if you parse the sentence out, you have written the following: He discussed the applications’ options with either 12, 48 or 144 cores. He concluding the broader range of applications vale deriving form many-core architecture.

    The verb should be parallel to the noun: “he…concluded”. I can’t imagine what is meant by “vale deriving form.”

  2. Science Writer – actually, you are 100% right. That piece was written by one of our casual contributors. We were (and are) very glad to have his thoughts on the meeting since we couldn’t attend, but we clearly should have done our readers the courtesy of a more careful review. This is a mistake that I think we are less likely to make today (this post is over a year old and we have evolved), but it will probably still happen from time to time. I appreciate you taking the time to point out our shortcoming; I’ve made corrections that I think improve the readability without doing a total rewrite (to preserve archival integrity).

    Thanks again for leaving your comment.