Do Theoretical FLOPS Matter for Real Application Performance?


https://www.youtube.com/watch?v=9-2Ah5QZLxc

In this video, Josh Mora from AMD presents: Do Theoretical FLOPS Matter for Real Application Performance? Recorded at the HPC Advisory Council Spain Workshop 2012 in Malaga.

Do theoretical FLOPS matter for real application performance? The most honest answer to this question is: "it depends on the application." To validate this experimentally, a modified AMD processor code-named Fangio (based on the AMD Opteron 6275) is used, whose floating-point capability is limited to 2 FLOPs/clk/BD unit. It delivers slightly lower (-8% on average) but still comparable performance to the AMD Opteron 6276, which has four times the floating-point capability, i.e., 8 FLOPs/clk/BD unit.

The intention of this work is:

  1. To demonstrate that the FLOPs/clk/core of a microprocessor architecture is not necessarily a good performance indicator, despite its heavy use in industry.
  2. To show that compiler code-vectorization technology is fundamental to extracting as much real application performance as possible, but still has a long way to go.
  3. To note that compilers are not exclusively to blame: many algorithms are not designed and written in a way that lets compilers exploit vector instructions (i.e., SSE, AVX, and FMA).

Download the slides (PDF).
