How Computer Games Help HPC


In this special guest feature, Tom Wilkie from Scientific Computing World takes stock of all the technical information gathered at yesterday’s GTC 2012 keynote.

The latest processor from Nvidia will lead to ‘the democratisation of computing happening in front of us,’ according to Jen-Hsun Huang, president and chief executive of the company.

He unveiled the new chip, known as ‘Kepler’, to an audience of nearly 3,000 scientists and engineers at Nvidia’s GPU Technology Conference in San Jose, California, on 15 May. It was, he said, more than three times as energy efficient as its predecessor.

Nvidia specialises in graphics processing units and is one of the major suppliers of graphics cards for PCs, but the technology is now widely used as an accelerator in high-performance computers. Kepler was, he said, the most energy-efficient GPU ever built, and he expected it to advance high-performance computing, computer graphics and cloud computing. In HPC, he said, ‘We know that ultimate performance is limited by energy efficiency and at the chip architecture level we have had to design for energy efficiency and this is a huge step forward.’

Video: http://www.youtube.com/watch?v=MNbmpVVhfJw

Among the applications in HPC that he demonstrated was a massive simulation of the collision between our own galaxy, the Milky Way, and the nearby Andromeda galaxy – an event expected some three billion years or so into the future. The simulation involved a many-body problem of millions of gravitationally interacting stars – a highly intensive computational problem.
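
At heart, the demonstration is an all-pairs gravitational N-body calculation, a workload that maps naturally onto GPUs because each body’s force update is independent of the others. As a rough sketch of the idea – not Nvidia’s demo code – a direct-summation CUDA kernel might look like the following; the body count, the softening constant EPS2 and the zeroed initial conditions are placeholder assumptions for illustration.

    #include <cstdio>
    #include <cuda_runtime.h>

    #define EPS2 1e-9f  // softening term: keeps r^2 > 0 (illustrative value)

    // Direct-summation gravitational acceleration: each thread owns one body
    // and accumulates the pull of every other body. The j == i term adds
    // nothing because dx = dy = dz = 0 under the softened distance.
    __global__ void accel(const float4* pos, float3* acc, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        float4 pi = pos[i];                      // xyz = position, w = mass
        float3 a = make_float3(0.f, 0.f, 0.f);
        for (int j = 0; j < n; ++j) {
            float4 pj = pos[j];
            float dx = pj.x - pi.x, dy = pj.y - pi.y, dz = pj.z - pi.z;
            float r2 = dx*dx + dy*dy + dz*dz + EPS2;
            float invR = rsqrtf(r2);
            float s = pj.w * invR * invR * invR; // m_j / r^3 (G folded into units)
            a.x += s * dx;  a.y += s * dy;  a.z += s * dz;
        }
        acc[i] = a;
    }

    int main() {
        const int n = 4096;
        float4* dPos;  float3* dAcc;
        cudaMalloc(&dPos, n * sizeof(float4));
        cudaMalloc(&dAcc, n * sizeof(float3));
        cudaMemset(dPos, 0, n * sizeof(float4)); // stand-in for real initial conditions
        accel<<<(n + 255) / 256, 256>>>(dPos, dAcc, n);
        cudaDeviceSynchronize();
        printf("computed accelerations for %d bodies\n", n);
        cudaFree(dPos);  cudaFree(dAcc);
        return 0;
    }

In practice, simulations with millions of stars use hierarchical methods such as Barnes–Hut rather than the O(n²) loop shown here; the sketch only illustrates the inner force calculation that keeps the GPU busy.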

But according to Sumit Gupta, head of Nvidia’s Tesla high-performance computing business, supercomputing will be the beneficiary of the Kepler chip’s other applications – in gaming, virtualisation and cloud computing. It is because Nvidia has such a strong presence in these high-volume consumer markets that it can produce its processors so cheaply. And it is this aspect, according to Gupta, that is leading to the ‘democratisation of high performance computing’ proclaimed by Huang.

‘With the same GPU,’ Gupta said, ‘we can go into many different markets. Cloud gaming will be a huge market – we are able to leverage all of these high-volume markets and get into HPC at a price point other people cannot.’

Nvidia is launching two versions of the processor. One, available almost immediately, offers single precision and will suit scientific applications such as seismic profiling. The other, known as the K20, adds double precision along with enhanced queuing and parallelism, but it will not be available until the last quarter of this year.
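
The ‘enhanced queuing and parallelism’ in the K20 maps onto the Hyper-Q and dynamic-parallelism features Nvidia has described for the Kepler architecture. Dynamic parallelism lets code already running on the GPU launch further kernels, so data-dependent work can be spawned without a round trip to the CPU. A minimal sketch, assuming a device of compute capability 3.5 or higher and compilation with nvcc -arch=sm_35 -rdc=true -lcudadevrt:

    #include <cstdio>

    __global__ void child(int parentThread) {
        printf("child launched by parent thread %d\n", parentThread);
    }

    // With dynamic parallelism, a kernel can launch child kernels itself;
    // the parent grid is not considered complete until its children finish.
    __global__ void parent() {
        if (threadIdx.x == 0) {
            child<<<1, 4>>>(threadIdx.x);   // device-side kernel launch
        }
    }

    int main() {
        parent<<<1, 1>>>();
        cudaDeviceSynchronize();            // host waits for parent and children
        return 0;
    }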

He pointed out that ‘with Kepler you can build a petaflop system in just ten racks of servers. Two years ago, Tokyo Tech built a petaflop machine with Fermi [the predecessor to Kepler] and it took them 42 racks.’ To build a machine of similar performance based on Intel’s Sandy Bridge processor would take about 100 racks of servers. ‘So Kepler is 10 times better than Sandy Bridge in terms of petaflops,’ he claimed. He also said that there would be a roughly tenfold improvement in power consumption, with a 1 petaflop Kepler-based machine consuming just 400 kW as opposed to around 3 MW with Sandy Bridge.
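
Taking the quoted figures at face value – they are peak, vendor-supplied numbers – the efficiency claim is easy to check with back-of-the-envelope arithmetic (plain C, buildable with nvcc or any C compiler):

    #include <stdio.h>

    /* Sanity-check the quoted figures: 1 petaflop at 400 kW (Kepler)
       versus 1 petaflop at ~3 MW (Sandy Bridge). */
    int main(void) {
        const double pflops   = 1e15;   /* 1 petaflop/s      */
        const double kepler_w = 400e3;  /* 400 kW, as quoted */
        const double sandy_w  = 3e6;    /* ~3 MW, as quoted  */
        printf("Kepler:       %.2f gigaflops/W\n", pflops / kepler_w / 1e9); /* ~2.50 */
        printf("Sandy Bridge: %.2f gigaflops/W\n", pflops / sandy_w / 1e9);  /* ~0.33 */
        printf("ratio:        %.1fx\n", sandy_w / kepler_w);                 /* ~7.5x */
        return 0;
    }

With these numbers the power ratio comes out at roughly 7.5x – in the same ballpark as, though a little short of, the tenfold figure quoted above.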

‘A petaflop machine of this size means that every university in the world can put one in,’ he said. He estimated that it would cost less than $4M for a petaflop machine, whereas in the recent past people have spent $30M to $40M to get the same performance. ‘There are universities out there that consume 400kW with a 10 rack system but they only get 20 teraflops, so they have this outlay but they are getting a twentieth of what they could be getting.’

But Gupta promised that Kepler was only one step along the road. ‘From my perspective,’ he said, ‘Kepler is a bigger shift than we have ever done before – much more revolutionary – there is so much innovation for us still to do. It’s a long road.’

This story originally appeared on HPC Projects. It appears here as part of a cross-publishing agreement with Scientific Computing World.

Comments

  1. FruitVendor says

    Hey guys, let’s compare apples to oranges!

    Massive parallelism is not the aim of any of Intel’s core products. That’s why they have a separate MIC architecture. Of course you can fool a lot of people with an article like this, but perhaps you should understand what you want to print before you print it.