Scaling the Supercomputing Energy Wall with GPUs

Our post this week on the KQED podcast got NVIDIA’s Sumit Gupta thinking that there was something missing from the puzzle:

“Here at NVIDIA, we’ve been working on a solution to the supercomputing power crisis for several years. Supercomputers can use NVIDIA Tesla GPUs to dramatically accelerate supercomputing applications. Like a turbocharger on your car, GPUs kick in to boost your standard Intel or AMD CPUs when you need the extra oomph. Using GPUs is a much more energy-efficient way of supercomputing: you choose the right processor to do the right job. When I edit pictures of my kids, for example, my computer’s sequential Intel or AMD x86 CPU is used to access the hard disk, retrieve the file, and open it. Once the picture is open and I want to do red-eye reduction or remove the blur, the GPU kicks into gear to accelerate the job.
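The red-eye example works for a GPU because each pixel can be corrected independently of every other pixel. A minimal sketch of that division of labor, in pure Python (the pixel format, threshold, and correction rule below are illustrative assumptions, and a plain loop stands in for the GPU's parallel threads):

```python
# Why red-eye reduction suits a GPU: every pixel is processed independently,
# so the work maps naturally onto thousands of parallel GPU threads, while
# the sequential file handling stays on the CPU.

def reduce_red_eye(pixel):
    """Dampen the red channel of one (r, g, b) pixel if red dominates."""
    r, g, b = pixel
    if r > 150 and r > 2 * max(g, b):   # crude "red eye" test (assumption)
        r = (g + b) // 2                # replace red with a neutral value
    return (r, g, b)

# CPU part: sequential work -- loading the image (a tiny in-memory stand-in).
image = [(200, 40, 30), (90, 80, 85), (210, 50, 45)]

# GPU part: a data-parallel map -- on a real GPU each pixel would get its
# own thread; here a list comprehension stands in for that parallel loop.
corrected = [reduce_red_eye(p) for p in image]
print(corrected)  # -> [(35, 40, 30), (90, 80, 85), (47, 50, 45)]
```

Because there is no dependency between pixels, the same code pattern (a map over independent elements) is exactly what GPU programming models such as CUDA express.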

Three of the Top Five supercomputers in the world are accelerated by NVIDIA Tesla GPUs. One of these is the Tsubame 2.0 system at the Tokyo Institute of Technology. Like the Hopper system at LBNL, it delivers 1 petaflop/s of performance. But thanks to its GPUs, it consumes less than half the power of the Hopper system. To be exact, Tsubame achieves 1.19 petaflop/s and sips a “mere” 1.4 megawatts of electricity.”
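The quoted figures make the efficiency gap concrete. A quick worked calculation (only the Tsubame numbers are stated exactly; the Hopper bound is inferred from "1 petaflop/s" at "more than twice" Tsubame's power):

```python
# Energy efficiency from the figures quoted in the post.
tsubame_flops = 1.19e15   # 1.19 petaflop/s
tsubame_watts = 1.4e6     # 1.4 MW

efficiency = tsubame_flops / tsubame_watts  # flop/s per watt
print(f"Tsubame 2.0: {efficiency / 1e6:.0f} megaflop/s per watt")  # -> 850

# Implied upper bound for Hopper: 1 petaflop/s at more than 2 x 1.4 MW
# (an inference from "less than half the power", not a quoted figure).
hopper_bound = 1.0e15 / (2 * tsubame_watts)
print(f"Hopper (bound): under {hopper_bound / 1e6:.0f} megaflop/s per watt")
```

On these numbers the GPU-accelerated Tsubame is more than twice as energy efficient per flop as the CPU-only Hopper.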

Read the Full Story.
