Today Nvidia announced that the hybrid Eurora supercomputer at Cineca in Italy has set a new record for data center energy efficiency using Kepler GPUs. Built by Eurotech, the hot water-cooled Eurora system reached 3,150 megaflops per watt of sustained performance, which is 26 percent better than the top system on the most recent Green500 list.
“Advanced computer simulations that enable scientists to discover new phenomena and test hypotheses require massive amounts of performance, which can consume a lot of power,” said Sanzio Bassini, director of the HPC department at Cineca. “Equipped with the ultra-efficient Aurora system and NVIDIA GPU accelerators, Eurora will give European researchers the computing muscle to study all types of physical and biological systems, while allowing us to keep data center power consumption and costs in check.”
Pairing NVIDIA Tesla K20 GPUs with Eurotech’s Aurora Hot Water Cooling technology, the Eurora system is more efficient and compact than conventional air-cooled solutions. HPC systems based on the Eurora hardware architecture, including the Eurotech Aurora Tigon, enable data centers to potentially reduce energy bills by up to 50 percent and reduce total cost of ownership by 30-50 percent.
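For context, the Green500-style efficiency metric behind these figures is simply sustained LINPACK performance divided by power draw. Here is a minimal sketch using only the numbers quoted above; the 100 kW figure in the second example is a hypothetical illustration, not a measurement from Eurora:

```c
#include <stdio.h>

int main(void) {
    /* Figures from the announcement above (treated as exact for illustration):
       Eurora's sustained efficiency and its stated margin over the previous
       Green500 leader. */
    const double eurora_mflops_per_watt = 3150.0;
    const double improvement = 0.26;               /* 26 percent better */

    /* Back out the efficiency the previous #1 system must have delivered. */
    double previous_leader = eurora_mflops_per_watt / (1.0 + improvement);
    printf("Implied previous Green500 leader: ~%.0f MFLOPS/W\n", previous_leader);

    /* MFLOPS/W is sustained LINPACK performance divided by power, so a
       hypothetical 100 kW system would need to sustain ~315 TFLOPS to match. */
    double example_power_watts = 100000.0;
    double example_gflops = eurora_mflops_per_watt * example_power_watts / 1000.0;
    printf("At 100 kW, that efficiency corresponds to ~%.0f GFLOPS sustained\n",
           example_gflops);
    return 0;
}
```

Running the numbers shows the previous Green500 leader must have delivered roughly 2,500 megaflops per watt.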
In this video from ISC’12 in Hamburg, Giovanbattista Mattiussi from Eurotech describes the company’s prototype liquid-cooled Eurora GPU technology.
With the SC12 Student Cluster Competition fast approaching, the Radio Free HPC team looks back at ISC’12 and its first-ever Student Cluster Challenge. Along the way, they discuss how the cluster challenge makes participating students a hot commodity on the job market, while Dan Olds proposes a joint challenge to crown a world student cluster champion.
In this video, James Reinders from Intel describes the company’s upcoming Xeon Phi coprocessors and how they provide programmers with easy access to parallelism while preserving compatibility.
Last November, we demonstrated our first silicon of the Intel Xeon Phi coprocessor, code-named “Knights Corner”. It produced an astounding teraflop of performance in a processor the size of your thumb, putting the industry on notice of the potential of many-core architectures and providing a clear path to the Petascale and Exascale era. That is the same performance as the number 1 supercomputer on the TOP500 list in 1997, ASCI Red, which used thousands of processors and filled a room with cabinets. Knights Corner quickly earned the nickname “Supercomputer on a Chip”.
We had a terrific time at ISC’12 filming dozens of videos with key HPC vendors. In case you missed them, here are the featured programs:
Steering HPC Cluster Jobs with the Altair PBS Pro Workload Manager. In this video, HP’s Ed Turkel discusses the importance of robust workload management software for HPC clusters. The company partners with Altair to package PBS Professional workload management software with its systems so that customers can get their applications up and running easily.
IBM Blue Gene/Q and iDataPlex at ISC’12. In this video, Jay Muelhoefer, Worldwide Marketing Executive at IBM, describes the innovative Blue Gene/Q technology that powers the Sequoia supercomputer. Sequoia came in at #1 on the TOP500 with 16.3 Petaflops of performance.
Interview: SGI’s Big Brain Supercomputer Enables Scientific Discovery. In this video, Bill Mannel from SGI discusses the new “Big Brain” supercomputer called the SGI Altix UV 2. He also describes the company’s new Sandy Bridge cluster solutions, TOP500 results, and a recent win in Finland for a container-based supercomputer. Recorded at ISC’12 in Hamburg.
Xyratex Commitment to Lustre and OpenSFS. In this video, Torben Kling-Petersen from Xyratex discusses the company’s commitment to the Lustre file system and why it is important for them to be a Promoter-level member of the OpenSFS community.
Mike Stolz on the new ClusterStor 6000 for HPC. In this video, Mike Stolz from Xyratex describes the company’s new ClusterStor 6000 product and why Xyratex is leading the way in HPC storage by delivering performance without sacrificing reliability.
Demand for High Performance Computing is growing in both the public and private sectors, and HPC is highly energy-intensive. The Federal government is required by the Energy Independence and Security Act of 2007 (EISA) to reduce energy intensity in all facilities, including laboratories and industrial buildings, by 30% by 2015. The increasing need for HPC and the attendant energy intensity threaten to derail progress toward this and other goals. By meeting mandated energy reductions, the Federal government is poised to lead by example in energy efficiency.
In this video, Jun Liu from Inspur describes the company’s advanced server technologies. Inspur is the largest server company in China, and its components power the Tianhe-1A and Sunway Bluelight supercomputers.
In this video, Michael Wolfe from The Portland Group discusses recent tutorials on OpenACC, the glut of available HPC architectures, and what the advent of 1.5-million-core systems means to programmers.
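The tutorials themselves aren’t reproduced here, but a minimal OpenACC sketch gives the flavor of the directive-based model Wolfe discusses. The SAXPY kernel, file name, and compiler invocation below are illustrative assumptions, not material from the video:

```c
/* A minimal OpenACC sketch (not from the tutorials): a SAXPY loop annotated
   with directives so the same C source can target an accelerator or fall back
   to the host. Compile with an OpenACC compiler, e.g. PGI: "pgcc -acc saxpy.c". */
#include <stdio.h>
#include <stdlib.h>

void saxpy(int n, float a, const float *x, float *y) {
    /* The pragma asks the compiler to parallelize the loop and to manage
       data movement between host and accelerator. */
    #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

int main(void) {
    int n = 1 << 20;
    float *x = malloc(n * sizeof *x);
    float *y = malloc(n * sizeof *y);
    for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy(n, 3.0f, x, y);

    printf("y[0] = %f (expected 5.0)\n", y[0]);
    free(x);
    free(y);
    return 0;
}
```

The appeal of this style is that the directives are hints rather than a rewrite: the same loop remains valid serial C if the pragmas are ignored.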
In this video, Giovanbattista Mattiussi from Eurotech discusses the company’s prototype liquid-cooled Eurora GPU technology.
“The new Eurora prototype system will be available to scientists in Europe, and will represent the first step toward the creation of a ‘Datacentric Exascale Lab’ in Italy,” says Sanzio Bassini, CINECA HPC Associate Director.
“The first of the major awards was for the highest LINPACK score,” said Dan Olds from Gabriel Consulting. “It’s amazing how much difference six months makes when it comes to computer hardware. At the SC11 Student Cluster Competition in Seattle, six out of the eight teams broke the Teraflop barrier with scores ranging from 1.127 to Team Russia’s GPU-fueled high score of 1.926. And the year before, at SC10 in New Orleans, only three teams topped a Teraflop – barely. But now, a mere six months later, all five ISC Student Cluster Challenge teams turned in Teraflop+ LINPACK scores. In fact, the lowest score this year would have finished in the top three or four last year in Seattle. The high score, a stunning 2.651 TF/s, was turned in by China’s NUDT team. Their GPU-laden configuration paid off when it came to LINPACK: they left the rest of the field in the dust.”
In this video, Mellanox VP of Market Development Gilad Shainer discusses new product developments and how InfiniBand has grown to dominate the TOP500.
“InfiniBand becoming the most used interconnect on the TOP500 is a significant milestone and achievement for Mellanox. We believe InfiniBand surpassing Ethernet in high-performance computing is a forward-looking sign that it will also become the interconnect of choice for cloud and Web 2.0 data centers, as they are all based on similar architecture concepts,” said Eyal Waldman, president, chairman and CEO of Mellanox Technologies. “With the majority of the world’s Petaflop systems, as well as the top two most efficient systems on the list, Mellanox FDR 56Gb/s InfiniBand and 10/40GbE interconnect solutions with PCI Express 3.0 provide the best return-on-investment with leading system efficiency without sacrificing performance.”
For product announcements at ISC’12, Mellanox rolled out Connect-IB, a foundation for Exascale interconnect technology that delivers throughput of 100Gb/s utilizing PCI Express 3.0 x16. Read the Full Story.
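As a rough back-of-the-envelope check on why a PCIe 3.0 x16 host interface is needed for that kind of throughput, here is a short sketch; the transfer rate and encoding overhead are general PCIe 3.0 parameters, not Mellanox specifications:

```c
#include <stdio.h>

int main(void) {
    /* PCIe 3.0 signals at 8 GT/s per lane with 128b/130b encoding. */
    const double gtransfers_per_lane = 8.0;            /* GT/s */
    const double encoding_efficiency = 128.0 / 130.0;
    const int lanes = 16;

    /* Usable bandwidth per direction, in Gb/s: roughly 126 Gb/s for x16. */
    double gbps = gtransfers_per_lane * encoding_efficiency * lanes;
    printf("PCIe 3.0 x16 usable bandwidth: ~%.0f Gb/s per direction\n", gbps);

    /* FDR InfiniBand runs at 56 Gb/s per port, so an adapter pushing roughly
       100 Gb/s of aggregate traffic needs a x16 Gen3 slot rather than x8. */
    printf("An x8 slot would cap out near ~%.0f Gb/s\n", gbps / 2.0);
    return 0;
}
```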
In this video, Jay Muelhoefer, Worldwide Marketing Executive at IBM, discusses how Platform Computing has transitioned into its new role in shaping technical computing at Big Blue.
The new IBM Platform Cluster Manager enables clients to self-provision clusters in minutes and automatically, dynamically manage cluster environments that include both IBM Platform Computing and non-IBM workload managers.
Recorded at ISC’12 in Hamburg. Read the Full Story on IBM’s new initiatives in HPC and Big Data.
With a little help from the University of Hamburg, the good folks at ISC’12 have posted videos from their Hot Seat vendor sessions. These six-minute spots are a great way to get the latest on your favorite HPC providers.