Moscow State Taps T-Platforms to Build 10 Petaflops Super


By Timothy Prickett Morgan

In what rings almost like an echo of Cold War-era scientific competition, Moscow State University is putting together a supercomputer it hopes will take it back up the international rankings.

Now, MSU has tapped its favorite contractor, T-Platforms, to build a hybrid CPU-GPU machine that will weigh in at 10 petaflops of peak performance and should vault it back toward the top of the HPC hit parade. T-Platforms has built several generations of rack and blade setups for MSU over the past couple of years.

MSU’s current machine, nicknamed “Lomonosov” after the 18th century Russian polymath, is also a ceepie-geepie machine that augments the number-crunching oomph of Xeon x86 processors from Intel with fanless Tesla X2070 GPU coprocessors from Nvidia.

T-Platforms’ T-Blade 2 chassis and blades are among the most cleverly engineered boxes on the market, being able to cram 16 server nodes, each with two Xeon processors and two Tesla coprocessors, into a 7U chassis and not actually melt. (See this story for the full details of the current Lomonosov machine.)
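To put that density into rough numbers, here is a back-of-envelope tally sketched in Python. The per-chassis figures come straight from the description above; the per-rack figures assume a standard 42U rack fully stacked with 7U enclosures, which is an assumption rather than anything T-Platforms has specified.

```python
# Back-of-envelope density for the T-Blade 2 chassis described above.
# Rack-level figures assume a standard 42U rack fully populated with
# 7U enclosures -- an assumption, not a vendor specification.

NODES_PER_CHASSIS = 16      # server nodes per 7U T-Blade 2 enclosure
XEONS_PER_NODE = 2          # two Xeon sockets per node
TESLAS_PER_NODE = 2         # two Tesla GPU coprocessors per node
CHASSIS_HEIGHT_U = 7
RACK_HEIGHT_U = 42          # assumed standard rack

chassis_per_rack = RACK_HEIGHT_U // CHASSIS_HEIGHT_U   # 6 enclosures
nodes_per_rack = chassis_per_rack * NODES_PER_CHASSIS  # 96 nodes

print(f"Per chassis: {NODES_PER_CHASSIS * XEONS_PER_NODE} Xeons, "
      f"{NODES_PER_CHASSIS * TESLAS_PER_NODE} Teslas in {CHASSIS_HEIGHT_U}U")
print(f"Per rack (assumed 42U): {nodes_per_rack} nodes, "
      f"{nodes_per_rack * XEONS_PER_NODE} Xeons, "
      f"{nodes_per_rack * TESLAS_PER_NODE} Teslas")
```

On those assumptions, a single rack works out to 96 nodes, 192 Xeons, and 192 Teslas, which is what makes the "not actually melt" part impressive.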

Lomonosov uses quad data rate (QDR) InfiniBand to interconnect the nodes, and the GPUs are lashed to the CPUs (one per socket) through the PCI-Express 2.0 bus in the Intel chipset. It has a peak theoretical performance of 1.37 petaflops, with 510 teraflops coming from a chunk of machines based only on x86 processors – specifically four-core Xeon E5570s and six-core Xeon X5670s.

There are a total of 43,520 cores on this part of the box, which is based on an early T-Blade blade server. This initial Lomonosov machine was augmented with 777 ceepie-geepie T-Blade 2 blade servers, which have a total of 6,216 Xeon cores and 1,554 GPUs with a total of 795,648 cores. The GPUs deliver the vast majority of the additional 863 teraflops coming from the hybrid CPU-GPU blades.
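As a rough sanity check on that split, here is a hedged back-of-envelope calculation. The per-device peaks used below (about 515 double-precision gigaflops per Tesla X2070 and roughly four flops per clock per core on a 2.93 GHz quad-core Xeon) are ballpark figures from public datasheets, not numbers supplied by MSU or T-Platforms.

```python
# Hedged sanity check of the Lomonosov hybrid partition's peak-flops math.
# Per-device peaks are ballpark assumptions from public datasheets,
# not figures from MSU or T-Platforms.

GPUS = 1554                  # Tesla X2070 coprocessors in the hybrid blades
GPU_PEAK_DP_GFLOPS = 515.0   # assumed double-precision peak per X2070

XEON_CORES = 6216            # Xeon cores in the hybrid blades
CORE_GHZ = 2.93              # assumed clock for the quad-core Xeons
FLOPS_PER_CLOCK = 4          # assumed DP flops per core per cycle

gpu_tf = GPUS * GPU_PEAK_DP_GFLOPS / 1000.0
cpu_tf = XEON_CORES * CORE_GHZ * FLOPS_PER_CLOCK / 1000.0

print(f"GPU contribution: ~{gpu_tf:.0f} TF")            # ~800 TF
print(f"CPU contribution: ~{cpu_tf:.0f} TF")            # ~73 TF
print(f"Hybrid blades total: ~{gpu_tf + cpu_tf:.0f} TF vs 863 TF cited")
```

That comes out within a couple of percent of the 863 teraflops cited above, and it bears out the point that the GPUs are doing the overwhelming bulk of the flops in the hybrid blades.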

T-Platforms and Moscow State are not being terribly specific about the configuration of Lomonosov's successor. Rather than upgrading the existing machine, Moscow State this time around is asking T-Platforms to build a new 10 petaflops cluster based on a dense-pack rack server design, one that Alexey Komkov, vice president of products and technology at T-Platforms, says will include a custom rack design.

The machine will probably look like the rackish-bladish tray servers that Hewlett-Packard and Dell sell to hyperscale data center and HPC customers these days. The custom racks will include warm-water cooling on the server nodes, according to Komkov.

T-Platforms has pitched a mix of compute nodes to Moscow State to come up with a 10 petaflopper. One node type will be CPU-only, using either “Sandy Bridge” or “Ivy Bridge” Xeon processors from Intel, most likely in two-socket configurations.

The second type of node in the machine will sport Sandy Bridge Xeons (again, very likely the Xeon E5s, due in early 2012) plus Nvidia’s impending “Kepler” next-generation GPU coprocessors (also due in 2012 and, like the Xeon E5s, also running late). The third node type will mix Sandy Bridge processors and Intel Many Integrated Core (MIC) coprocessors, if they are available in 2012 for inclusion in the machines. ®

This article originally appeared in The Register. It appears here in its entirety as part of a cross-publishing agreement.