AMD’s John Fruehe posted on his blog yesterday that Goethe University is building a GPU-CPU hybrid cluster, but not in the usual (i.e., NVIDIA) way. They are using Magny-Cours processors paired with AMD GPUs, which is a much less common choice.
This cluster will combine 1,544 12-core AMD Opteron processors (that’s a total of 18,528 cores) with 772 AMD GPUs. That amounts to one GPU card per 2P node.
To help bring the whole set of servers together into a cohesive cluster, Bright Cluster Manager(tm) software will be utilized, and Mellanox quad data rate InfiniBand will serve as the interconnect between the nodes. To help maximize data center space, the cluster will be built on SuperMicro “twin” platforms, which provide two motherboards in a 2U rack server chassis. This allows the 772 total motherboards to be housed in 386 physical servers. To give you an idea of the size of this cluster, 386 2U servers would fill about 18 42U racks if you were to put in only the servers (and no networking equipment).
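If you want to sanity-check the node and rack arithmetic, a quick sketch does it (all figures come from the post; the variable names are mine):

```python
# Figures from the article
processors = 1544        # 12-core AMD Opteron CPUs
cores_per_cpu = 12
cpus_per_node = 2        # 2P nodes, one GPU card each
boards_per_chassis = 2   # SuperMicro "twin": 2 motherboards per 2U chassis
chassis_height_u = 2
rack_height_u = 42

total_cores = processors * cores_per_cpu        # total CPU cores
nodes = processors // cpus_per_node             # nodes (= GPU count)
chassis = nodes // boards_per_chassis           # physical 2U servers
rack_units = chassis * chassis_height_u         # total rack units
racks_needed = rack_units / rack_height_u       # racks, servers only

print(total_cores, nodes, chassis, round(racks_needed, 1))
```

The fraction works out to a little over 18 racks of nothing but servers, which matches the ballpark above.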
[John corrected his post, and we subsequently changed our post, following a set of comments from an insideHPC reader who noticed some errors leading to the correction. See the comment stream below for details.] The university is partnering with ClusterVision, who we’ve written about before.
Fruehe posted this under the headline “Fusion for Servers Happening Today,” which of course it isn’t. The point of the Fusion project is to put the GPU on the same die as the CPU, and this is very much the standard add-on-card approach. Not only is Fusion for servers not happening today, AMD won’t even talk about when it might happen. But what can you expect from an AMD marketing guy?