In this video from the 2014 HPC Advisory Council Europe Conference, Rich Graham from Mellanox presents: Scalable HPC Communication Capabilities.
Mike Bernhardt from Intel writes that the company will continue to demonstrate an “unswerving commitment to HPC” at next week’s International Supercomputing Conference. “If you want to keep up with where HPC is going, be sure to catch as many of the Intel presentations as you can fit into your calendar. They’ll be pretty hard to miss.”
“With Fabric Integration, you pick up five value vectors. One is an increase in performance; the closer you can drive the fabric to the CPU, the more things you can do to increase the overall performance of both the CPU and the fabric together. Number two, you pick up density, because now you’re not taking up any board space or PCIe slots and things like that. Number three, you also pick up options for improved value in terms of price per performance. Number four, you reduce power. And number five, by getting rid of things like the PCIe bus, you reduce componentry – which again reduces power – as well as improves reliability.”
The basic HPC cluster consists of at least one management/login node connected to a network of many worker nodes. Depending on the size of the cluster, there may be multiple management nodes used to run cluster-wide services, such as monitoring, workflow, and storage services. This insideHPC article series looks at the Five Essential Strategies for Managing HPC Clusters.
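That layout is easy to picture as a small data model. The Python sketch below is purely illustrative: the node names (mgmt01, node001, and so on) and the way the cluster-wide services are assigned to the management node are assumptions for the example, not taken from any particular cluster stack.

```python
# Minimal sketch of the basic cluster layout described above:
# one management/login node plus many worker nodes.

from dataclasses import dataclass, field


@dataclass
class Node:
    name: str
    role: str                      # "management" or "worker"
    services: list = field(default_factory=list)


def build_cluster(num_workers: int) -> list:
    """Model a basic cluster: one management/login node plus many workers.

    Larger clusters might spread cluster-wide services (monitoring,
    workflow, storage) across several management nodes; here, for
    simplicity, they all live on a single node.
    """
    mgmt = Node("mgmt01", "management",
                services=["monitoring", "workflow", "storage"])
    workers = [Node(f"node{i:03d}", "worker")
               for i in range(1, num_workers + 1)]
    return [mgmt] + workers


if __name__ == "__main__":
    cluster = build_cluster(num_workers=8)
    for node in cluster:
        print(f"{node.name:8s} {node.role:10s} "
              f"{', '.join(node.services) or '-'}")
```

Running the script prints the single management node with its services followed by the eight worker nodes, which mirrors the topology the article series starts from before layering on the management strategies.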
“We really need to re-look at what the requirements are that will lead us all the way up to being able to support Exascale deployments. One of these absolute requirements is CPU fabric integration, because the performance that’s needed, the density, the power, are all areas that have to be vastly improved to support deployments of exascale.”