Cisco's Unified Computing Solution: an HPC angle?

Michael Feldman and I co-authored a piece last week at HPCwire on Cisco’s new Unified Computing Solution. A lot has been written about UCS in the trade press, and so Michael and I touch only briefly on the stuff everyone else is covering. Instead, he and I focus on the potential HPC angle in this announcement — an angle that Cisco was quick to emphasize in our conversation with them.

The benefits to HPC center on workflow management (and the willingness of third parties to develop software that builds upon UCS’ interface) and big memory:

And here is where the company starts to talk about high performance computing. For example, for applications that want to live in really large compute grids — as in thousands of nodes — the XML API will provide the mechanism to manage these super-sized systems as a single entity. According to Schwartz, “literally anything you can do in our CLI and GUI, you can do in our XML API, and that’s very attractive to system management companies and people who might do things like job scheduling.” Third-party developers like Platform Computing, for instance, could come in and employ the XML API to build higher levels of abstraction around user workload management and application-tailored deployment.

…Returning to the server hardware, the one feature Cisco did reveal this week that pertains to HPC is the memory expansion technology. The feature will be cooked into the blade motherboards and will provide for significantly more memory capacity per server, making it ideal for virtualization and memory-bound applications. Although Schwartz couldn’t provide any details ahead of the Intel Nehalem EP launch, which is expected at the end of the month, he did say that the technology will be “ideal for large data-intensive workloads,” adding that Cisco has been talking with a number of people under NDA who are very interested in these large memory footprint systems.

I wasn’t strongly swayed by either of these points about UCS’ usefulness for HPC. I do think the XML API could benefit customers who were going to pick UCS anyway, but I don’t see it swinging a buy decision that was headed somewhere else. Of course, that could all change if some really enterprising company out there hits the ball out of the park with a new tool, but even if that happens it will take time, and real gear in the field, for people to develop on.
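
To make the tooling idea concrete, here is a rough sketch of how a third-party scheduler might drive an XML management interface to find idle blades before placing jobs. Cisco hadn’t published API details at the time, so the endpoint, element names, and attributes below are hypothetical stand-ins, not the actual UCS interface.

```python
# Illustrative sketch only: the endpoint, XML element names, and attributes
# below are hypothetical stand-ins, not Cisco's published interface.
import urllib.request
import xml.etree.ElementTree as ET

MANAGER_URL = "https://ucs-manager.example.com/xmlapi"  # hypothetical endpoint


def post_xml(body: str) -> ET.Element:
    """POST an XML request to the management endpoint and parse the reply."""
    req = urllib.request.Request(
        MANAGER_URL,
        data=body.encode("utf-8"),
        headers={"Content-Type": "text/xml"},
    )
    with urllib.request.urlopen(req) as resp:
        return ET.fromstring(resp.read())


def list_idle_blades(session_cookie: str) -> list[str]:
    """Return identifiers of blades not yet assigned to a service profile,
    the kind of inventory query a job scheduler would make before placing work."""
    query = f'<resolveClass cookie="{session_cookie}" class="computeBlade"/>'
    reply = post_xml(query)
    return [
        blade.get("dn")
        for blade in reply.iter("computeBlade")
        if blade.get("assigned") == "no"
    ]


if __name__ == "__main__":
    cookie = "example-session-cookie"  # would come from a separate login call
    for dn in list_idle_blades(cookie):
        print("idle blade:", dn)
```

The point is less the specific calls than the pattern: once everything the CLI and GUI can do is also exposed as XML, a workload manager can treat a whole UCS installation as one programmable resource pool.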

I think the expanded memory feature may have some short-term benefits, at least until every other vendor comes up with its own way to offer very large memory arrays on Nehalem.

In the medium term, though, I’m very interested in the potential of flash-based SSDs, not as a hard disk replacement, but as a new first-class member of the dataflow hierarchy. I believe there is potential for flash SSDs to bring balance back to HPC systems by reducing the need for architects to amass large quantities of DRAM for capacity and large numbers of spindles for bandwidth. If this pans out, then very large DRAM banks will become relevant to only a handful of applications, limiting the usefulness of this UCS feature in HPC.
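
As a rough illustration of that balance argument, here is a back-of-envelope sketch comparing how many disks versus flash SSDs it might take to satisfy a given capacity and bandwidth target. Every device figure is an illustrative assumption, not a measurement of any particular product.

```python
import math

# Back-of-envelope sketch of the balance argument. Every device figure here
# is an illustrative assumption (rough circa-2009 ballpark), not a benchmark.


def devices_needed(capacity_gb: float, bandwidth_mbs: float,
                   dev_capacity_gb: float, dev_bandwidth_mbs: float) -> int:
    """How many devices satisfy both a capacity and a bandwidth target
    (whichever constraint dominates)."""
    by_capacity = math.ceil(capacity_gb / dev_capacity_gb)
    by_bandwidth = math.ceil(bandwidth_mbs / dev_bandwidth_mbs)
    return max(by_capacity, by_bandwidth)


# Hypothetical working set: 2 TB of data served at 4 GB/s aggregate bandwidth.
CAPACITY_GB, BANDWIDTH_MBS = 2_000, 4_000

disks = devices_needed(CAPACITY_GB, BANDWIDTH_MBS,
                       dev_capacity_gb=500, dev_bandwidth_mbs=100)  # assumed disk
ssds = devices_needed(CAPACITY_GB, BANDWIDTH_MBS,
                      dev_capacity_gb=160, dev_bandwidth_mbs=250)   # assumed SSD

print(f"disks needed (bandwidth-bound): {disks}")  # 40 spindles, set by bandwidth
print(f"flash SSDs needed:              {ssds}")   # 16 devices
```

Under these assumed numbers the disk count is set entirely by bandwidth, not capacity, which is exactly the imbalance a flash tier could relieve.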

Trackbacks

  1. […] hasn’t announced what it intends to do with its new pet, coming as it does on the heels of Cisco’s Unified Computing Solution announcement, one has to speculate that it will bake these new services into the […]

  2. […] given their Unified Computing Platform (codenamed California, covered here), this move could make sense as a lever into HPC, but it’s not clear that Cisco wants to get into […]