At Sun’s Partner Summit on Tuesday the company announced a slew of new products for HPC. The head of Sun’s HPC sales organization, Marc Hamilton, hits some performance points in a light post that you can read here, and Sun’s own press release is here.
Sun’s scalable new HPC systems, including the next-generation Sun Constellation System, are expected to power some of the world’s leading HPC facilities, and can address a broad range of HPC applications requiring high performance, high throughput, large memory and fast I/O.
Sun announced new flash-ready Nehalem-based HPC blades, a set of rack servers that upgraded existing lines with Nehalem, a cooling door, new IB networking options, and the Sun Lustre Storage System — “a complete Lustre and Open Storage hardware solution.”
The new high-end blade and chassis make for quite a dense solution:
With the Sun Blade 6048 chassis, the Sun Blade X6275 server module also provides extreme density with 48 physical blades per rack — supporting 96 two-socket, quad-core nodes, for a total of 768 processor cores and nine teraFLOPS of peak performance in a single 42U rack. This represents up to 71 percent more cores per rack than IBM (as compared to the IBM BladeCenter H) and 50 percent more cores per rack than HP. With 2.25 teraFLOPS of peak compute capacity and Linpack efficiency of 89 percent for every shelf of 12 blades, customers can expect up to two teraFLOPS of actual computational power.
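Those numbers hang together; here's a quick back-of-the-envelope sketch in Python (the 2.93 GHz clock and 4 double-precision FLOPS per cycle per core are my assumptions, inferred from the quoted figures rather than stated by Sun):

```python
# Back-of-the-envelope check of the density and FLOPS claims for the
# Sun Blade X6275 in a 6048 chassis. Clock speed and FLOPS/cycle are
# assumptions inferred from the quoted totals, not Sun's published specs.
GHZ = 2.93                # assumed Nehalem-EP clock
FLOPS_PER_CYCLE = 4       # SSE double precision: 2 adds + 2 muls per cycle

blades_per_rack = 48
nodes_per_blade = 2       # the X6275 packs two nodes onto one blade
sockets_per_node = 2
cores_per_socket = 4

cores_per_rack = (blades_per_rack * nodes_per_blade *
                  sockets_per_node * cores_per_socket)
print(cores_per_rack)                       # 768 cores per 42U rack

gflops_per_core = GHZ * FLOPS_PER_CYCLE
rack_peak_tflops = cores_per_rack * gflops_per_core / 1000
print(round(rack_peak_tflops, 1))           # ~9.0 TFLOPS peak per rack

# One shelf = 12 blades = 24 nodes = 192 cores
shelf_peak_tflops = (12 * nodes_per_blade * sockets_per_node *
                     cores_per_socket * gflops_per_core / 1000)
print(round(shelf_peak_tflops, 2))          # 2.25 TFLOPS peak per shelf
print(round(shelf_peak_tflops * 0.89, 2))   # ~2.0 TFLOPS at 89% Linpack efficiency
```

The 768-core and nine-teraFLOPS figures only reconcile at roughly 11.7 GFLOPS per core, which is what points to a ~2.93 GHz part.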
As both HPCwire and The Register point out, the 6275 is pretty innovative, cramming two nodes onto a single blade. Good stuff, although that's basically the same total peak FLOPS you'd get out of Sun gear stuffed with Opterons, and with less memory per node. Each half of the 6275 supports only 12 memory slots, not the full 18 that the Nehalems are capable of supporting. (Video of the X6275 here courtesy of Sun's HPC Watercooler).
The 6275 also has a less dense little sister, the 6270, which supports half the sockets and lacks the onboard IB (offering dual GbE ports instead), but does support the full 18 DIMM slots. And while both blades support flash memory, the 6275 supports two 24 GB modules while the 6270 supports a single 16 GB module.
The 6048 chassis also supports the Sun Blade 6048 Quad Data Rate InfiniBand Switched Network Express Module:
Each node on a Sun Blade X6275 server module offers onboard QDR IB HCAs that interface directly with the integrated Sun Blade 6048 IB QDR Switched NEMs in the Sun Blade 6048 chassis. The Sun Blade 6048 IB QDR Switched NEMs connect directly to Sun Datacenter IB switches in high-bandwidth fat-tree topologies, or to other Sun Blade 6048 IB QDR Switched NEMs in low-cost 3D torus configurations.
With the QDR IB NEM you can connect all 96 nodes in a cabinet without external switch hardware, in case you need a really high-performance but compact system.
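On the torus option Sun mentions: in a 3D torus each node links only to its six nearest neighbors (with wraparound at the edges), so the link count grows linearly with node count instead of requiring big central switches — that's the "low-cost" part. A minimal sketch, assuming a hypothetical 4x4x6 factoring of the 96 nodes in one cabinet (Sun doesn't specify the actual dimensions):

```python
from itertools import product

# Hypothetical 4x4x6 arrangement of the 96 nodes in one 6048 cabinet;
# illustrative only -- Sun doesn't publish the torus dimensions.
DIMS = (4, 4, 6)

def torus_neighbors(node, dims=DIMS):
    """Return the six nearest neighbors of (x, y, z), with wraparound."""
    nbrs = []
    for axis, delta in product(range(3), (-1, 1)):
        coord = list(node)
        coord[axis] = (coord[axis] + delta) % dims[axis]
        nbrs.append(tuple(coord))
    return nbrs

nodes = list(product(*(range(d) for d in DIMS)))
print(len(nodes))                  # 96 nodes
print(torus_neighbors((0, 0, 0)))  # corner node wraps around to (3,0,0), (0,0,5), etc.
```

Every node needs exactly six links regardless of machine size, which is why a torus stays cheap as the system grows, at the cost of higher worst-case hop counts than a fat-tree.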
Sun also has a Dual Port 4x QDR PCIe ExpressModule Host Channel Adaptor for clusters with multiple communications fabrics, as well as the Sun Blade 6000 Virtualized Multi Fabric 10GbE NEM, which Sun claims will offer buyers a 20-to-1 cabling reduction. That's a lot of high-bandwidth networking options, which fits well with Sun's high-end target for this blade. The 6275 also supports flash (from HPCwire's feature):
A SATA interface is also available to connect to an optional Sun flash module, which offers 24 GB of high performance storage per node. It’s designed for users interested in saving state, having a scratch data area, or booting an OS. Since the flash module is hooked up to a SATA controller, to the apps it looks like a hard drive.
In addition, the Sun Lustre Storage System “will enable customers to scale online capacity from 48 terabytes to multiple petabytes, and scale I/O performance from 1 GB per second to more than 100 GB per second.”
Interesting stuff, with a lot of focus on the high end, but despite the DoD Mod Program’s example, I think I’d wait for the dust to settle before I put tens of millions of dollars into a super from a company that wasn’t sure it wanted to be in business.