
UCF Lands Its First Supercomputer

UPDATED: The University of Central Florida has landed its first supercomputer! Thanks to a $2.6 million grant from the Army, UCF has purchased a new super from IBM. The new cluster tips the scales at 224 processors, 512GB of memory, and 22.2TB of storage. An upgrade planned for this summer will triple the compute and memory capacity.

“This is a great opportunity for UCF and the simulation industry,” said Michael R. Macedonia, general manager of the Orlando unit of Forterra Systems Inc., a high-tech partner with the college. “People need to understand how important it is to have a supercomputer of this class in Central Florida. This will allow UCF to press the limits of science and attract new business to the region as well.”

The initial compute workloads will include defense simulation training [hence the Army dollars attached]. Future workloads will include medical simulation, civil engineering and nanoscience.

For more info on the new super, read the full article here.

Comments

  1. I’d love to know more about this system… 192 processors and 20TB of RAM. If you consider each ‘processor’ to be a core from an Intel Xeon operating at 2.667 GHz, that’s 2 TF (peak) right there. In fact, I don’t think it can be POWER unless they’re going with POWER5, but I can’t see IBM wanting to sell that when they’ve got POWER6 systems to market. POWER6, even at the ‘slow’ 4.2 GHz speed, gives over 16 GF/core, which would mean that the 2.0 TF is too low, providing they’re speaking about peak.

    So, assuming it IS an Intel Xeon (or conceivably an AMD system), that means they’re packing more than 100GB per core to get that 20TB total. Intel has some dual-core chips still, so let’s use those, and we’ve got 48 nodes with 4 cores per node and 416+ GB per node. Let’s say some (or all) of those nodes have 512GB, and that means that we’re talking 16 GB DIMMs x 32 slots per motherboard. But I don’t think 16 GB DIMMs exist yet, so what’s IBM doing?

    (Ok, another idea could be using 2.8 GHz dual-core Opterons, counting each processor AS a processor, and giving 2.15 TF of performance, meaning a more manageable 50+ GB/core, and putting only 128GB on each system. That sounds more sensible from an ‘OK, we can do it’ perspective, but I’m still surprised at the memory/compute ratio. Having seen advertisements for 10TF for

  2. [Whoops! I wrote so much, it chopped my comment... ]

    … (for) less than $1M lately, 2 TF for $2.6M raises eyebrows.)

    Those are my guesses.. sorry for babbling. I don’t suppose you’ll find out more from the UCF people and post an update? :-)

    Cheers,
    – Brian, who obviously has too much free time today
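
    [Editor's note: Brian's back-of-envelope peak numbers follow the standard units × clock × FLOPs-per-cycle formula. A minimal sketch of that arithmetic — the FLOPs-per-cycle figures are assumptions for those chip generations, not vendor specs:]

    ```python
    def peak_gflops(units, ghz, flops_per_cycle):
        """Theoretical peak = units x clock (GHz) x FLOPs issued per cycle."""
        return units * ghz * flops_per_cycle

    # 192 'processors' read as Xeon cores at 2.667 GHz,
    # assuming 4 double-precision FLOPs/cycle per core
    xeon = peak_gflops(192, 2.667, 4)       # ~2048 GF, i.e. the ~2 TF figure

    # POWER6 at 4.2 GHz, assuming 4 FLOPs/cycle -> per-core number
    power6 = peak_gflops(1, 4.2, 4)         # ~16.8 GF/core ("over 16 GF/core")

    # 192 dual-core 2.8 GHz Opterons, assuming 2 FLOPs/cycle per core
    # (2 cores x 2 FLOPs/cycle = 4 per socket per cycle)
    opteron = peak_gflops(192, 2.8, 2 * 2)  # ~2150 GF, i.e. ~2.15 TF
    ```

    [All three figures line up with the numbers quoted in the comment above.]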

  3. John Leidel says:

    I’ll see what I can dig up :-)

  4. John Leidel says:

    Brian, the quoted article was a bit incorrect regarding the system statistics. I went out and found their website and have subsequently corrected my post. Their website is also linked within the post if you’re interested in contacting the folks at UCF.
