Cowboy Supercomputer Powers Research at Oklahoma State

Dana Brunson, Director of HPC at Oklahoma State

In this video, Oklahoma State Director of HPC Dana Brunson describes how the Cowboy supercomputer powers research.

“High performance computing involves supercomputers for computational and data-intensive research. A supercomputer is actually hundreds to thousands of computers working together to solve problems bigger than any individual desktop could accomplish. This helps researchers save time, save money, and often their sanity. High performance computing is often used for simulations that may be too big, too small, too fast, too slow, too dangerous, or too costly. Another thing it’s used for involves data. You may remember the Human Genome Project: it took nearly a decade and cost a billion dollars. These sorts of things can now be done over a weekend for under a thousand dollars. Our current supercomputer is named Cowboy. It was funded by a 2011 National Science Foundation grant, and it has been serving us very well.”
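
The idea of “hundreds to thousands of computers working together” can be illustrated with a short message-passing sketch. This is not code from the Cowboy cluster itself, just a minimal example (assuming Python with the mpi4py library) in which each process handles its own slice of a computation and the partial results are combined:

```python
# Minimal illustration of distributed work: each MPI process (rank)
# sums its own slice of a large range, then the partial sums are
# combined on rank 0. Run with e.g. "mpirun -n 4 python sum_demo.py".
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's ID
size = comm.Get_size()   # total number of processes

N = 10_000_000
# Split the range 0..N-1 roughly evenly across all ranks.
chunk = N // size
start = rank * chunk
end = N if rank == size - 1 else start + chunk

local_sum = sum(range(start, end))

# Combine the partial sums on rank 0.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)
if rank == 0:
    print(f"Sum of 0..{N-1} = {total}")
```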

The Cowboy cluster, acquired from Advanced Clustering Technologies, consists of:

  • 252 standard compute nodes, each with dual Intel Xeon E5-2620 “Sandy Bridge” hex-core 2.0 GHz CPUs and 32 GB of 1333 MHz RAM;
  • Two “fat nodes,” each with 256 GB of RAM and an NVIDIA Tesla C2075 card;
  • An aggregate peak speed of 48.8 TFLOPS across 3,048 cores and 8,576 GB of RAM (a quick arithmetic check of these totals appears after this list); and
  • 92 TB of globally accessible high-performance disk provided by three shelves of Panasas ActivStor12. Each shelf includes 20x 2 TB drives and delivers peak speeds of 1,500 MB/s read and 1,600 MB/s write; the total solution provides an aggregate of 4.5 GB/s read and 4.8 GB/s write.
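
As a quick sanity check, the core and memory totals follow directly from the node counts above, and the quoted peak speed falls out if one assumes 8 double-precision FLOPs per core per cycle, the usual figure for Sandy Bridge cores with AVX (an assumption on our part, not stated in the announcement):

```python
# Back-of-the-envelope check of Cowboy's published totals.
standard_nodes = 252
fat_nodes = 2
cores_per_node = 2 * 6          # dual hex-core Xeon E5-2620
ghz = 2.0                       # clock speed in GHz

cores = (standard_nodes + fat_nodes) * cores_per_node
ram_gb = standard_nodes * 32 + fat_nodes * 256

# Sandy Bridge with AVX retires 8 double-precision FLOPs per core per cycle
# (assumed here; the announcement only quotes the final number).
flops_per_cycle = 8
peak_tflops = cores * ghz * flops_per_cycle / 1000

print(cores)        # 3048
print(ram_gb)       # 8576
print(peak_tflops)  # 48.768 -> rounds to the quoted 48.8 TFLOPS
```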

Cowboy’s interconnect networks are InfiniBand for message passing, Gigabit Ethernet for I/O, and an Ethernet management network. The InfiniBand message-passing fabric is Mellanox ConnectX-3 QDR in a 2:1 oversubscription. A total of 15 MIS5025Q switches provide both the leaf and spine components. Each leaf connects to 24 compute nodes and has 12x 40Gb QDR links to the spine. Point-to-point latency is approximately 1 microsecond. The Ethernet network includes 11 leaf gigabit switches, each connecting to 24 compute nodes. Each leaf is uplinked via 2x 10G network ports to the spine, a 64-port Mellanox MSX1016 10 Gigabit switch. This configuration provides a 1.2:1 oversubscription; the short calculation after this paragraph shows how both ratios follow from the link counts.
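
Oversubscription here is simply the ratio of downlink bandwidth into a leaf switch to the uplink bandwidth out of it. A small sketch of that arithmetic for both fabrics, using only the link counts and speeds quoted above:

```python
# Oversubscription ratio = total downlink bandwidth / total uplink bandwidth
def oversubscription(down_links, down_gbps, up_links, up_gbps):
    return (down_links * down_gbps) / (up_links * up_gbps)

# InfiniBand leaf: 24 nodes at 40 Gb/s QDR down, 12x 40 Gb/s QDR uplinks
ib = oversubscription(24, 40, 12, 40)

# Ethernet leaf: 24 nodes at 1 Gb/s down, 2x 10 Gb/s uplinks
eth = oversubscription(24, 1, 2, 10)

print(f"InfiniBand: {ib}:1")   # 2.0:1
print(f"Ethernet:   {eth}:1")  # 1.2:1
```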

Oklahoma State has since applied for and received a second National Science Foundation grant for a new supercomputer, to be named Pistol Pete, which the team is designing now and hopes to deploy later this year.
