Cowboy Supercomputer Powers Research at Oklahoma State


Dana Brunson is Director of the High Performance Computing Center

In this video, Dana Brunson from Oklahoma State describes the mission of the Oklahoma High Performance Computing Center. Formed in 2007, the High Performance Computing Center (HPCC) facilitates computational and data-intensive research across a wide variety of disciplines by providing students, faculty and staff with cyberinfrastructure resources, cloud services, education and training, bioinformatics assistance, proposal support and collaboration.

“By placing advanced technology in the hands of the academic population, research can be done more quickly, less expensively, and with greater certainty of success.”

Cowboy, the supercomputer cluster managed by the OSU HPCC, is the largest externally funded supercomputer in the state. It comprises more than 250 individual compute nodes working together in parallel to help researchers solve problems and run simulations that a typical desktop computer could not handle on its own. Researchers and scientists from a wide variety of areas, including bioinformatics, engineering, physics and geography, benefit from the much faster, more powerful computing resources of the OSU HPCC.
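To give a flavor of what "working in parallel" means on a cluster like Cowboy, here is a minimal sketch using MPI via the mpi4py package; the library choice and the toy workload (summing a range of integers) are illustrative assumptions and do not describe any specific Cowboy job.

```python
# toy_parallel.py -- minimal illustration of splitting work across MPI ranks.
# Assumes an MPI implementation and the mpi4py package are available on the
# cluster; the workload is purely illustrative.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's ID (0 .. size-1)
size = comm.Get_size()   # total number of processes across all nodes

N = 10_000_000
# Each rank sums its own strided slice of 0..N-1 in parallel.
local_sum = sum(range(rank, N, size))

# Combine the partial results on rank 0.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)
if rank == 0:
    print(f"sum over {size} ranks: {total}")
```

Launched with something like `mpirun -np 24 python toy_parallel.py` under the cluster's batch scheduler, the same script runs unchanged on a single node or across hundreds of them.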

The Cowboy cluster, acquired from Advanced Clustering Technologies, consists of:

  • 252 standard compute nodes, each with dual Intel Xeon E5-2620 “Sandy Bridge” hex-core 2.0 GHz CPUs and 32 GB of 1333 MHz RAM.
  • Two “fat nodes,” each with 256 GB of RAM and an NVIDIA Tesla C2075 GPU.
  • The aggregate peak speed is 48.8 TFLOPS, with 3048 cores and 8576 GB of RAM (see the arithmetic sketch after this list).
  • Cowboy also includes 92 TB of globally accessible high-performance disk provided by three shelves of Panasas ActivStor12; each shelf holds 20 × 2 TB drives and delivers a peak speed of 1500 MB/s read and 1600 MB/s write, for an aggregate of 4.5 GB/s read and 4.8 GB/s write across the full solution.
  • The interconnect networks are InfiniBand for message passing, Gigabit Ethernet for I/O, and an Ethernet management network. The message-passing fabric is Mellanox ConnectX-3 QDR InfiniBand in a 2:1 oversubscription, built from a total of 15 MIS5025Q switches providing both the leaf and spine layers; each leaf connects to 24 compute nodes and has 12 × 40 Gb/s QDR uplinks to the spine, with point-to-point latency of approximately 1 microsecond. The Ethernet network includes 11 leaf gigabit switches, each connecting to 24 compute nodes and uplinked via 2 × 10 GbE ports to a 64-port Mellanox MSX1016 10 Gigabit spine switch, for a 1.2:1 oversubscription.
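The aggregate figures above follow from straightforward arithmetic, reproduced in the sketch below. Two inputs are inferences rather than stated facts: that the two fat nodes carry the same dual hex-core 2.0 GHz CPUs as the standard nodes, and that each Sandy Bridge core retires 8 double-precision FLOPs per cycle with AVX.

```python
# Back-of-the-envelope check of Cowboy's aggregate figures.
# Assumptions (not stated explicitly above): fat nodes use the same dual
# hex-core 2.0 GHz CPUs, and Sandy Bridge cores do 8 DP FLOPs/cycle with AVX.

std_nodes, fat_nodes = 252, 2
cores_per_node = 2 * 6              # dual hex-core CPUs
ghz = 2.0
flops_per_cycle = 8                 # AVX: 4-wide add + 4-wide multiply

cores = (std_nodes + fat_nodes) * cores_per_node
peak_tflops = cores * ghz * flops_per_cycle / 1000

ram_gb = std_nodes * 32 + fat_nodes * 256

shelves = 3
read_gbs = shelves * 1.5            # 1500 MB/s read per Panasas shelf
write_gbs = shelves * 1.6           # 1600 MB/s write per shelf

ib_oversub = 24 / 12                # 24 node links per leaf vs. 12 QDR uplinks
eth_oversub = (24 * 1) / (2 * 10)   # 24 x 1 GbE down vs. 2 x 10 GbE up

print(f"{cores} cores, {peak_tflops:.1f} TFLOPS, {ram_gb} GB RAM")
print(f"{read_gbs:.1f} GB/s read, {write_gbs:.1f} GB/s write")
print(f"IB oversubscription {ib_oversub:.1f}:1, Ethernet {eth_oversub:.1f}:1")
# -> 3048 cores, 48.8 TFLOPS, 8576 GB RAM; 4.5 GB/s read, 4.8 GB/s write;
#    2.0:1 InfiniBand and 1.2:1 Ethernet oversubscription
```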

Dana Brunson is assistant vice president for research cyberinfrastructure, director of the High Performance Computing Center, and an adjunct associate professor in the Computer Science and Mathematics Departments at Oklahoma State University (OSU). She earned her Ph.D. in Mathematics at the University of Texas at Austin in 2005 and her M.S. and B.S. in Mathematics from OSU. Dana is co-lead of the OneOklahoma Cyberinfrastructure Initiative (OneOCII), which provides CI resources to academic institutions statewide, and also co-leads XSEDE’s Campus Engagement program.

Sign up for our insideHPC Newsletter