Video: Building the Owens Cluster at OSC


In this time-lapse video, engineers build the Owens cluster at the Ohio Supercomputer Center.

Named after Olympic track star Jesse Owens, the new Owens Cluster is powered by Dell PowerEdge servers featuring the new Intel Xeon E5-2600 v4 processor family and includes storage components manufactured by DDN and an EDR interconnect provided by Mellanox. The center had earlier acquired NetApp software and hardware for home directory storage.

“Our newest supercomputer system is the most powerful that the Center has ever run,” ODHE Chancellor John Carey said in a recent letter to Owens’ daughters. “As such, I thought it fitting to name it for your father, who symbolizes speed, integrity and, most significantly for me, compassion as embodied by his tireless work to help youths overcome obstacles to their future success. As a first-generation college graduate, I can relate personally to the value of mentors in the lives of those students.”

Carey announced in February that the new system will increase the center’s total computing capacity by a factor of four and its storage capacity by a factor of three. The Owens name was chosen from a list of esteemed finalists that included Nobel Prize winners, famous inventors, talented musicians, well-known industrialists and a former president.

“We are touched and honored to have this supercomputer named for our father,” said Marlene Owens Rankin, the youngest daughter of Owens and his wife, Minnie Ruth Solomon. Rankin and her sisters Gloria Owens Hemphill and Beverly Owens Prather founded The Jesse Owens Foundation to perpetuate the ideals and life’s work of their father. “The learning opportunity provided by this expanded capacity will be invaluable to Ohio students.”

System specifications:

  • 824 Dell nodes
  • Dense Compute
    • 648 compute nodes (Dell PowerEdge C6320 two-socket servers with Intel Xeon E5-2680 v4 (Broadwell, 14 cores, 2.40GHz) processors, 128GB memory)
  • GPU Compute (not yet available)
    • 160 ‘GPU-ready’ compute nodes (Dell PowerEdge R730 two-socket servers with Intel Xeon E5-2680 v4 (Broadwell, 14 cores, 2.40GHz) processors, 128GB memory); NVIDIA’s next-generation ‘Pascal’ GPUs will be added when they ship later this year
  • Analytics
    • 16 big-memory nodes (Dell PowerEdge R930 four-socket servers with Intel Xeon E5-4830 v3 (Haswell, 12 cores, 2.10GHz) processors, 1,536GB memory, 12 x 2TB drives)
  • 23,392 total cores
    • 28 cores/node & 128 gigabytes of memory/node
  • Mellanox EDR (100Gbps) InfiniBand networking
  • Theoretical system peak performance
    • ~750 teraflops (CPU only; see the arithmetic sketch below)
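
The headline figures are easy to sanity-check with the standard peak formula, cores × clock speed × FLOPs per cycle. The short Python sketch below reproduces the 23,392-core total from the node counts and shows one set of assumptions under which the CPU peak lands on ~750 teraflops; the derated ~2.0GHz AVX clock used there is our assumption, not a published figure (at the 2.40GHz nominal clock, the same formula gives roughly 895 teraflops).

    # Back-of-the-envelope check of the Owens spec sheet (illustrative, not official).

    # Node and core counts taken from the spec list above.
    dense_nodes, gpu_nodes = 648, 160       # two-socket, 14-core Broadwell E5-2680 v4
    analytics_nodes = 16                    # four-socket, 12-core Haswell E5-4830 v3
    broadwell_cores_per_node = 2 * 14       # 28 cores/node
    haswell_cores_per_node = 4 * 12         # 48 cores/node

    broadwell_cores = (dense_nodes + gpu_nodes) * broadwell_cores_per_node
    haswell_cores = analytics_nodes * haswell_cores_per_node
    print(broadwell_cores + haswell_cores)  # 23392, matching the spec list

    # Peak FLOPS = cores * clock * FLOPs/cycle. Haswell and Broadwell cores both
    # retire 16 double-precision FLOPs/cycle (two AVX2 FMA units, each four
    # doubles wide, two FLOPs per fused multiply-add).
    FLOPS_PER_CYCLE = 16

    # ASSUMPTION: a derated ~2.0GHz AVX clock on the Broadwell nodes (nominal is
    # 2.40GHz), with the Haswell nodes kept at their 2.10GHz nominal clock.
    peak_tf = (broadwell_cores * 2.0e9 + haswell_cores * 2.1e9) * FLOPS_PER_CYCLE / 1e12
    print(f"{peak_tf:.0f} TF")              # ~750 TF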
