The University of Houston (UH) is adding a new, state-of-the-art supercomputer to its arsenal of research tools. With 1,860 compute cores, the new Opuntia cluster will be used primarily for scientific and engineering work.
“The acquisition of this new system marks the start of a new era of supercomputing not only for the University of Houston, but also for the surrounding community,” said Rathindra Bose, vice president for research and technology transfer at UH. “With this new system, we are on our way to becoming Houston’s primary source for supercomputing resources and expertise. The new system will allow us to conduct research in a variety of fields.”
The Center for Advanced Computing and Data Systems (CACDS) provides high-performance computing resources and related services to enhance research and education at the University of Houston. As the central source for high-performance computing expertise and facilities at UH, CACDS offers resources and services to researchers whose work requires significant amounts of computing and large-scale data analysis and visualization. CACDS also provides training in different aspects of high-performance computing for UH faculty, staff, and students to enable them to use high-performance computing effectively in their ongoing research.
Opuntia is a new shared campus resource provided by CACDS, capable of delivering more than 15 million SUs per year and targeting large-scale parallel jobs from the UH research community. The cluster contains 1,860 cores across 80 HP ProLiant SL230 compute blades (nodes), two HP ProLiant SL250 Xeon Phi blades, two HP ProLiant SL250 NVIDIA K40 GPGPU blades, and one HP ProLiant DL380 login node. The system is also equipped with three large-memory nodes: one HP ProLiant DL580 with 1 TB of main memory and two HP DL560s, each with 512 GB. Each compute node has 64 GB of memory, as does the login/development node. System storage includes a 384 TB shared file system and 85 TB of local compute-node disk space (~1 TB per node). The large-memory and GPU nodes give users access to high-throughput computing and remote visualization capabilities, respectively. A 56 Gb/s Mellanox switch fabric interconnects the I/O and compute nodes. The cluster currently runs Rocks 6.1 and Red Hat Enterprise Linux 6.6.
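The quoted capacity of more than 15 million SUs per year is consistent with the core count, assuming the common convention that one SU (service unit) equals one core-hour; the article itself does not define the unit, so this is a back-of-the-envelope check rather than an official figure:

```python
# Rough capacity check for the Opuntia cluster,
# assuming 1 SU = 1 core-hour (an assumption, not stated in the article).
CORES = 1860                 # total compute cores
HOURS_PER_YEAR = 24 * 365    # 8,760 hours

su_per_year = CORES * HOURS_PER_YEAR
print(f"Theoretical maximum: {su_per_year:,} core-hours/year")
# Theoretical maximum: 16,293,600 core-hours/year
```

This theoretical peak of about 16.3 million core-hours slightly exceeds the advertised 15 million SUs, which is expected since no cluster runs at 100% utilization year-round.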