“Cavium ThunderX has significant differentiation in the 64-bit ARM market as Cavium is the first ARMv8 vendor to deliver dual-socket support with a full ARMv8.1 implementation and a significant advantage in core count with 48 cores per socket. In addition, ThunderX supports large memory capacity (512GB per socket, 1TB in a 2S system) with excellent memory bandwidth and low memory latency. ThunderX also includes multiple 10GbE / 40GbE network interfaces delivering excellent IO throughput. These features enable ThunderX to deliver the core performance and scale-out capability that the HPC market requires.”
Today Penguin Computing announced Open Compute Project (OCP)-based systems that reinforce its continued collaboration with NVIDIA and add new options to Penguin Computing’s Magna family of OpenPOWER-based servers. “Customers benefit when we partner with exceptional organizations like NVIDIA, the OpenPOWER Foundation and Open Compute Foundation in developing our systems,” said Jussi Kukkonen, Director Product Management, Penguin Computing. “An essential part of our mission is to provide customers with form factor flexibility, choice of architecture and peak performance, which are all hallmarks of Penguin Computing.”
NNSA’s next-generation Penguin Computing clusters based on Intel SSF are bolstering “capacity” computing capability at the Tri Labs. “With CTS-1 installed in April, the NNSA scientists can continue their stewardship research and management on some of the most advanced commodity clusters the Tri Labs have acquired, ensuring the safety, security, and reliability of the nation’s nuclear stockpile.”
Phil Pokorny from Penguin Computing presented this talk at the Open Compute Project Summit. “Tundra ES delivers the advantages of Open Computing in a single, cost-optimized, high-performance architecture. Organizations can integrate a wide variety of compute, accelerator, storage, network, software and cooling architectures in a vanity-free rack and sled solution. This allows them to build optimized Intel CPU, Phi, ARM or NVIDIA systems with the latest Penguin, Intel or Mellanox high-speed network technology for maximum performance.”
Dr. Lewey Anton reports on who’s moving on up in High Performance Computing. Familiar names in this edition include: Sharan Kalwani, John Lee, Jay Muelhoefer, Brian Sparks, and Ed Turkel. And be sure to let us know of HPC folks in new positions!
Penguin Computing has renewed as a Platinum Member of Open Compute Project (OCP). Leading with the OCP-based Tundra Extreme Scale (ES) Series, Penguin was recently awarded the CTS-1 contract with the NNSA to bolster computing for national security at Los Alamos, Sandia and Lawrence Livermore national laboratories.
Penguin Computing in Portland is seeking a Python Software Engineer in our Job of the Week.
“CTS-1 shows how the Open Compute and Open Rack design elements can be applied to high-performance computing and deliver benefits similar to those of their original development for Internet companies,” said Philip Pokorny, Chief Technology Officer, Penguin Computing. “We continue to improve Tundra for both the public and private sectors with exciting new compute and storage models coming in the near future.”
Asetek showcased its full range of RackCDU hot water liquid cooling systems for HPC data centers at SC15 in Austin. On display were early-adopting OEMs such as CIARA, Cray, Fujitsu, Format and Penguin. HPC installations from around the world incorporating Asetek RackCDU D2C (Direct-to-Chip) technology were also featured. In addition, liquid cooling solutions for both current and future high-wattage CPUs and GPUs from Intel, NVIDIA and OpenPOWER were on display.
Today Penguin Computing announced that Emerson Network Power is supplying the uniquely engineered DC power system for Penguin Computing’s Tundra Extreme Scale (ES) series. Emerson Network Power is the world’s leading provider of critical infrastructure for information and communications technology systems. The Tundra ES series delivers the advantages of Open Computing in a single, cost-optimized, high-performance architecture. Organizations can integrate a wide variety of compute, accelerator, storage, network, software and cooling architectures in a vanity-free rack and sled solution.