Today Penguin Computing announced that it is delivering an energy-efficient HPC cluster to the University of Alaska Fairbanks. The new cluster, based on Penguin Computing’s Relion server family, was first delivered in April 2016 and has been expanding incrementally throughout the year. The cluster was named Chinook in honor of the late UAF employee Kevin Engle, who was known for his passion for salmon and Alaska. Engle was a research programmer and ground station manager at UAF’s Geographic Information Network of Alaska.
Penguin Computing, a provider of high-performance, enterprise data center and cloud solutions, announced that systems based on the Intel Xeon Phi processor, which entered pre-production deployments last year, have moved to full production in Penguin’s Tundra product family. “We received early access to the Intel Xeon Phi processor through Penguin Computing and its OCP-based Tundra Extreme Scale (ES) Series,” said James Laros, Principal Member of Technical Staff, Sandia National Laboratories. “We are seeing very promising results to date.”
FrostByte is a complete solution that integrates Penguin Computing’s new Scyld FrostByte software with an optimized high-performance storage platform. FrostByte will support multiple open-source storage technologies, including Lustre, Ceph, GlusterFS and Swift, and will first be available with Intel Enterprise Edition for Lustre. The entry-level FrostByte is a single rack with 500TB of highly available storage that can deliver up to 18GB/s and 500K metadata ops/s over Intel Omni-Path, Mellanox EDR InfiniBand or Penguin Arctica 100GbE network solutions. A single FrostByte “Scalable Unit” can deliver up to 15PB and greater than 500GB/s in five racks. Multiple Scalable Units can be combined to scale up to hundreds of petabytes and tens of terabytes per second of aggregate storage bandwidth.
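As a rough illustration of how the per-unit figures above compose, here is a minimal sketch that aggregates capacity and bandwidth across Scalable Units. It assumes simple linear scaling, which the announcement implies but does not guarantee; the function name and structure are this article's own, not part of any FrostByte API.

```python
# Per-Scalable-Unit figures quoted in the announcement.
PB_PER_UNIT = 15        # usable capacity, petabytes
GBS_PER_UNIT = 500      # aggregate bandwidth, GB/s
RACKS_PER_UNIT = 5      # floor space, racks

def frostbyte_aggregate(units: int) -> dict:
    """Estimate totals for N Scalable Units, assuming linear scaling."""
    return {
        "capacity_pb": units * PB_PER_UNIT,
        "bandwidth_gbs": units * GBS_PER_UNIT,
        "racks": units * RACKS_PER_UNIT,
    }

# Twenty units would reach the "hundreds of petabytes, tens of TB/s" scale:
print(frostbyte_aggregate(20))  # 300 PB, 10000 GB/s (10 TB/s), 100 racks
```

Under this back-of-the-envelope estimate, the quoted top end of the product line corresponds to roughly twenty Scalable Units, or about a hundred racks.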
“Cavium ThunderX has significant differentiation in the 64-bit ARM market, as Cavium is the first ARMv8 vendor to deliver dual-socket support with a full ARMv8.1 implementation and a significant advantage in CPU core count, with 48 cores per socket. ThunderX also supports large memory capacity (512GB per socket, 1TB in a two-socket system) with excellent memory bandwidth and low memory latency. In addition, ThunderX includes multiple 10GbE/40GbE network interfaces delivering excellent I/O throughput. These features enable ThunderX to deliver the core performance and scale-out capability that the HPC market requires.”
Today Penguin Computing announced Open Compute Project (OCP)-based systems that reinforce both its continued collaboration with NVIDIA and new options in Penguin Computing’s Magna family of OpenPOWER-based servers. “Customers benefit when we partner with exceptional organizations like NVIDIA, the OpenPOWER Foundation and Open Compute Foundation in developing our systems,” said Jussi Kukkonen, Director Product Management, Penguin Computing. “An essential part of our mission is to provide customers with form factor flexibility, choice of architecture and peak performance, which are all hallmarks of Penguin Computing.”
NNSA’s next-generation Penguin Computing clusters based on Intel SSF are bolstering “capacity” computing capability at the Tri Labs. “With CTS-1 installed in April, NNSA scientists can continue their stewardship research and management on some of the most advanced commodity clusters the Tri Labs have acquired, ensuring the safety, security, and reliability of the nation’s nuclear stockpile.”
Phil Pokorny from Penguin Computing presented this talk at the Open Compute Project Summit. “Tundra ES delivers the advantages of Open Computing in a single, cost-optimized, high-performance architecture. Organizations can integrate a wide variety of compute, accelerator, storage, network, software and cooling architectures in a vanity-free rack and sled solution. This allows them to build optimized Intel CPU, Phi, ARM or NVIDIA systems with the latest Penguin, Intel or Mellanox high-speed network technology for maximum performance.”
Dr. Lewey Anton reports on who’s moving on up in High Performance Computing. Familiar names in this edition include: Sharan Kalwani, John Lee, Jay Muelhoefer, Brian Sparks, and Ed Turkel. And be sure to let us know of HPC folks in new positions!
Penguin Computing has renewed as a Platinum Member of Open Compute Project (OCP). Leading with the OCP-based Tundra Extreme Scale (ES) Series, Penguin was recently awarded the CTS-1 contract with the NNSA to bolster computing for national security at Los Alamos, Sandia and Lawrence Livermore national laboratories.
Penguin Computing in Portland is seeking a Python Software Engineer in our Job of the Week.