Today Penguin Computing announced Scyld Cloud Workstation 3.0, a 3D-accelerated remote desktop solution that provides true multi-user collaboration for cloud-based Linux and Windows desktops. “Unlike other remote desktop solutions, collaboration via Scyld Cloud Workstation is more like sitting in-person with other engineers because a user can hand off control of their desktop to simplify collaboration on a project,” said Victor Gregorio, Vice President and General Manager, Cloud Services, Penguin Computing. “Scyld Cloud Workstation brings collaboration to life, providing a much more thorough and proficient interaction among researchers and engineers working together on a remote desktop. Ultimately, this allows customers a more efficient means to leverage cloud-based desktop solutions.”
Based on the “Barreleye” platform design pioneered by Rackspace and promoted by the OpenPOWER Foundation and the Open Compute Project (OCP) Foundation, Penguin Magna 1015 targets memory and I/O intensive workloads, including high density virtualization and data analytics. The Magna 1015 system uses the Open Rack physical infrastructure defined by the OCP Foundation and adopted by the largest hyperscale data centers, providing operational cost savings from the shared power infrastructure and improved serviceability.
IT organizations are facing increasing pressure to deliver critical services to their users while their budgets are reduced or held flat. New technologies have the potential to deliver industry-changing information to users who need data in real time, but only if the IT infrastructure is designed and implemented to do so. While the cost of computing power continues to decline, the cost of managing large data centers continues to rise; server administration over the life of a computing asset can consume about 75 percent of its total cost of ownership.
The Open Compute Project, initiated by Facebook as a way to increase computing power while lowering the costs associated with hyper-scale computing, has gained a significant industry following. While the initial specifications were created for a Web 2.0 environment, Penguin Computing has adapted these concepts to create a complete hardware ecosystem solution that addresses these data center needs and more. The Tundra OpenHPC system is applicable to a wide range of HPC challenges and delivers the features most requested by data center architects.
Today Penguin Computing announced that it is delivering an energy-efficient HPC cluster to the University of Alaska Fairbanks. The new cluster, based on Penguin Computing’s Relion server family, was first delivered in April 2016 and has been expanded incrementally throughout the year. The cluster was named Chinook in honor of the late UAF employee Kevin Engle, who was known for his passion for salmon and Alaska. Engle was a research programmer and ground station manager at UAF’s Geographic Information Network of Alaska.
Penguin Computing, a provider of high performance, enterprise data center and cloud solutions, announced its transition from pre-production deployments last year of systems based on the Intel Xeon Phi processor to full production for Penguin’s Tundra product family. “We received early access to the Intel Xeon Phi processor through Penguin Computing and its OCP-based Tundra Extreme Scale (ES) Series,” said James Laros, Principal Member of Technical Staff, Sandia National Laboratories. “We are seeing very promising results to date.”
FrostByte is a complete solution that integrates Penguin Computing’s new Scyld FrostByte software with an optimized high-performance storage platform. FrostByte will support multiple open-source storage technologies including Lustre, Ceph, GlusterFS and Swift, and will first be available with Intel Enterprise Edition for Lustre. The entry-level FrostByte is a single rack with 500TB of highly available storage that can deliver up to 18GB/s and 500K metadata ops/s over Intel Omni-Path, Mellanox EDR InfiniBand or Penguin Arctica 100GbE network solutions. A single FrostByte “Scalable Unit” can deliver up to 15PB and greater than 500GB/s in five racks. Multiple Scalable Units can be combined to scale to hundreds of petabytes and tens of terabytes per second of aggregate storage bandwidth.
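The Scalable Unit figures above imply simple linear scaling. As a back-of-the-envelope illustration (using only the per-unit numbers quoted in the announcement; the function name is ours, not Penguin Computing’s), the aggregate capacity, bandwidth, and rack footprint for a given number of units can be estimated like so:

```python
# Back-of-the-envelope scaling for FrostByte "Scalable Units",
# based on the per-unit figures quoted above: 15 PB of capacity,
# 500 GB/s of bandwidth, and 5 racks per Scalable Unit.
PB_PER_UNIT = 15
GBPS_PER_UNIT = 500
RACKS_PER_UNIT = 5

def frostbyte_aggregate(units: int) -> dict:
    """Estimate totals for N Scalable Units, assuming linear scaling."""
    return {
        "capacity_pb": units * PB_PER_UNIT,
        "bandwidth_gbps": units * GBPS_PER_UNIT,
        "racks": units * RACKS_PER_UNIT,
    }

# Seven units already reach the "hundreds of petabytes" range:
print(frostbyte_aggregate(7))
# {'capacity_pb': 105, 'bandwidth_gbps': 3500, 'racks': 35}
```

Note that this sketch assumes ideal linear scaling; real aggregate bandwidth depends on the network fabric and metadata configuration.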
“Cavium ThunderX has significant differentiation in the 64-bit ARM market as Cavium is the first ARMv8 vendor to deliver dual socket support with a full ARMv8.1 implementation and a significant advantage in CPU cores, with 48 cores per socket. In addition, ThunderX supports large memory capacity (512GB per socket, 1TB in a 2S system) with excellent memory bandwidth and low memory latency. ThunderX also includes multiple 10GbE/40GbE network interfaces delivering excellent I/O throughput. These features enable ThunderX to deliver the core performance and scale-out capability that the HPC market requires.”
Today Penguin Computing announced Open Compute Project (OCP)-based systems that reinforce both its continued collaboration with NVIDIA and new options in Penguin Computing’s Magna family of OpenPOWER-based servers. “Customers benefit when we partner with exceptional organizations like NVIDIA, the OpenPOWER Foundation and Open Compute Foundation in developing our systems,” said Jussi Kukkonen, Director Product Management, Penguin Computing. “An essential part of our mission is to provide customers with form factor flexibility, choice of architecture and peak performance, which are all hallmarks of Penguin Computing.”
NNSA’s next-generation Penguin Computing clusters based on Intel SSF are bolstering “capacity” computing capability at the Tri Labs. “With CTS1 installed in April, the NNSA scientists can continue their stewardship research and management on some of the most advanced commodity clusters the Tri Labs have acquired, ensuring the safety, security, and reliability of the nation’s nuclear stockpile.”