Penguin Computing Expands Altus Product Family with AMD EPYC 7002

Penguin Computing just announced the availability of AMD EPYC 7002 Series Processors for its Altus server platform. AMD EPYC 7002 Series Processors are expected to deliver up to 2X the performance per socket and up to 4X the peak FLOPS per socket over AMD EPYC 7001 Series Processors. These advantages enable customers to transform their infrastructure with the right resources to drive performance and reduce bottlenecks. “We’ve been waiting for this processor, which enables us to deliver breakthrough performance in solutions designed for AI and HPC workloads. In particular, we expect the EPYC 7002 to utilize PCIe Gen 4 to bolster workloads that had been bottlenecked by the bandwidth of PCIe Gen 3.”
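For context on that PCIe claim, here is a back-of-the-envelope comparison of theoretical x16 link bandwidth for Gen 3 versus Gen 4; this is a rough upper-bound calculation, not a benchmark of the Altus platform:

```python
# Rough theoretical bandwidth of a PCIe x16 link, Gen 3 vs Gen 4.
# Both generations use 128b/130b line coding; real-world throughput is
# lower due to protocol overhead, so treat these as upper bounds.

def pcie_x16_gbytes_per_s(gt_per_s: float, lanes: int = 16) -> float:
    encoding_efficiency = 128 / 130           # 128b/130b line coding
    bits_per_s = gt_per_s * 1e9 * lanes * encoding_efficiency
    return bits_per_s / 8 / 1e9               # convert to GB/s

gen3 = pcie_x16_gbytes_per_s(8.0)     # PCIe Gen 3: 8 GT/s per lane
gen4 = pcie_x16_gbytes_per_s(16.0)    # PCIe Gen 4: 16 GT/s per lane

print(f"PCIe Gen 3 x16: ~{gen3:.1f} GB/s")    # ~15.8 GB/s
print(f"PCIe Gen 4 x16: ~{gen4:.1f} GB/s")    # ~31.5 GB/s
```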

Submer SmartPodX Platform launches for OCP Immersion Cooling

Today, European startup Submer announced the SmartPodX, the latest version of its SmartPod Immersion Cooling System, which conforms both to standard server formats and to Open Compute Project (OCP) specifications for high-performance, supercomputing, and hyperscale infrastructures. “The Open Compute Project exists to develop open standards that bring greater efficiency, scalability, openness, and positive impact to datacenters and hardware – making all of our lives better,” said Submer CEO Daniel Pope. “This makes the OCP Global Summit the perfect opportunity to launch our hyper-efficient SmartPodX that will power the next generation of high-performance servers and supercomputers that usher in the next wave of research and technical innovation.”

NVMe over Fabrics: High-Performance SSDs Networked for Composable Infrastructure

Rob Davis from Mellanox gave this talk at the 2018 OCP Summit. “There is a new, very high performance, open source SSD interface called NVMe over Fabrics now available to expand the capabilities of networked storage solutions. It is an extension of the local NVMe SSD interface developed a few years ago, driven by the need for a faster interface for SSDs. Similar to the way the native disk drive SCSI protocol was networked with Fibre Channel 20 years ago, this technology enables NVMe SSDs to be networked and shared with their native protocol. By utilizing ultra-low-latency RDMA technology to achieve data sharing across a network without sacrificing the local performance characteristics of NVMe SSDs, true composable infrastructure is now possible.”
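To make the networking step concrete, here is a minimal sketch of attaching a remote NVMe-oF namespace over an RDMA transport by wrapping the standard Linux nvme-cli tool; the target address, port, and NQN below are placeholder assumptions, not details from the talk:

```python
# Minimal sketch: attach a remote NVMe-over-Fabrics namespace over RDMA
# using the standard Linux nvme-cli tool (requires the nvme_rdma kernel
# module and root privileges). Address, port, and NQN are placeholders.
import subprocess

TARGET_ADDR = "192.168.1.100"                  # RDMA-capable NIC on the target
TARGET_PORT = "4420"                           # default NVMe-oF service port
TARGET_NQN  = "nqn.2018-01.org.example:ssd1"   # hypothetical subsystem NQN

# Discover subsystems exported by the target over the RDMA transport.
subprocess.run(
    ["nvme", "discover", "-t", "rdma", "-a", TARGET_ADDR, "-s", TARGET_PORT],
    check=True,
)

# Connect; the remote SSD then appears as a local /dev/nvmeXnY block device
# and is accessed with the native NVMe protocol end to end.
subprocess.run(
    ["nvme", "connect", "-t", "rdma", "-n", TARGET_NQN,
     "-a", TARGET_ADDR, "-s", TARGET_PORT],
    check=True,
)
```

Once connected, the remote namespace shows up as an ordinary local NVMe block device, which is what allows the drive to be shared without giving up its local performance characteristics.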

Cavium’s ThunderX2 Processors coming to Penguin Computing Tundra OCP Platform

Today Penguin Computing, a provider of high-performance computing, enterprise datacenter and cloud solutions, announced availability of its Tundra Extreme Scale (ES) server platforms based on Cavium’s second-generation ARMv8-based ThunderX2 processors. Tundra ES Valkre servers powered by ThunderX2 processors are now available for public order, and standard 19-inch rack-mount models will ship in the third calendar quarter of 2017. “Penguin Computing is the leading developer of open, Linux-based HPC, cloud, and enterprise data center solutions,” said Jussi Kukkonen, Vice President, Advanced Solutions, Penguin Computing. “By extending our product roadmap to Cavium’s second-generation 64-bit ARMv8 CPUs in our Tundra family of Open Compute servers, we again step up our leadership position. Our customers get outstanding value from the efficiency and flexibility enabled by OCP infrastructure, combined with best-in-class compute performance from Cavium’s ThunderX2 offering.”

Cavium ThunderX2 Processors Power new Baymax HyperScale Server Platforms

Today Inventec in Taiwan announced Baymax, a new server platform optimized for cloud compute, high-performance cloud storage, and Big Data applications, based on Cavium’s second-generation 64-bit ARMv8 ThunderX2 processors. “Inventec’s success as the world’s largest server ODM has been based on our compelling designs, our manufacturing expertise, and our ability to deliver leading-edge, cost-effective server platforms to the world’s largest mega-scale datacenters,” said Evan Chien, Senior Director of Inventec Server Business Unit 6. “Earlier this year Inventec’s customers requested platforms based on Cavium’s ThunderX2 ARMv8 processors, and the new Baymax platform is the first being delivered.”

Overview of the HGX-1 AI Accelerator Chassis

“The Project Olympus hyperscale GPU accelerator chassis for AI, also referred to as HGX-1, is designed to support eight of the latest “Pascal” generation NVIDIA GPUs and NVIDIA’s NVLink high-speed multi-GPU interconnect technology, and provides high-bandwidth interconnectivity for up to 32 GPUs by connecting four HGX-1 chassis together. The HGX-1 AI accelerator provides extreme performance scalability to meet the demanding requirements of fast-growing machine learning workloads, and its unique design allows it to be easily adopted into existing datacenters around the world.”
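As a rough illustration of how software sees that kind of multi-GPU connectivity, the sketch below enumerates the GPUs in a node and checks which pairs can access each other’s memory directly. It uses PyTorch purely as a convenient CUDA wrapper, which is an assumption on our part; the HGX-1 itself is framework-agnostic:

```python
# Minimal sketch: enumerate GPUs in a node and check pairwise peer access
# (direct GPU-to-GPU memory access over NVLink or PCIe). PyTorch is used
# only as a convenient CUDA wrapper; any CUDA runtime exposes this query.
import torch

n = torch.cuda.device_count()
print(f"Visible GPUs: {n}")   # 8 per HGX-1 chassis in the design described above

for src in range(n):
    peers = [dst for dst in range(n)
             if dst != src and torch.cuda.can_device_access_peer(src, dst)]
    print(f"GPU {src} ({torch.cuda.get_device_name(src)}) "
          f"can directly access GPUs: {peers}")
```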

Video: SSD – The Transition from 2D to 3D NAND

“In 2013, Western Digital acquired flash storage hardware and software supplier Virident for $685 million in cash. It followed that up in May 2016 with the acquisition of SanDisk Corporation. The addition of SanDisk makes Western Digital Corporation a comprehensive storage solutions provider with global reach, and an extensive product and technology platform that includes deep expertise in both rotating magnetic storage and non-volatile memory (NVM).”

Nvidia Brings AI to the Cloud with the HGX-1 Hyperscale GPU Accelerator

Today, Microsoft, NVIDIA, and Ingrasys announced a new industry-standard design to accelerate artificial intelligence in the next-generation cloud. “Powered by eight NVIDIA Tesla P100 GPUs in each chassis, HGX-1 features an innovative switching design based on NVIDIA NVLink interconnect technology and the PCIe standard, enabling a CPU to dynamically connect to any number of GPUs. This allows cloud service providers that standardize on the HGX-1 infrastructure to offer customers a range of CPU and GPU machine instance configurations.”
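Purely to illustrate the scaling arithmetic behind those instance configurations (this is not an actual cloud SKU list or provisioning API), a short sketch enumerates the GPU counts an instance could be given when up to four 8-GPU HGX-1 chassis are linked:

```python
# Illustrative arithmetic only, not a real provisioning API: GPU counts a
# provider could expose per machine instance when up to four HGX-1 chassis
# (8 Tesla P100 GPUs each) are linked, as described above.
GPUS_PER_CHASSIS = 8
MAX_CHASSIS = 4
MAX_GPUS = GPUS_PER_CHASSIS * MAX_CHASSIS       # 32 GPUs total

for gpus in (1, 2, 4, 8, 16, 32):
    if gpus > MAX_GPUS:
        break
    chassis_needed = -(-gpus // GPUS_PER_CHASSIS)   # ceiling division
    print(f"{gpus:2d}-GPU instance spans {chassis_needed} HGX-1 chassis")
```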

Penguin Computing Lands 9 CTS-1 Open Compute Project Supercomputers on the TOP500

In this video from SC16, Dan Dowling from Penguin Computing describes the company’s momentum with nine CTS-1 supercomputers on the TOP500. The systems were procured under the NNSA’s Tri-Laboratory Commodity Technology Systems program, or CTS-1, to bolster computing for national security at Los Alamos, Sandia, and Lawrence Livermore national laboratories. The resulting deployment of these supercomputing clusters is among the world’s largest Open Compute-based installations, a major validation of Penguin Computing’s leadership in Open Compute high-performance computing architecture.

Penguin Computing Rolls Out Magna 1015 OpenPOWER Servers

Based on the “Barreleye” platform design pioneered by Rackspace and promoted by the OpenPOWER Foundation and the Open Compute Project (OCP) Foundation, the Penguin Magna 1015 targets memory- and I/O-intensive workloads, including high-density virtualization and data analytics. The Magna 1015 system uses the Open Rack physical infrastructure defined by the OCP Foundation and adopted by the largest hyperscale data centers, providing operational cost savings from the shared power infrastructure and improved serviceability.