HPC Vendors Showcase Plans for ARM Products

Are you ready? ARM-based HPC systems will be available by the end of 2017. At ISC 2017, Penguin Computing and Bull both announced that they will deliver products tailored for the HPC industry. Both companies’ systems will feature Cavium’s ARMv8-based ThunderX2 platform.

Penguin Computing FrostByte adds BeeGFS Storage

Today Penguin Computing announced FrostByte with ThinkParQ BeeGFS, the latest member of its family of software-defined storage solutions. FrostByte is Penguin Computing’s scalable storage solution for HPC clusters, high-performance enterprise applications and data-intensive analytics. “We are pleased to announce our Gold Partner relationship with ThinkParQ,” said Tom Coull, President and CEO, Penguin Computing. “Together, Penguin Computing and ThinkParQ can deliver a fully supported, scalable storage solution based on BeeGFS, engineered for optimal performance and reliability with best-in-class hardware and expert services.”
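Parallel file systems such as BeeGFS achieve scalable throughput by striping each file across multiple storage targets in fixed-size chunks, so reads and writes hit many servers in parallel. A minimal sketch of the round-robin striping idea (the chunk size and target count below are illustrative assumptions, not actual FrostByte or BeeGFS defaults):

```python
# Illustrative round-robin file striping, as done conceptually by
# parallel file systems like BeeGFS. The chunk size and number of
# storage targets are hypothetical values for illustration only.

CHUNK_SIZE = 512 * 1024  # assumed stripe chunk size in bytes
NUM_TARGETS = 4          # assumed number of storage targets

def chunk_to_target(offset: int) -> int:
    """Map a byte offset within a file to the index of the storage
    target holding that chunk, striping round-robin across targets."""
    chunk_index = offset // CHUNK_SIZE
    return chunk_index % NUM_TARGETS

# A 2 MiB file spans four chunks, landing on four different targets:
targets = [chunk_to_target(i * CHUNK_SIZE) for i in range(4)]
print(targets)  # -> [0, 1, 2, 3]
```

Because consecutive chunks land on different targets, a single large sequential read can be served by all targets at once, which is where the aggregate-bandwidth scaling comes from.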

Asetek Enters Product Development Agreement with Major Datacenter Player

Today liquid-cooling technology provider Asetek announced that the company has signed a development agreement with a “major player” in the data center space. “This development agreement is the direct result of several years of collaboration and I am very pleased that we have come this far with our partner. I expect this is the major breakthrough we have been waiting for,” said André Sloth Eriksen, CEO and founder of Asetek.

Penguin Computing Releases Scyld ClusterWare 7

“The release of Scyld ClusterWare 7 continues the growth of Penguin’s HPC provisioning software and enables support of large scale clusters ranging to thousands of nodes,” said Victor Gregorio, Senior Vice President of Cloud Services at Penguin Computing. “We are pleased to provide this upgraded version of Scyld ClusterWare to the community for Red Hat Enterprise Linux 7, CentOS 7 and Scientific Linux 7.”

Penguin Computing Lands 9 CTS-1 Open Compute Project Supercomputers on the TOP500

In this video from SC16, Dan Dowling from Penguin Computing describes the company’s momentum with nine CTS-1 supercomputers on the TOP500. The systems were procured under NNSA’s Tri-Laboratory Commodity Technology Systems program, or CTS-1, to bolster computing for national security at Los Alamos, Sandia and Lawrence Livermore national laboratories. The resulting deployment of these supercomputing clusters is among the world’s largest Open Compute-based installations, a major validation of Penguin Computing’s leadership in Open Compute high-performance computing architecture.

Asetek Lands Nine Installations on the Green500

“As seen at installations included on both the Green500 and Top500 lists, Asetek’s distributed liquid cooling architecture enables cluster energy efficiency in addition to sustained and un-throttled cluster performance,” said John Hamill, Vice President of WW Sales and Marketing. “Around the world, data centers are increasingly using Asetek technology for High Performance Computing while reducing energy costs.”
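The Green500 ranks supercomputers by energy efficiency, i.e. sustained LINPACK performance per watt, which is why cooling efficiency matters to the ranking. A minimal sketch of that metric (the cluster figures below are made-up illustrative numbers, not measurements from any Asetek installation):

```python
def gflops_per_watt(rmax_tflops: float, power_kw: float) -> float:
    """Green500-style efficiency metric: sustained performance per watt.
    Converts TFLOP/s to GFLOP/s and kW to W before dividing."""
    return (rmax_tflops * 1000.0) / (power_kw * 1000.0)

# Hypothetical cluster: 1000 TFLOP/s sustained at 500 kW total power.
print(gflops_per_watt(1000.0, 500.0))  # -> 2.0 GFLOPS/W
```

Lowering the power term, for example by replacing air cooling with direct liquid cooling, improves the score even when sustained performance is unchanged.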

Omni-Path Comes to Penguin Computing On-Demand

Today Penguin Computing announced several important achievements of its Penguin Computing On-Demand (POD) HPC cloud service, including a recent 50 percent increase in capacity and plans to double POD’s total capacity in Q1 2017. The upgrade will include new Intel Xeon processors and Intel Omni-Path architecture. “Rapid demand for and growth in our POD business reflects the significant benefits customers are experiencing, particularly since we announced availability of the OCP-compliant Tundra platform on POD late last year,” said Tom Coull, President and CEO, Penguin Computing. “With the Tundra platform, our customers have greater capacity due to faster scaling combined with increased performance and streamlined costs. Tundra on POD also highlights the growth and maturing market role of open computing, with thousands of high-speed, cost-efficient cores available to meet customers’ needs for faster, easier deployment of capacity at a low cost.”

Penguin Computing Adds Pascal GPUs to Open Compute Tundra Systems

“Pairing Tundra Relion X1904GT with our Tundra Relion 1930g, we now have a complete deep learning solution in Open Compute form factor that covers both training and inference requirements,” said William Wu, Director of Product Management at Penguin Computing. “With the ever evolving deep learning market, the X1904GT with its flexible PCI-E topologies eclipses the cookie cutter approach, providing a solution optimized for customers’ respective applications. Our collaboration with NVIDIA is combating the perennial need to overcome scaling challenges for deep learning and HPC.”

Penguin Computing Adds Remote Desktop Collaboration to Scyld Cloud Workstation

Today Penguin Computing announced Scyld Cloud Workstation 3.0, a 3D-accelerated remote desktop solution which provides true multi-user remote desktop collaboration for cloud-based Linux and Windows desktops. “Unlike other remote desktop solutions, collaboration via Scyld Cloud Workstation is more like sitting in-person with other engineers because a user can hand off control of their desktop to simplify collaboration on a project,” said Victor Gregorio, Vice President and General Manager, Cloud Services, Penguin Computing. “Scyld Cloud Workstation brings collaboration to life, providing a much more thorough and proficient interaction among researchers and engineers working together on a remote desktop. Ultimately, this allows customers a more efficient means to leverage cloud-based desktop solutions.”

Penguin Computing Rolls Out Magna 1015 OpenPOWER Servers

Based on the “Barreleye” platform design pioneered by Rackspace and promoted by the OpenPOWER Foundation and the Open Compute Project (OCP) Foundation, the Penguin Magna 1015 targets memory- and I/O-intensive workloads, including high-density virtualization and data analytics. The Magna 1015 system uses the Open Rack physical infrastructure defined by the OCP Foundation and adopted by the largest hyperscale data centers, providing operational cost savings from the shared power infrastructure and improved serviceability.