Google Compute Engine offers VMs with 96 Skylake vCPUs and 624GB of Memory

Google Compute Engine now offers new VMs with the most Skylake vCPUs of any cloud provider. “Skylake in turn provides up to 20% faster compute performance, 82% faster HPC performance, and almost 2X the memory bandwidth compared with the previous generation Xeon. Need even more compute power or memory? We’re also working on a range of new, even larger VMs, with up to 4TB of memory.”

Kubernetes Meets HPC

“While the notion of packaging a workload into a Docker container, publishing it to a registry, and submitting a YAML description of the workload is second nature to users of Kubernetes, this is foreign to most HPC users. An analyst running models in R, MATLAB or Stata simply wants to submit their simulations, monitor their execution, and get results as quickly as possible.”
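
For readers who have not seen that workflow, here is a minimal sketch, assuming the official kubernetes Python client; the container image, job name, and resource requests are hypothetical placeholders, not taken from the article.

```python
# Minimal sketch of the Kubernetes batch workflow described above, using the
# official `kubernetes` Python client. The image, job name, and resource
# requests are hypothetical placeholders.
from kubernetes import client, config

def submit_simulation_job():
    config.load_kube_config()  # use the same credentials kubectl uses

    container = client.V1Container(
        name="r-simulation",
        image="registry.example.com/stats/r-model:latest",  # hypothetical image
        command=["Rscript", "run_simulation.R"],
        resources=client.V1ResourceRequirements(
            requests={"cpu": "4", "memory": "8Gi"}
        ),
    )
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "r-simulation"}),
        spec=client.V1PodSpec(restart_policy="Never", containers=[container]),
    )
    job = client.V1Job(
        api_version="batch/v1",
        kind="Job",
        metadata=client.V1ObjectMeta(name="r-simulation"),
        spec=client.V1JobSpec(template=template, backoff_limit=0),
    )
    # Plays the same role as `kubectl create -f job.yaml` with the matching manifest.
    client.BatchV1Api().create_namespaced_job(namespace="default", body=job)

if __name__ == "__main__":
    submit_simulation_job()
```

The Job object above is the programmatic equivalent of the YAML description mentioned in the quote; kubectl would accept the same manifest directly.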

Embry-Riddle University Deploys Cray CS Supercomputer for Aerospace

Today Embry-Riddle Aeronautical University announced it has deployed a Cray CS400 supercomputer. The four-cabinet system will power collaborative applied research with industry partners at the University’s new research facility – the John Mica Engineering and Aerospace Innovation Complex (“MicaPlex”) at Embry-Riddle Research Park.

VMware Rolls Out vSphere Scale-Out Edition for Big Data and HPC Workloads

Today VMware introduced vSphere Scale-Out Edition, a new offering in the vSphere product line aimed at Big Data and HPC workloads. The edition includes the features and functions most useful for these workloads, such as those provided by the core vSphere hypervisor and the vSphere Distributed Switch. “This new solution also makes it possible to rapidly change and provision compute nodes. The solution will be offered at an attractive price point, optimized for Big Data and HPC environments.”

Mellanox BlueField Accelerates NVMe over Fabrics

Today Mellanox announced the availability of storage reference platforms based on its BlueField System-on-Chip (SoC), which combines a programmable multicore CPU, networking, storage, security, and virtualization acceleration engines into a single, highly integrated device. “BlueField is the most highly integrated NVMe over Fabrics solution,” said Michael Kagan, CTO of Mellanox. “By tightly integrating high-speed networking, programmable ARM cores, PCIe switching, cache, memory management, and smart offload technology in one chip, it improves performance, power efficiency, and affordability for flash storage arrays. BlueField is a key part of our Ethernet Storage Fabric solution, which is the most efficient way to network and share high-performance storage.”

Supercomputing by API: Connecting Modern Web Apps to HPC

In this video from OpenStack Australia, David Perry from the University of Melbourne presents: Supercomputing by API – Connecting Modern Web Apps to HPC. “OpenStack is a free and open-source set of software tools for building and managing cloud computing platforms for public and private clouds. OpenStack Australia Day is the region’s largest, and Australia’s best, conference focusing on Open Source cloud technology. Gathering users, vendors and solution providers, OpenStack Australia Day is an industry event to showcase the latest technologies and share real-world experiences of the next wave of IT virtualization.”
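
The premise of the talk, driving job submission from a web application over HTTP rather than an interactive SSH session, can be sketched roughly as follows; the gateway endpoint, token, and payload fields below are hypothetical and are not taken from the presentation.

```python
# Hedged sketch of a web app handing work to an HPC cluster through a REST
# gateway. The endpoint URL, token, and payload/response fields are
# hypothetical; a real submission service defines its own schema.
import requests

API_URL = "https://hpc.example.edu/api/v1/jobs"   # hypothetical gateway
API_TOKEN = "replace-with-a-real-token"           # issued by the gateway
HEADERS = {"Authorization": f"Bearer {API_TOKEN}"}

def submit_job(script_text, cores=16, walltime="02:00:00"):
    """POST a batch script plus a resource request; return the new job's id."""
    payload = {"script": script_text,
               "resources": {"cores": cores, "walltime": walltime}}
    resp = requests.post(API_URL, json=payload, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()["job_id"]          # hypothetical response field

def job_state(job_id):
    """Poll the gateway for the job's current state."""
    resp = requests.get(f"{API_URL}/{job_id}", headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()["state"]           # e.g. "queued", "running", "done"
```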

RCE Podcast Looks at Shifter Containers for HPC

In this RCE Podcast, Brock Palen and Jeff Squyres speak with Shane Canon and Doug Jacobsen from NERSC, the authors of Shifter. “Shifter is a prototype implementation that NERSC is developing and experimenting with as a scalable way of deploying containers in an HPC environment. It works by converting user- or staff-generated images from Docker, virtual machines, or CHOS (another method for delivering flexible environments) to a common format.”
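
As a rough sketch of what that looks like from the user's side, the snippet below drives the Shifter tools from Python; the image name is a placeholder, and the command names follow common Shifter usage, so exact commands and flags may differ by site and version.

```python
# Hedged sketch of a typical Shifter workflow driven from Python. The image
# name is a placeholder; the command names (shifterimg pull, shifter --image=)
# follow common Shifter usage and may vary by site and version.
import subprocess

IMAGE = "docker:myorg/climate-model:latest"   # hypothetical Docker image

# 1. Ask the Shifter image gateway to pull the Docker image and convert it
#    to Shifter's flattened, site-local format.
subprocess.run(["shifterimg", "pull", IMAGE], check=True)

# 2. Run a command inside the converted image on a login or compute node.
subprocess.run(["shifter", f"--image={IMAGE}", "python", "model.py"],
               check=True)
```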

How Charliecloud Simplifies Big Data Supercomputing at LANL

“Los Alamos has lots of supercomputing power, and we do lots of simulations that are well supported here. But we’ve found that Big Data analysis projects need to use different frameworks, which often have dependencies that differ from what we have already on the supercomputer. So, we’ve developed a lightweight ‘container’ approach that lets users package their own user-defined software stack in isolation from the host operating system.”
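
To make that concrete, here is a hedged sketch of the typical Charliecloud sequence (build with Docker, flatten to a tarball, unpack, run unprivileged) driven from Python; the image tag, paths, and analysis command are placeholders, and the command names reflect Charliecloud's documented workflow at the time, so check the current documentation for exact usage.

```python
# Hedged sketch of the Charliecloud sequence: build an image with Docker,
# flatten it to a tarball, unpack it, and run it unprivileged with ch-run.
# The image tag, paths, and analysis command are hypothetical placeholders.
import subprocess

TAG = "bigdata-stack"   # hypothetical image tag
DEST = "/var/tmp"       # where the flattened image tree will live

# Build the image (ch-build wraps `docker build` on a build machine).
subprocess.run(["ch-build", "-t", TAG, "."], check=True)

# Flatten the Docker image into a plain tarball, then unpack it.
subprocess.run(["ch-docker2tar", TAG, DEST], check=True)
subprocess.run(["ch-tar2dir", f"{DEST}/{TAG}.tar.gz", DEST], check=True)

# Run the user-defined stack isolated from the host operating system.
subprocess.run(["ch-run", f"{DEST}/{TAG}", "--", "spark-submit", "analysis.py"],
               check=True)
```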

Learning from ZFS to Scale Storage on and under Containers

“What is so new about the container environment that a new class of storage software is emerging to address these use cases? And can container orchestration systems themselves be part of the solution? As is often the case in storage, metadata matters here. In the open-source OpenEBS.io we are implementing approaches, inspired in some regards by ZFS, to enable much more efficient scale-out block storage for containers that is itself containerized. The goal is to enable storage to be treated in many regards as just another application while, of course, also providing storage services to stateful applications in the environment.”
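
As a toy illustration of the ZFS-inspired idea (this is not OpenEBS code), the sketch below shows copy-on-write block mapping in which a snapshot is just a frozen copy of metadata that shares unmodified data blocks with the live volume.

```python
# Toy copy-on-write block store (not OpenEBS code). Writes never modify a
# block in place; a snapshot is a frozen copy of the block-mapping metadata,
# so it shares every unmodified data block with the live volume.
class COWVolume:
    def __init__(self):
        self._blocks = {}      # physical block id -> immutable data
        self._next_id = 0
        self._map = {}         # logical block number -> physical block id
        self._snapshots = {}   # snapshot name -> frozen copy of the map

    def write(self, lbn, data):
        """Allocate a fresh block; old blocks remain for snapshots that map them."""
        self._blocks[self._next_id] = data
        self._map[lbn] = self._next_id
        self._next_id += 1

    def read(self, lbn, snapshot=None):
        mapping = self._snapshots[snapshot] if snapshot else self._map
        return self._blocks[mapping[lbn]]

    def snapshot(self, name):
        """O(metadata) snapshot: copy the map, share all data blocks."""
        self._snapshots[name] = dict(self._map)


vol = COWVolume()
vol.write(0, b"v1")
vol.snapshot("before-upgrade")
vol.write(0, b"v2")                          # new block; the snapshot keeps b"v1"
assert vol.read(0) == b"v2"
assert vol.read(0, "before-upgrade") == b"v1"
```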

ISC 2017 Workshop Preview: Optimizing Linux Containers for HPC & Big Data Workloads

Christian Kniep is hosting a half-day Linux Container Workshop on Optimizing IT Infrastructure and High-Performance Workloads on June 23 in Frankfurt. “Docker, as the dominant flavor of Linux Containers, continues to gain momentum in datacenters all over the world. It can benefit legacy infrastructure thanks to its lower overhead compared to traditional, hypervisor-based virtualization. But there is more to Linux Containers, and to Docker in particular, which this workshop will explore.”