Interview: Why Software Defined Infrastructure Makes Sense for HPC

Jay Muelhoefer, IBM

“I came to IBM via the acquisition of Platform Computing. There have also been other IBM assets around HPC, namely GPFS. We’ll look at the evolution of those assets, how they really come together under this concept of software-defined infrastructure, and how we’re now taking these capabilities and expanding them into other initiatives that have sort of bled into the HPC space.”

Izmir Institute of Technology Manages HPC with Bright Computing


Today Bright Computing announced that Izmir Institute of Technology (IYTE) is using the company’s software to manage its HPC infrastructure.

Slidecast: Software Defined Infrastructure for HPC


“Imagine an entire IT infrastructure controlled not by hands and hardware, but by software. One in which application workloads such as big data, analytics, simulation and design are serviced automatically by the most appropriate resource, whether running locally or in the cloud. A Software Defined Infrastructure enables your organization to deliver IT services in the most efficient way possible, optimizing resource utilization to accelerate time to results and reduce costs.”

Penguin Computing Launches Scyld ClusterWare for Hadoop


Today Penguin Computing announced Scyld ClusterWare for Hadoop, adding greater capability to the company’s existing Scyld ClusterWare high performance computing cluster management solution.

Inside Lustre Hierarchical Storage Management (HSM)


Different levels of importance are always assigned to the various data files in a computer system, especially in a very large system storing petabytes of data. To maximize the use of the highest-speed storage, Hierarchical Storage Management (HSM) was developed to keep data within easy reach of users while storing it at the appropriate speed and price.
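As a rough illustration of the idea, the Python sketch below suggests a storage tier for a file based on how long ago it was last accessed. The tier names and age thresholds are invented for illustration; real HSM policies, Lustre's included, are site-specific and weigh far more than access time.

```python
import os
import time
from typing import Optional

# Hypothetical three-tier policy keyed on days since last access.
# These thresholds are illustrative only, not from any real deployment.
TIERS = [
    (7,    "fast"),      # hot data: high-speed (e.g. flash) storage
    (90,   "capacity"),  # warm data: spinning disk
    (None, "archive"),   # cold data: tape or object store
]

def pick_tier(path: str, now: Optional[float] = None) -> str:
    """Suggest the tier a file should live on, given its access age."""
    now = time.time() if now is None else now
    age_days = (now - os.stat(path).st_atime) / 86400
    for limit, tier in TIERS:
        if limit is None or age_days < limit:
            return tier

# Example: print the suggested tier for every file under /data
# for root, _, files in os.walk("/data"):
#     for name in files:
#         path = os.path.join(root, name)
#         print(path, "->", pick_tier(path))
```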

A Merit Based Priority Scheme to Optimize the Use of Computing Infrastructure


“Following up on ‘Streamlining Research Computing Infrastructure: A Small School’s Perspective’, this talk will discuss how a researcher’s priority in a shared HPC cluster is computed based on her/his usage pattern as well as productivity. Coupled with transparent functionality and a reporting scheme at nearly every level, this merit-based priority scheme has enabled consistently high usage (85+%) throughout 20 months of operation and has helped researchers at Michigan Technological University to produce 30+ publications from approximately 30 different projects.”
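The talk does not publish its exact formula, but a toy scoring function along these lines illustrates the general idea: researchers running below their fair share of CPU hours, and those with documented productivity, score higher. Every weight and term below is invented for illustration, not taken from the Michigan Tech scheme.

```python
def merit_priority(cpu_hours_used: float,
                   fair_share_hours: float,
                   publications: int,
                   w_usage: float = 0.7,
                   w_productivity: float = 0.3) -> float:
    """Toy merit-based priority score (hypothetical weights and terms)."""
    # Running below one's fair share raises priority; over-use lowers it.
    usage_term = 1.0 - min(cpu_hours_used / fair_share_hours, 2.0) / 2.0
    # Diminishing returns on documented productivity.
    productivity_term = publications / (publications + 5)
    return w_usage * usage_term + w_productivity * productivity_term

# A researcher under quota with several publications outranks one
# who is over quota with none:
print(merit_priority(2_000, 10_000, publications=4))   # ~0.76
print(merit_priority(15_000, 10_000, publications=0))  # ~0.18
```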

Video: Moab Adds Elastic Computing Features


“We received an overwhelmingly positive response to the new Moab features during SC14, so we’re very excited to make the new features generally available. In a competitive computing landscape where enterprises need to accelerate insights, Moab matters,” said Rob Clyde, CEO of Adaptive Computing. “Automating workload workflows is imperative to shorten the timeline to discovery, and this latest version of Moab represents a huge step forward in helping enterprises achieve that. We are excited to reveal our latest innovations and continue driving competitive advantage for our customers.”

Adaptive Computing Rolls Out Moab 8.1


Moab 8.1 systems management software includes a revamped Web-based user interface with bolstered reporting and tracking capabilities that give greater insight into the job states, workloads, and nodes of an HPC system; massive gains in performance and scale; and elastic computing improvements that let the system expand onto additional resources as workloads demand.
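As a generic sketch of the elastic-computing idea (not Moab's actual policy or API), the snippet below decides how many cloud nodes to request once the queued workload exceeds what idle local nodes can absorb. All names and parameters are hypothetical.

```python
def nodes_to_burst(queued_jobs: int, idle_local_nodes: int,
                   max_cloud_nodes: int, jobs_per_node: int = 1) -> int:
    """Burst to external resources only when local capacity is exhausted."""
    backlog = queued_jobs - idle_local_nodes * jobs_per_node
    if backlog <= 0:
        return 0  # the local cluster can absorb the queue on its own
    needed = -(-backlog // jobs_per_node)  # ceiling division
    return min(needed, max_cloud_nodes)   # respect the cloud budget cap

# 40 queued jobs, 10 idle local nodes, a 20-node cloud budget:
print(nodes_to_burst(queued_jobs=40, idle_local_nodes=10,
                     max_cloud_nodes=20))  # -> 20
```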

Converged Solutions for HPC and Big Data using Clusters and Clouds


In this video from SC14, Ian Lumb from Bright Computing presents: Converged Solutions for HPC and Big Data Analytics using Clusters and Clouds.