NSF Funds Jetstream Cloud for Science and Engineering

The Pervasive Technology Institute at Indiana University has received an NSF grant to create the foundation's first science and engineering research cloud, Jetstream.

Cornell University CAC to Teach Cloud Computing to Researchers

The Cornell University Center for Advanced Computing (CAC) has announced that, as part of the NSF-funded Jetstream program, it will be responsible for developing cloud computing training for the US research community.

Radio Free HPC Reviews 2014 – The Year in HPC

In this podcast, Rich, Dan, and Henry review the year in HPC. It was a wild ride, and here we highlight some of our best shows from the past twelve months and the technology milestones that made 2014 a banner year in high performance computing. And did I mention we have a few laughs along the way?

Video: With AWS, HPC Now Means ‘High Personal Computing’

Since 2011, ONS, the organization responsible for planning and operating the Brazilian electric sector, has been using AWS to run daily simulations built on complex mathematical models. The MIT StarCluster toolkit greatly simplifies running HPC on AWS and lets ONS provision a high performance cluster in less than five minutes.

How the ‘C’ in HPC can now Stand for Cloud

Most IaaS (infrastructure as a service) vendors, such as Rackspace, Amazon, and Savvis, use various virtualization technologies to manage the underlying hardware on which they build their offerings. Unfortunately, the virtualization technologies in use vary from vendor to vendor and are sometimes kept secret. The question of virtual machines versus physical machines for high performance computing (HPC) applications is therefore germane to any discussion of HPC in the cloud.

Slidecast: Cycle Computing Powers 70,000-core AWS Cluster for HGST

Has Cloud HPC finally made its way to the Missing Middle? In this slidecast, Jason Stowe from Cycle Computing describes how the company enabled HGST to spin up a 70,000-core cluster on AWS and then return it eight hours later. “One of HGST’s engineering workloads seeks to find an optimal advanced drive head design, taking 30 days to complete on an in-house cluster. In layman’s terms, this workload runs 1 million simulations for designs based upon 22 different design parameters running on 3 drive media. Running these simulations using an in-house, specially built simulator, the workload takes approximately 30 days to complete on an internal cluster.”
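The workload Stowe describes is an embarrassingly parallel parameter sweep: every combination of design-parameter values and drive media is an independent simulation, which is exactly why it maps so cleanly onto a rented 70,000-core cluster. The Python sketch below illustrates only that fan-out pattern; simulate_head_design and the tiny grid are hypothetical placeholders, not HGST’s proprietary simulator or Cycle Computing’s software.

```python
# Illustrative parameter-sweep fan-out; simulate_head_design is a stand-in,
# not HGST's simulator, and the grid is far smaller than the real 22-parameter sweep.
import itertools
from concurrent.futures import ProcessPoolExecutor

def simulate_head_design(params, medium):
    """Placeholder for one drive-head simulation; returns a dummy figure of merit."""
    return sum(params) / (medium + 1)

def run_sweep(parameter_grid, media):
    # Every (parameter combination, drive medium) pair is one independent job.
    jobs = itertools.product(itertools.product(*parameter_grid), media)
    with ProcessPoolExecutor() as pool:
        futures = [pool.submit(simulate_head_design, params, medium)
                   for params, medium in jobs]
        return [f.result() for f in futures]

if __name__ == "__main__":
    # Tiny illustrative grid; the real sweep spans 22 parameters and 3 drive media.
    grid = [[0.1, 0.2], [1.0, 2.0], [5, 10]]
    print(max(run_sweep(grid, media=[0, 1, 2])))
```

Because the jobs share no state, the same pattern scales from a laptop process pool to a cloud scheduler that simply hands each simulation to a different core.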

Avere Introduces FXT Virtual Edge Filer for Amazon EC2

With the Avere FXT Edge Filer, you basically get the same capabilities operating in the EC2 cloud as you get from our physical appliances. You run our software, which has the intelligence to automatically cache the active data up in the cloud. It pulls this data either from the Amazon S3 storage cloud or from the NAS or object systems in your data center, and the goal is to hide the latency to the storage.
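What Avere is describing is essentially a read-through edge cache: hot data is served from a fast local tier, and the slower backing store, whether S3 or a data-center NAS, is consulted only on a miss. The following Python sketch is purely conceptual; it shows the caching idea, not Avere’s implementation, and fetch_from_s3 is a hypothetical stand-in for the slow backing store.

```python
# Conceptual read-through cache with LRU eviction; not Avere's software or API.
from collections import OrderedDict

class EdgeCache:
    def __init__(self, backing_fetch, capacity=1024):
        self.backing_fetch = backing_fetch   # slow path: S3 or data-center NAS
        self.capacity = capacity
        self._cache = OrderedDict()          # least-recently-used entry sits first

    def read(self, key):
        if key in self._cache:
            self._cache.move_to_end(key)     # hit: no backend latency paid
            return self._cache[key]
        data = self.backing_fetch(key)       # miss: pay the latency once
        self._cache[key] = data
        if len(self._cache) > self.capacity:
            self._cache.popitem(last=False)  # evict the coldest entry
        return data

# Usage sketch with a hypothetical backing store.
def fetch_from_s3(key):
    return b"object-bytes-for-" + key.encode()

cache = EdgeCache(fetch_from_s3, capacity=2)
cache.read("results/run1")   # miss: fetched from the backing store
cache.read("results/run1")   # hit: served from the edge cache
```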

How to Reap the Benefits of the Evolving HPC Cloud

In such a demanding and dynamic HPC environment, cloud computing technologies, whether deployed as a private cloud or in conjunction with a public cloud, represent a powerful approach to managing technical computing resources. Learn how, by breaking down internal compute silos, masking the underlying HPC complexity from the scientist-clinician researcher community, and providing transparency and control to IT managers, cloud computing strategies and tools help organizations of all sizes effectively manage their HPC assets and the growing compute workloads that consume them.

Virtual Supercomputer Service Enters Beta

“We would like to provide HPC resources and expertise to a broader business and academic community to accelerate their research and product development. We believe that the Virtual Supercomputer is more than just a technological platform – it is a tool to democratize the HPC industry. And this is how the concept of eManufacturing will become a reality,” says Dmytro Fedyukov, CEO of Massive Solutions. “We welcome users, datacenters, universities, application developers, and experts to evaluate the beta service and join the partner alliance to make VSC a success.”

Video: DDN Ships IME Infinite Memory Engine

In this video, Molly Rector from DDN describes the company’s new IME Infinite Memory Engine. The interview was recorded right after the DDN User Group Meeting at SC14, and celebrations were indeed in order as Molly and Rich enjoyed a Hurricane punch along the way. “IME is a highly-transactional, resilient & reliable ‘burst buffer cache’ for High Performance Computing & Big Data. IME extracts the best performance efficiency across the I/O hierarchy, increasing system reliability multifold, while reducing Exascale I/O TCO by $100Ms.”
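A burst buffer like IME sits between the compute nodes and the parallel file system: it absorbs bursty checkpoint writes at flash or memory speed and drains them to the slower backing store in the background. The toy Python sketch below illustrates only that write-behind idea; it is not DDN’s design, and slow_pfs_write is a hypothetical stand-in for the parallel file system.

```python
# Toy write-behind buffer illustrating the burst-buffer concept; not DDN IME.
import queue
import threading

class BurstBuffer:
    def __init__(self, drain_to):
        self._pending = queue.Queue()
        self._drain_to = drain_to                    # slow path, e.g. the parallel file system
        threading.Thread(target=self._drain, daemon=True).start()

    def write(self, block):
        self._pending.put(block)                     # returns at buffer speed

    def _drain(self):
        while True:
            block = self._pending.get()
            self._drain_to(block)                    # trickles out at backing-store speed
            self._pending.task_done()

    def flush(self):
        self._pending.join()                         # block until everything has landed

# Usage sketch with a hypothetical slow backing store.
def slow_pfs_write(block):
    pass

bb = BurstBuffer(slow_pfs_write)
for i in range(100):
    bb.write(f"checkpoint-{i}".encode())             # the application sees fast writes
bb.flush()
```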