
Video: With AWS, HPC Now Means ‘High Personal Computing’


Since 2011, ONS (the organization responsible for planning and operating the Brazilian Electric Sector) has been using AWS to run daily simulations based on complex mathematical models. The MIT StarCluster toolkit makes running HPC on AWS far less complex, letting ONS provision a high-performance cluster in less than five minutes.
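
Tools like StarCluster essentially script the underlying EC2 calls for you. As a rough illustration of what gets automated, here is a minimal boto3 sketch of launching a tightly coupled cluster into a placement group; the AMI ID, key name, instance type, and node count are placeholders, not ONS's actual configuration:

```python
# Hypothetical sketch of the EC2 calls a tool like StarCluster automates.
# AMI ID, key name, instance type, and node count are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Low-latency placement group for tightly coupled MPI workloads
ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

# Launch the compute nodes into the placement group
response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",       # placeholder HPC-ready AMI
    InstanceType="c4.8xlarge",    # placeholder compute-optimized type
    KeyName="my-keypair",         # placeholder SSH key
    MinCount=16,
    MaxCount=16,
    Placement={"GroupName": "hpc-cluster"},
)
node_ids = [i["InstanceId"] for i in response["Instances"]]
print(f"Launched {len(node_ids)} nodes")
```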

Slidecast: Cycle Computing Powers 70,000-core AWS Cluster for HGST


Has Cloud HPC finally made its way to the Missing Middle? In this slidecast, Jason Stowe from Cycle Computing describes how the company enabled HGST to spin up a 70,000-core cluster on AWS and then return it 8 hours later. “One of HGST’s engineering workloads seeks to find an optimal advanced drive head design. In layman’s terms, this workload runs 1 million simulations for designs based upon 22 different design parameters running on 3 drive media. Running these simulations using an in-house, specially built simulator, the workload takes approximately 30 days to complete on an internal cluster.”
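
Cycle Computing's scheduling software is proprietary, but the shape of the workload, a full sweep over a design-parameter space on 3 drive media, is easy to sketch. Here is a hypothetical fan-out of such a sweep to a work queue; the parameter grid (only 3 of the 22 parameters invented here) and the SQS queue URL are illustrative, not HGST's real setup:

```python
# Hypothetical fan-out of a drive-head design sweep to a work queue.
# The parameter grid and queue URL are illustrative, not HGST's real setup.
import itertools
import json
import boto3

# Toy stand-in for the 22-parameter design space (only 3 shown)
design_grid = {
    "head_width_nm": [20, 25, 30],
    "coil_turns": [4, 6, 8],
    "shield_gap_nm": [15, 20],
}
media = ["medium_a", "medium_b", "medium_c"]  # the 3 drive media

sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/sim-jobs"  # placeholder

keys = list(design_grid)
for values in itertools.product(*design_grid.values()):
    for medium in media:
        job = dict(zip(keys, values), medium=medium)
        # Each message becomes one independent simulation on the cluster
        sqs.send_message(QueueUrl=queue_url, MessageBody=json.dumps(job))
```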

Radio Free HPC: The Day the Cloud Died


In this podcast, the Radio Free HPC team discusses the possibility of a future where the Big 3 (Amazon, Google, and Microsoft) figure out that Cloud is not profitable and pull the plug. If that Cloud Apocalypse sounds far-fetched, a look at recent AWS revenue numbers may prompt you to stock up your bomb shelter.

ISC Cloud Conference Looks at Barriers to Adoption

[Photo: David Pellerin, AWS]

“Slagter remarked that a cloud environment meant at least three actors had both practical and legal responsibility in keeping data private and secure: the cloud provider itself was responsible for the physical security of the building where the servers were located as well as the security protocols used; the ISV had responsibility for the security of the application that was being run; and the customer had to have a set of security policies and procedures governing who had access to the portal into the cloud and who was licensed, within the customer’s own company, to use the application software and access the data.”
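
On the customer's slice of that three-way split, access control typically comes down to identity policies. As a loose sketch of that duty (the policy name and bucket ARN are invented for illustration, and a real policy would be far more granular), an IAM policy created with boto3 might look like:

```python
# Hypothetical sketch of the customer-side duty: an IAM policy
# restricting who can reach the data behind the cloud "portal".
# Policy name and bucket ARN are invented for illustration.
import json
import boto3

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::example-hpc-results/*",
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="hpc-app-users",  # placeholder name
    PolicyDocument=json.dumps(policy_document),
)
```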

Video: AWS Cloud for HPC and Big Data


“High Performance Computing allows scientists and engineers to solve complex science, engineering, and business problems using applications that require high bandwidth, low latency networking, and very high compute capabilities. AWS allows you to increase the speed of research by running HPC in the cloud and to reduce costs by providing Cluster Compute or Cluster GPU servers on-demand without large capital investments.”

Slidecast: How Cycle Computing Spun Up a Petascale CycleCloud


“For this big workload, a 156,314-core CycleCloud behemoth spanning 8 AWS regions, totaling 1.21 petaFLOPS (RPeak, not RMax) of aggregate compute power, simulated 205,000 materials, crunching 264 compute-years in only 18 hours. Thanks to Cycle’s software and Amazon’s Spot Instances, a supercomputing environment worth $68M if you had bought it ran 2.3 million hours of material science (approximately 264 compute-years of simulation) in only 18 hours, at a cost of only $33,000, or $0.16 per molecule.”
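
The unit costs follow directly from the quoted figures; a few lines of Python reproduce them as a quick sanity check:

```python
# Back-of-the-envelope check of the quoted CycleCloud run economics.
total_cost = 33000      # USD, as quoted
molecules = 205000      # materials simulated
compute_years = 264

core_hours = compute_years * 8760                        # ~2.31 million hours
print(f"core-hours: {core_hours:,}")                     # 2,312,640
print(f"per molecule: ${total_cost / molecules:.2f}")    # ~$0.16
print(f"per core-hour: ${total_cost / core_hours:.3f}")  # ~$0.014
```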

AWS Powers Largest Genomics Analysis Cluster in the World


“Working with DNAnexus and Amazon Web Services, we were able to rapidly deploy a cloud-based solution that allows us to scale up our support to researchers at the HGSC and make our Mercury pipeline analysis data accessible to the CHARGE Consortium, enabling what will be the largest genomic analysis project to have ever taken place in the cloud.”

Deploying a Lustre Cluster for HPC Applications in the Cloud


Amazon offers several storage-related services, but no shared file system service. That gap points to a need for a parallel file system like Lustre.
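
Rolling your own Lustre on EC2 means standing up the metadata and object storage servers yourself. Here is a heavily simplified boto3 sketch of provisioning the server nodes and an EBS backing volume per OSS; the AMI ID, instance types, counts, and sizes are placeholders, and the actual Lustre formatting and mounting (mkfs.lustre, mount -t lustre) still happens on the nodes themselves:

```python
# Hypothetical provisioning of Lustre server nodes on EC2.
# AMI ID, instance types, counts, and volume sizes are placeholders;
# Lustre formatting/mounting (mkfs.lustre, mount -t lustre) happens on-node.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

def launch_role(role, count, instance_type):
    """Launch `count` nodes tagged with their Lustre role (mds or oss)."""
    return ec2.create_instances(
        ImageId="ami-xxxxxxxx",   # placeholder Lustre-ready AMI
        InstanceType=instance_type,
        MinCount=count,
        MaxCount=count,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "lustre-role", "Value": role}],
        }],
    )

mds = launch_role("mds", 1, "c4.xlarge")    # metadata server
oss = launch_role("oss", 4, "c4.2xlarge")   # object storage servers

# One EBS volume per OSS as the OST backing store
for node in oss:
    vol = ec2.create_volume(Size=500, AvailabilityZone="us-east-1a")
    # Attach once the instance is running, e.g.:
    # vol.attach_to_instance(InstanceId=node.id, Device="/dev/sdf")
```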