

Moving HPC Workloads to the Cloud with Avere Systems at SC17

In this video from SC17 in Denver, Bernie Behn from Avere Systems describes how the company helps customers migrate HPC workloads to the Cloud.

“HPC workloads are incredibly large, encompassing datasets as large as several petabytes. With matching storage and compute requirements, organizations are determining how to best use the vast resources offered by cloud service providers to fill any gaps. However, large file sizes create difficulties when trying to move HPC data to these remote resources. Avere Systems helps solve these challenges to make HPC in the cloud a viable option.”

Traditional methods of moving data are expensive and time-consuming, and these processes often negate the value the cloud offers. Moving all of your data to the cloud is not necessary to run an individual application's workload on cloud compute; in fact, you don't need to move large datasets at all. A cloud caching filer can stage just the data each job requires, so the entire data-movement step is handled by the caching appliance.

The large datasets never need to leave your data center. Only the data a job actually requires (typically a small percentage of the total) is staged through the caching filer to the application running in the cloud. Once the job finishes, the filer writes that data back to its on-prem location.
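The staging pattern described above can be sketched as a read-through, write-back cache. This is a minimal illustrative sketch, not Avere's implementation; the class and method names are hypothetical.

```python
class CachingFiler:
    """Serves a cloud job from a small cache, pulling data from
    on-prem storage only on first access and writing modified
    data back when the job finishes."""

    def __init__(self, on_prem):
        self.on_prem = on_prem   # the full dataset stays here
        self.cache = {}          # only the job's working set lives here
        self.dirty = set()       # keys modified by the cloud job

    def read(self, key):
        # Read-through: fetch from on-prem only on a cache miss,
        # so unused data never crosses the wire.
        if key not in self.cache:
            self.cache[key] = self.on_prem[key]
        return self.cache[key]

    def write(self, key, value):
        # Writes land in the cache and are flushed back later.
        self.cache[key] = value
        self.dirty.add(key)

    def flush(self):
        # After the job completes, send only modified data back
        # to its on-prem location and release the cache.
        for key in self.dirty:
            self.on_prem[key] = self.cache[key]
        self.dirty.clear()
        self.cache.clear()
```

The key property is that the on-prem dataset is the system of record throughout: the cache holds only what the running job has touched, and only dirty data travels back.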

In a typical all-on-prem model, you would need to move data off the local machine to free it for the next run. With the cloud, you can deploy and tear down resources on demand as you need them. Once your workloads finish running, billing for compute stops, and you haven't had to purchase additional hardware.
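The deploy-and-tear-down lifecycle above can be sketched as follows. The `provision`, `run_job`, and `teardown` callables are stand-ins for a cloud provider's API, not any specific service; this is an assumption-laden illustration of the pattern, not a real orchestration tool.

```python
import time

def run_burst_job(provision, run_job, teardown):
    """Deploy resources, run the workload, then release everything.

    `provision`, `run_job`, and `teardown` are hypothetical callables
    standing in for a cloud provider's API.
    """
    cluster = provision()            # billing effectively starts here
    start = time.monotonic()
    try:
        run_job(cluster)
    finally:
        teardown(cluster)            # billing stops here, even on failure
    return time.monotonic() - start  # billable wall-clock seconds
```

Wrapping teardown in `finally` reflects the economic point of the paragraph: resources, and therefore charges, exist only for the duration of the job, even if the job fails.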

See our complete coverage of SC17

