With more and more enterprises moving to the cloud, there is a growing demand for developers, information technology professionals, and forward-thinking business leaders with demonstrated knowledge of cloud computing. To meet this need, Amazon Web Services has announced AWS Educate, a major initiative designed to fundamentally transform how cloud computing is taught and how best practices are shared in higher education.
Today ANSYS announced that the company is making its flagship engineering simulation software available on the cloud via Amazon Web Services. The new ANSYS Enterprise Cloud running on AWS enables customers to scale their simulation capacity – including infrastructure and software assets – on demand, in response to changing business requirements, optimizing efficiency and cost while responding to the growing demand for wider use of the technology.
Since 2011, ONS (the body responsible for planning and operating the Brazilian Electric Sector) has been using AWS to run daily simulations based on complex mathematical models. The MIT StarCluster toolkit makes running HPC on AWS much less complex and lets ONS provision a high-performance cluster in less than 5 minutes.
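For readers unfamiliar with StarCluster, the workflow is essentially a config file plus a couple of commands. The fragment below is an illustrative sketch, not ONS's actual configuration; the key name, cluster name, and instance settings are placeholders you would replace with your own.

```ini
; ~/.starcluster/config -- hypothetical example
[aws info]
AWS_ACCESS_KEY_ID = <your-access-key>
AWS_SECRET_ACCESS_KEY = <your-secret-key>

[key mykey]
KEY_LOCATION = ~/.ssh/mykey.rsa

[cluster smallcluster]
KEYNAME = mykey
CLUSTER_SIZE = 4
NODE_INSTANCE_TYPE = c3.xlarge
```

With that in place, `starcluster start smallcluster` provisions the cluster and `starcluster terminate smallcluster` tears it down, which is where the "less than 5 minutes" figure comes from.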
Has Cloud HPC finally made its way to the Missing Middle? In this slidecast, Jason Stowe from Cycle Computing describes how the company enabled HGST to spin up a 70,000-core cluster on AWS and then return it 8 hours later. “One of HGST’s engineering workloads seeks to find an optimal advanced drive head design. In layman’s terms, this workload runs 1 million simulations for designs based upon 22 different design parameters running on 3 drive media. Running these simulations using an in-house, specially built simulator, the workload takes approximately 30 days to complete on an internal cluster.”
In this podcast, the Radio Free HPC team discusses the possibility of a future where the Big 3 (Amazon, Google, and Microsoft) decide that Cloud is not profitable and pull the plug. If that Cloud Apocalypse sounds far-fetched, a look at recent AWS revenue numbers may prompt you to stock up your bomb shelter.
“Slagter remarked that a cloud environment meant at least three actors had both practical and legal responsibility in keeping data private and secure: the cloud provider itself was responsible for the physical security of the building where the servers were located as well as the security protocols used; the ISV had responsibility for the security of the application that was being run; and the customer had to have a set of security policies and procedures governing who had access to the portal into the cloud and who was licensed, within the customer’s own company, to use the application software and access the data.”
“High Performance Computing allows scientists and engineers to solve complex science, engineering, and business problems using applications that require high bandwidth, low latency networking, and very high compute capabilities. AWS allows you to increase the speed of research by running HPC in the cloud and to reduce costs by providing Cluster Compute or Cluster GPU servers on-demand without large capital investments.”
“For this big workload, a 156,314-core CycleCloud behemoth spanning 8 AWS regions and totaling 1.21 petaFLOPS (RPeak, not RMax) of aggregate compute power simulated 205,000 materials, crunching 264 compute-years in only 18 hours. Thanks to Cycle’s software and Amazon’s Spot Instances, a supercomputing environment worth $68M if you had bought it ran 2.3 million hours of material science simulation, approximately 264 compute-years, in only 18 hours, at a cost of only $33,000, or $0.16 per molecule.”
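Those headline figures hang together arithmetically; a quick sanity check using only the numbers quoted above:

```python
# Sanity-check the CycleCloud run's headline numbers.
compute_hours = 2_300_000   # "2.3 million hours of material science"
cost_usd = 33_000           # total Spot Instance bill
materials = 205_000         # molecules simulated
wall_hours = 18             # elapsed wall-clock time

# 2.3M hours is roughly 263 compute-years -- consistent with "approximately 264".
compute_years = compute_hours / (24 * 365)
print(round(compute_years))

# $33,000 over 205,000 materials is about $0.16 per molecule, as quoted.
print(round(cost_usd / materials, 2))

# Implied average parallelism: ~128,000 cores busy at once, plausibly below
# the 156,314-core peak given Spot churn and ramp-up/ramp-down.
print(round(compute_hours / wall_hours))
```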
Working with DNAnexus and Amazon Web Services, we were able to rapidly deploy a cloud-based solution that allows us to scale up our support to researchers at the HGSC and make our Mercury pipeline analysis data accessible to the CHARGE Consortium, enabling what will be the largest genomic analysis project ever to have taken place in the cloud.