The Pervasive Technology Institute at Indiana University has received this grant to create NSF’s first science and engineering research cloud, Jetstream.
Since 2011, ONS (responsible for planning and operating the Brazilian Electric Sector) has been using AWS to run daily simulations based on complex mathematical models. The MIT StarCluster toolkit makes running HPC on AWS much less complex and lets ONS provision a high-performance cluster in less than 5 minutes.
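To illustrate how little setup StarCluster requires, here is a minimal sketch of a cluster definition in its `~/.starcluster/config` file. The section names, key name, instance type, and cluster size are illustrative, and the AMI ID is a placeholder you would replace with a StarCluster-compatible image:

```ini
[aws info]
; AWS credentials (placeholders)
AWS_ACCESS_KEY_ID = your_access_key
AWS_SECRET_ACCESS_KEY = your_secret_key

[key mykey]
; SSH key used to log in to cluster nodes
KEY_LOCATION = ~/.ssh/mykey.rsa

[cluster smallcluster]
; A 4-node compute cluster template
KEYNAME = mykey
CLUSTER_SIZE = 4
NODE_IMAGE_ID = ami-XXXXXXXX
NODE_INSTANCE_TYPE = c1.xlarge
```

With a template like this in place, provisioning reduces to `starcluster start smallcluster`, and returning the nodes when the run is done to `starcluster terminate smallcluster`.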
Most IaaS (infrastructure as a service) vendors such as Rackspace, Amazon and Savvis use various virtualization technologies to manage the underlying hardware they build their offerings on. Unfortunately, the virtualization technologies used vary from vendor to vendor and are sometimes kept secret. Therefore, the question of virtual machines versus physical machines for high performance computing (HPC) applications is germane to any discussion of HPC in the cloud.
Has cloud HPC finally made its way to the Missing Middle? In this slidecast, Jason Stowe from Cycle Computing describes how the company enabled HGST to spin up a 70,000-core cluster on AWS and then return it 8 hours later. “One of HGST’s engineering workloads seeks to find an optimal advanced drive head design. In layman’s terms, this workload runs 1 million simulations for designs based upon 22 different design parameters running on 3 drive media. Running these simulations using an in-house, specially built simulator, the workload takes approximately 30 days to complete on an internal cluster.”
With the Avere FXT Edge Filer, you basically get the same hardware operating in the EC2 cloud as you get from our physical appliances. Then you run our software, which has the intelligence to automatically cache the active data up in the cloud. It pulls this data either from the Amazon S3 storage cloud or from the NAS or object systems in your data center, and the goal there is to hide the latency to the storage.
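The idea described above, keeping the active working set close to the compute while slower storage sits behind it, is essentially a read-through cache with eviction. The following is a minimal illustrative sketch of that pattern, not Avere's implementation; the class name, a plain dict standing in for S3/NAS, and the LRU policy are all assumptions made for the example:

```python
from collections import OrderedDict

class ReadThroughCache:
    """Illustrative read-through LRU cache: serve hot data locally and
    fall back to a slower backing store (e.g. S3 or NAS) on a miss."""

    def __init__(self, backing_store, capacity=2):
        self.backing_store = backing_store  # dict standing in for remote storage
        self.capacity = capacity
        self.cache = OrderedDict()          # insertion order tracks recency
        self.misses = 0

    def read(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)     # fast path: mark as recently used
            return self.cache[key]
        self.misses += 1                    # slow path: fetch from backing store
        value = self.backing_store[key]
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used entry
        return value

store = {"a": 1, "b": 2, "c": 3}
cache = ReadThroughCache(store)
cache.read("a")
cache.read("b")
cache.read("a")   # hit: "a" becomes the most recently used entry
cache.read("c")   # miss: cache is full, so "b" (least recently used) is evicted
```

Repeated reads of hot keys never touch the backing store, which is the latency-hiding effect the quote describes.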
In such a demanding and dynamic HPC environment, cloud computing technologies, whether deployed as a private cloud or in conjunction with a public cloud, represent a powerful approach to managing technical computing resources. Learn how, by breaking down internal compute silos, masking underlying HPC complexity from the scientist-clinician researcher user community, and providing transparency and control to IT managers, cloud computing strategies and tools help organizations of all sizes effectively manage their HPC assets and the growing compute workloads that consume them.
“We would like to provide HPC resources and expertise to a broader business and academic community to accelerate their research and product development. We believe that the Virtual Supercomputer is more than just a technological platform – it is a tool to democratize the HPC industry. And this is how the concept of eManufacturing will become a reality,” says Dmytro Fedyukov, the CEO of Massive Solutions. “We welcome users, datacenters, universities, application developers, and experts to evaluate the beta service and join the partner alliance to make VSC a success.”
In this video, Molly Rector from DDN describes the company’s new IME Infinite Memory Engine. Recorded right after the DDN User Group Meeting at SC14, the interview found celebrations in order as Molly and Rich enjoyed a Hurricane punch. “IME is a highly transactional, resilient and reliable ‘burst buffer cache’ for High Performance Computing and Big Data. IME extracts the best performance efficiency across the I/O hierarchy, increasing system reliability multifold, while reducing Exascale I/O TCO by $100Ms.”