“Our annual ‘Budget Map’ report series looks at the relative spending across all of the products, components, and services that make up the HPC market. With six years of end user data, we get a strong grip on where the money is flowing, whether it’s on big items like clusters and storage, or on topical things like power consumption, programming, or compute cycles in public cloud. We also get a sense of future budget outlook and how the market is likely to evolve.”
The Prime Challenge launched by Microsoft Azure last November has come to an end, with a user registered as PHunterLau declared the winner for discovering a prime number more than 342,000 digits long.
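Candidates at this scale are usually screened with probabilistic primality tests before any deterministic proof is attempted. As a rough illustration (not the challenge's actual method), a minimal Miller–Rabin sketch in Python:

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin probabilistic primality test.

    Returns False for definite composites; True means "probably prime"
    (error probability at most 4**-rounds for random bases).
    """
    if n < 2:
        return False
    # Quick trial division by small primes
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)          # modular exponentiation: a**d mod n
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # a is a witness: n is composite
    return True
```

For example, `is_probable_prime(2**127 - 1)` returns True (a known Mersenne prime), while the Carmichael number 561 is correctly rejected; the digit count of a candidate `n` is simply `len(str(n))`.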
“Why not pay only for what you need, when you need it? Using our JARVICE supercomputer, we can deliver 120 teraFLOPS for less than $200 per hour. The same electromagnetic simulation costing close to $600 on the ManeFrame (in its slice of CAPEX cost alone) can be yours for only $156 total on the Nimbix cloud. And unlike the ManeFrame, it shuts off automatically when it’s done, so you don’t keep paying even when it’s idle. The $156 includes space, cooling, power, and the humans who help make sure it runs smoothly. Don’t forget that you can’t just “slice off” $600 of ManeFrame – you still have to invest $6.5 million, plus the operating costs. With JARVICE, the $156 is the total amount you pay for the example job, with no other strings attached.”
“In addition to NAS, you can also create parallel storage solutions. For example, in Amazon AWS, there are two options, one for Lustre, and one for OrangeFS (PVFS). Both use the same compute and storage instances that you use for NAS, but you create several instances that are combined to create a single file system. If you need more performance, just add more instances. If you need more capacity, just add more instances. Since this is the cloud, it’s very easy to spin up a new instance and add it to the existing storage.”
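The scaling model described above is the defining property of striped parallel file systems like Lustre and OrangeFS: both aggregate bandwidth and capacity grow roughly linearly with the number of storage instances. A minimal sketch of that relationship (the per-instance figures are illustrative assumptions, not AWS instance specs):

```python
# Hedged sketch: linear scaling of a striped parallel file system.
PER_INSTANCE_MBPS = 250   # assumed per-instance streaming throughput
PER_INSTANCE_TB = 16      # assumed per-instance usable capacity

def aggregate(instances):
    """Approximate aggregate throughput (MB/s) and capacity (TB)."""
    return instances * PER_INSTANCE_MBPS, instances * PER_INSTANCE_TB

for n in (4, 8, 16):
    mbps, tb = aggregate(n)
    print(f"{n:2d} instances -> ~{mbps} MB/s, ~{tb} TB")
```

Doubling the instance count doubles both numbers, which is why "just add more instances" answers both the performance question and the capacity question in the quote. In practice, metadata servers and network contention impose limits before perfect linearity breaks down.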
“Penguin Computing is the largest private supplier of complete high performance computing (HPC) solutions in North America, and has built and operates the leading specialized public HPC cloud service, Penguin Computing on Demand (POD). Penguin Computing also applies its core expertise in distributed large-scale enterprise computing, delivering scale-out compute, storage, virtualization, and cloud solutions for organizations looking to take advantage of modern open data center architectures.”
“There is a shift underway where researchers, engineers, and analysts can change the very way they think about problems. Previously, we have been limited by the computing resources we have — the clusters we have on premises. Today, we can change the very way we ask our questions. Ask the right questions — and use the Cloud to create the size of system needed to answer your questions.”
“With qwikLABS, users can create, manage, and run labs anytime. Labs are delivered via the public cloud to classrooms, events, or online; anywhere there is access to the Internet. qwikLABS is used by lab creators, instructors/trainers, administrators, coordinators, and students around the world. With the qwikLABS platform, users are able to create, manage, deploy, and run lab environments around the clock and around the world, and do so in a way that complements the business or educational institution’s flow of assignments, modules, classes, and courses.”
Over at HPC Magazine, Wolfgang Gentzsch and Burak Yenier write that high performance computing in the cloud is now becoming a reality. For many, getting there entails reviewing (and demystifying) the issues traditionally associated with Cloud HPC, including performance, cost, software licensing, and security.