Video: NAG Launches HPC Cost of Ownership Calculator

In this video, Mike Croucher from NAG demonstrates the company’s new Total Cost of Ownership Calculator for HPC. “Should your next HPC procurement be on-premise or in the cloud? This is one of the questions that our clients ask us to help with and part of the answer involves Total Cost of Ownership of the resulting facility. This calculator is provided as a working example of a TCO model.”
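
For a sense of what such a model computes, here is a minimal sketch, using illustrative figures rather than NAG's own inputs: it amortizes an on-premises system's purchase and operating costs over its usable core-hours and compares the result with a cloud price per core-hour. NAG's calculator models many more inputs (power, cooling, staff, depreciation schedules, and so on).

```python
# Minimal TCO sketch. All numbers are illustrative assumptions,
# not NAG's figures.

def on_prem_cost_per_core_hour(
    capex: float,           # hardware purchase price (USD)
    annual_opex: float,     # power, cooling, staff, support (USD/year)
    lifetime_years: float,  # depreciation period
    cores: int,
    utilization: float,     # fraction of core-hours actually used
) -> float:
    """Amortize total cost of ownership over usable core-hours."""
    total_cost = capex + annual_opex * lifetime_years
    usable_core_hours = cores * 24 * 365 * lifetime_years * utilization
    return total_cost / usable_core_hours

on_prem = on_prem_cost_per_core_hour(
    capex=2_000_000, annual_opex=300_000,
    lifetime_years=5, cores=10_000, utilization=0.8,
)
cloud = 0.05  # assumed cloud on-demand price per core-hour (USD)

print(f"on-prem: ${on_prem:.4f}/core-hour")
print(f"cloud:   ${cloud:.4f}/core-hour")
print("cloud wins" if cloud < on_prem else "on-prem wins")
```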

Cornell Investigates Multi-cloud Cost Management with RightScale

The Cornell University Center for Advanced Computing (CAC) is collaborating with RightScale, recently acquired by Flexera, to understand how to best manage and optimize costs in a multi-cloud world. “Universities and research facilities are beginning to recognize that cloud management platforms (CMPs) are a useful tool for monitoring and controlling research expenditures, particularly as scientists seek different clouds for different capabilities,” said David A. Lifka, vice president for information technologies and CIO at Cornell. “By working with the Optima team, we’re adding to our current CMP experience and gaining knowledge on how to effectively address the unique needs of research computing and education users.”

HPC Breaks Through to the Cloud: Why It Matters

In this special guest feature, Scot Schultz from Mellanox writes that researchers are benefiting in a big way from HPC in the cloud. “HPC has many different advantages depending on the specific use case, but one aspect that these implementations have in common is their use of RDMA-based fabrics to improve compute performance and reduce latency.”

Micron Steps Up with NVMe SSDs

Micron recently unveiled its new series of flagship solid-state drives (SSDs) featuring the NVMe protocol, bringing industry-leading storage performance at higher capacities to cloud and enterprise computing markets. The Micron 9300 series of NVMe SSDs enables companies with data-intensive applications to access and process data faster, helping reduce response time.

Is Ubiquitous Cloud Bursting on the Horizon for Universities?

In this special guest feature from Scientific Computing World, Mahesh Pancholi from OCF writes that a growing number of universities are taking advantage of the public cloud infrastructures widely available from large companies like Amazon, Google, and Microsoft. “Public cloud providers are surveying the market and partnering with companies, like OCF, for their pedigree in providing solutions to the UK Research Computing community, in order to help universities take advantage of their products by integrating them with existing infrastructure such as HPC clusters.”

Personalized Healthcare with High Performance Computing in the Cloud

Wolfgang Gentzsch from the UberCloud gave this talk at the HPC User Forum. “The concept of personalized medicine has its roots deep in genomic research. Indeed, the successful completion of the Human Genome Project in 2003 marked a critical milestone for the field. That project took $3 billion over 13 years. Today, thanks to technological progress, a similar sequencing task would take only about $4,000 and a few weeks. Such computational power is possible thanks to cloud technology, which eliminates the barriers to high-performance computing by removing software and hardware constraints.”

Velocity Compute: PeerCache for HPC Cloud Bursting

In this podcast, Eric Thune from Velocity Compute describes how the company’s PeerCache software optimizes data flow for HPC cloud bursting. “By using PeerCache to deliver hybrid cloud bursting, development teams can quickly extend their existing on-premises compute to burst into the cloud for elastic compute power. Your on-premises workflows will run identically in the cloud, without the need for retooling, and the workflow is then moved back to your on-premises servers until the next time you have a peak load.”
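
PeerCache’s internals are not described here, so the following is a generic sketch of the burst-and-return lifecycle the quote describes, not Velocity Compute’s actual API; the capacities, thresholds, and function names are all hypothetical.

```python
# Generic cloud-bursting sketch: add cloud nodes when the job backlog
# exceeds on-premises capacity by a margin, and release them once the
# backlog fits on-premises again. All values are hypothetical.

ON_PREM_SLOTS = 100    # assumed on-premises job slots
BURST_THRESHOLD = 1.5  # burst when demand exceeds 150% of capacity

def reconcile(queued_jobs: int, cloud_nodes: int) -> int:
    """Return the desired number of cloud nodes for the current backlog."""
    if queued_jobs > ON_PREM_SLOTS * BURST_THRESHOLD:
        # Peak load: extend on-prem capacity into the cloud.
        overflow = queued_jobs - ON_PREM_SLOTS
        return max(cloud_nodes, overflow)
    if queued_jobs <= ON_PREM_SLOTS:
        # Backlog fits on-premises again: release the burst capacity.
        return 0
    return cloud_nodes  # in between: hold steady

# Example: demand spikes, then subsides.
nodes = 0
for demand in (80, 200, 160, 90):
    nodes = reconcile(demand, nodes)
    print(f"queued={demand:3d} -> cloud nodes={nodes}")
```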

Video: High Performance Computing on the Google Cloud Platform

“High performance computing is all about scale and speed. And when you’re backed by Google Cloud’s powerful and flexible infrastructure, you can solve problems faster, reduce queue times for large batch workloads, and relieve compute resource limitations. In this session, we’ll discuss why GCP is a great platform to run high-performance computing workloads. We’ll present best practices, architectural patterns, and how PSO can help your journey. We’ll conclude by demoing the deployment of an autoscaling batch system in GCP.”
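
As a rough illustration of the policy behind such an autoscaling batch system, a worker pool can be sized in proportion to its task backlog and clamped to fixed bounds. The session’s demo presumably builds this on GCP primitives such as managed instance groups; the numbers and names below are assumptions for illustration only.

```python
# Minimal autoscaling sketch for a batch worker pool: scale the pool in
# proportion to the pending-task backlog, clamped to [MIN, MAX].
# Every figure here is an assumption, not from the GCP session.

TASKS_PER_WORKER = 4  # assumed tasks one worker handles concurrently
MIN_WORKERS = 1
MAX_WORKERS = 50

def desired_workers(pending_tasks: int) -> int:
    """Proportional scaling: just enough workers to cover the backlog."""
    needed = -(-pending_tasks // TASKS_PER_WORKER)  # ceiling division
    return max(MIN_WORKERS, min(MAX_WORKERS, needed))

for backlog in (0, 10, 100, 400):
    print(f"{backlog:3d} pending tasks -> {desired_workers(backlog)} workers")
```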

Best Practices for Building, Deploying & Managing HPC Clusters

In today’s markets, a successful HPC cluster can be a formidable competitive advantage, and many organizations are turning to these tools to stay ahead. That said, these systems are inherently complex and must be built, deployed, and managed properly to realize their full potential. A new report from Bright Computing explores best practices for HPC clusters.

Podcast: Rescale Powers Innovation in Antenna Design

In this Big Compute podcast, Gabriel Broner hosts Mike Hollenbeck, founder and CTO at Optisys, a startup that is changing the antenna industry. Using HPC in the cloud and 3D printing, Optisys designs customized antennas that are much smaller, lighter, and higher-performing than traditional antennas.