
Sponsored Post: Intel Cloud Edition Available for Lustre Software


“HPC cluster performance is often degraded because more and more data and larger files overwhelm limited hard drive capacity. But if you use Amazon Web Services (AWS), such bottlenecks may be a thing of the past. Intel, in collaboration with AWS, offers a Cloud Edition for Lustre Software that allows customers to use the power of the world’s most popular HPC storage system to increase scalability and performance.”
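As a minimal sketch of what spinning up such a cluster might look like, the snippet below launches an AWS CloudFormation stack with boto3. The template URL, stack name, and parameter names are illustrative placeholders and are not taken from the Intel Cloud Edition for Lustre product itself.

```python
# Minimal sketch: launching a Lustre cluster on AWS from a CloudFormation
# template using boto3. The template URL, stack name, and parameter names are
# placeholders, not the actual Intel Cloud Edition for Lustre values.
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

response = cfn.create_stack(
    StackName="demo-lustre-cluster",                          # hypothetical stack name
    TemplateURL="https://example.com/lustre-template.json",   # placeholder template
    Parameters=[
        {"ParameterKey": "KeyName", "ParameterValue": "my-ssh-key"},   # assumed parameter
        {"ParameterKey": "OSSCount", "ParameterValue": "4"},           # assumed parameter
    ],
    Capabilities=["CAPABILITY_IAM"],
)
print("Stack creation started:", response["StackId"])
```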

insideHPC Guide to Virtualization, Cloud and HPC


Over the past several years, virtualization has made major inroads into enterprise IT infrastructures. And now it is moving into the realm of high performance computing (HPC), especially for such compute-intensive applications as electronic design automation (EDA), life sciences, financial services, and digital media entertainment. This article is the first in a series that explores the benefits the HPC community can achieve by adopting proven virtualization and cloud technologies.

Case Study: Designing a High Performance Lustre Storage System


Intel’s White Paper, “Architecting a High-Performance Storage System,” shows you the step-by-step process of designing a Lustre file system. “Because performance is limited by the slowest component, the design methodology uses a pipeline approach to select and review each part, making sure the system requirements are met. By starting with the backend disk storage, gradually working up the pipeline to the client and employing an iterative design method, the paper shows you how a Lustre file system is created.”
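As a rough illustration of that pipeline approach, the sketch below starts from per-disk bandwidth and works up to the number of OSS nodes needed. All of the figures (disk bandwidth, RAID efficiency, per-OSS network limit, target throughput) are assumptions made for the example, not values from the Intel white paper.

```python
# Illustrative sizing sketch for a Lustre pipeline design. All figures are
# assumptions for the example, not values from the Intel white paper.
import math

TARGET_BANDWIDTH_GBS = 20.0   # desired aggregate file-system throughput, GB/s
DISK_BANDWIDTH_MBS = 150.0    # sustained streaming bandwidth per disk, MB/s
DISKS_PER_OST = 10            # e.g. a RAID-6 (8+2) volume behind each OST
RAID_EFFICIENCY = 0.8         # usable fraction of raw disk bandwidth
OSTS_PER_OSS = 4              # object storage targets served by each OSS
OSS_NETWORK_LIMIT_GBS = 5.0   # per-OSS network/HBA ceiling, GB/s

# Start at the back end: what can one OST volume actually deliver?
ost_bw_gbs = DISKS_PER_OST * DISK_BANDWIDTH_MBS * RAID_EFFICIENCY / 1000.0

# Move up the pipeline: an OSS delivers the lesser of its OSTs and its network.
oss_bw_gbs = min(OSTS_PER_OSS * ost_bw_gbs, OSS_NETWORK_LIMIT_GBS)

# Finally, how many OSS nodes does the target bandwidth require?
oss_count = math.ceil(TARGET_BANDWIDTH_GBS / oss_bw_gbs)

print(f"Per-OST bandwidth:  {ost_bw_gbs:.2f} GB/s")
print(f"Per-OSS bandwidth:  {oss_bw_gbs:.2f} GB/s (slowest component wins)")
print(f"OSS nodes required: {oss_count}")
```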

Cray CS300-LC Cluster: Why Warm Water is the New “Cool”


With the rise of manycore processors, double-dense blade form factors, and wider and deeper cabinets, the size and density of HPC systems have grown more than 300 percent since 1999. This high density of “heat offenders” requires a much more efficient method of temperature control than is possible with air cooling. And while liquid cooling is generally more efficient, not all liquid alternatives are created equal.

New Paper: Toward Exascale Resilience – 2014 update


The all-new Journal of Supercomputing Frontiers and Innovations has published a new paper entitled Toward Exascale Resilience – 2014 Update. Written by Franck Cappello, Al Geist, William Gropp, Sanjay Kale, Bill Kramer, and Marc Snir, the paper surveys what the community has learned in the past five years and summarizes the research problems still considered critical by the HPC community.

IBM Platform Computing Offers Cloud Service


Life sciences, finance, government, and numerous other organizations rely on their HPC clusters for daily operations. But how can you scale this type of environment effectively? Learn how cloud services offer you dynamic control over both workloads and resources.

Software Defined Storage for Dummies


If you work with big data in the cloud or deal with structured and unstructured data for analytics, you need software-defined storage. It runs on standard compute, network, and storage hardware, with all of the storage functions implemented in software.

Managing a Hadoop Cluster


Hadoop configuration and management are very different from those of HPC clusters. Develop a method to easily deploy, start, stop, and manage a Hadoop cluster to avoid costly delays and configuration headaches. Hadoop clusters have more “moving software parts” than HPC clusters; any Hadoop installation should fit into an existing cluster provisioning and monitoring environment and not require administrators to build Hadoop systems from scratch. Learn about managing a Hadoop cluster from the insideHPC article series on Successful HPC Clusters.
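As a minimal sketch of what such scripted control could look like when hooked into an existing provisioning environment, the wrapper below calls the stock Hadoop start/stop scripts. It assumes a standard Hadoop 2.x/3.x layout with HADOOP_HOME set; the install path and the wrapping approach are illustrative assumptions, not part of the insideHPC series.

```python
# Minimal sketch of scripted Hadoop cluster control. Assumes a standard
# Hadoop 2.x/3.x layout; the install path is an illustrative assumption.
import os
import subprocess

HADOOP_HOME = os.environ.get("HADOOP_HOME", "/opt/hadoop")  # assumed install path

def run(script: str) -> None:
    """Invoke one of the stock Hadoop sbin scripts and fail loudly on error."""
    subprocess.run([os.path.join(HADOOP_HOME, "sbin", script)], check=True)

def start_cluster() -> None:
    run("start-dfs.sh")    # HDFS: NameNode, DataNodes, SecondaryNameNode
    run("start-yarn.sh")   # YARN: ResourceManager, NodeManagers

def stop_cluster() -> None:
    run("stop-yarn.sh")
    run("stop-dfs.sh")

def hdfs_report() -> str:
    """Return the HDFS capacity/health report for monitoring hooks."""
    out = subprocess.run(
        [os.path.join(HADOOP_HOME, "bin", "hdfs"), "dfsadmin", "-report"],
        check=True, capture_output=True, text=True,
    )
    return out.stdout

if __name__ == "__main__":
    start_cluster()
    print(hdfs_report())
```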

Trending Towards Ultra-Dense Servers


In late 2010 and throughout 2011, we noticed a shift in the HPC market as new workloads such as digital media, various financial services applications, new life sciences applications, on-demand cloud computing services, and analytics workloads made their way onto HPC servers. We are now seeing another new trend developing in the HPC space with the introduction of ultra-dense servers.

Preparing for HPC Cloud Computing


Make sure you use Cloud services that are designed for HPC applications, including high-bandwidth, low-latency networking, exclusive node use, and high-performance compute and storage capabilities for your application set. Develop a flexible, fast Cloud provisioning scheme that mirrors your local systems as closely as possible and is integrated with the existing workload manager. An ideal solution lets your existing cluster be seamlessly extended into the Cloud and managed and monitored in the same way as local clusters. Read more from the insideHPC Guide to Managing HPC Clusters.
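As a hedged sketch of provisioning HPC-friendly cloud nodes, the snippet below launches EC2 instances inside a cluster placement group for low-latency node-to-node networking. AWS is just one example provider, and the AMI ID, instance type, key name, and group name are placeholders, not recommendations from the insideHPC guide.

```python
# Illustrative sketch: provisioning HPC-oriented cloud nodes on AWS with boto3.
# The AMI ID, instance type, key name, and group name are placeholders; AWS is
# used only as an example of a provider with low-latency "cluster" placement.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A cluster placement group packs instances onto nearby hardware for
# high-bandwidth, low-latency node-to-node networking.
ec2.create_placement_group(GroupName="hpc-burst", Strategy="cluster")

resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI mirroring the local node image
    InstanceType="c5n.18xlarge",       # network-optimized instance type (example)
    MinCount=4,
    MaxCount=4,
    KeyName="my-ssh-key",              # placeholder key pair
    Placement={"GroupName": "hpc-burst"},
)
instance_ids = [i["InstanceId"] for i in resp["Instances"]]
print("Launched compute nodes:", instance_ids)
# These nodes would then be registered with the site's existing workload
# manager so jobs can burst into the cloud transparently.
```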