If you open the back of today’s HPC cluster, you will see lots of cables: power, Ethernet, InfiniBand, Fibre Channel, KVM, and others. This tangle creates the need for complex configuration and administration.
Hadoop configuration and management is very different from that of HPC clusters. Develop a method to easily deploy, start, stop, and manage a Hadoop cluster to avoid costly delays and configuration headaches. Hadoop clusters have more “moving software parts” than HPC clusters; any Hadoop installation should fit into an existing cluster provisioning and monitoring environment rather than require administrators to build Hadoop systems from scratch. Learn more about managing a Hadoop cluster from the insideHPC article series on Successful HPC Clusters.
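To illustrate one small piece of such a method, the sketch below orders Hadoop daemon start-up and shutdown by dependency, so HDFS services come up before YARN services and shutdown happens in reverse. The daemon names are real Hadoop services, but the manager itself and its dependency map are illustrative assumptions, not part of any particular provisioning tool.

```python
# Illustrative sketch: start/stop Hadoop daemons in dependency order.
# The dependency map below is an assumption for the example, not a
# canonical Hadoop configuration.
DEPENDS_ON = {
    "namenode": [],
    "datanode": ["namenode"],
    "resourcemanager": ["namenode"],
    "nodemanager": ["resourcemanager", "datanode"],
}

def start_order(deps):
    """Return a start order that respects dependencies (topological sort)."""
    order, seen = [], set()
    def visit(svc):
        if svc in seen:
            return
        seen.add(svc)
        for dep in deps[svc]:
            visit(dep)
        order.append(svc)
    for svc in deps:
        visit(svc)
    return order

def stop_order(deps):
    """Stop services in the reverse of the start order."""
    return list(reversed(start_order(deps)))
```

A real deployment tool would invoke the actual service scripts at each step; the point of the sketch is that start/stop ordering should be deterministic and derived from declared dependencies, not ad hoc.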
Make sure you use Cloud services that are designed for HPC applications, including high-bandwidth, low-latency networking, exclusive node use, and high-performance compute/storage capabilities for your application set. Develop a flexible, quick Cloud provisioning scheme that mirrors your local systems as much as possible and is integrated with the existing workload manager. An ideal solution is one in which your existing cluster can be seamlessly extended into the Cloud and managed/monitored in the same way as local clusters. Read more in the insideHPC Guide to Managing HPC Clusters.
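One common way to achieve this kind of workload-manager integration is the elastic "cloud node" support in Slurm, which provisions instances on demand and releases them when idle. The fragment below is a sketch of the relevant slurm.conf settings; the script paths, node names, and node sizes are placeholders for this example.

```
# Sketch of slurm.conf settings for elastic cloud bursting (Slurm's
# power-saving interface). Paths and node definitions are placeholders.
ResumeProgram=/usr/local/sbin/cloud_resume     # provisions instances on demand
SuspendProgram=/usr/local/sbin/cloud_suspend   # tears down idle instances
SuspendTime=600        # seconds idle before a cloud node is suspended
ResumeTimeout=900      # seconds allowed for an instance to boot and register

NodeName=cloud[001-064] State=CLOUD CPUs=16 RealMemory=64000
PartitionName=burst Nodes=cloud[001-064] State=UP
```

Because cloud nodes appear in the same partitions and accounting as local nodes, jobs can burst into the Cloud without users changing how they submit work.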
Heterogeneous hardware is now present in virtually all clusters. Make sure you can monitor all hardware on all installed clusters in a consistent fashion. With extra work and expertise, some open source tools can be customized for this task, but there are few versatile and robust tools with a single comprehensive GUI or CLI interface that can consistently manage all popular HPC hardware and software. Any monitoring solution should not interfere with HPC workloads.
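The core of consistent monitoring across heterogeneous hardware is mapping each vendor's metric names onto one common schema before display. The sketch below shows that idea in miniature; the vendor field names are hypothetical examples, not drawn from any real tool.

```python
# Illustrative sketch: normalize metrics from heterogeneous nodes into one
# common schema so a single GUI/CLI can display them consistently.
# The vendor-specific field names here are hypothetical.

def normalize(raw):
    """Map vendor-specific metric names onto a common schema."""
    aliases = {
        "temp_c":       ("temp_c", "cpu_temp", "ThermalZone0"),
        "mem_used_mb":  ("mem_used_mb", "memory_used", "MemUsedMB"),
        "gpu_util_pct": ("gpu_util_pct", "gpu_utilization"),
    }
    record = {}
    for common, names in aliases.items():
        for name in names:
            if name in raw:
                # First matching alias wins; values coerced to float.
                record[common] = float(raw[name])
                break
    return record
```

With such a layer in place, a dashboard only ever sees `temp_c` or `mem_used_mb`, no matter which vendor's agent reported the sample.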
Smaller clusters often overload a single server with multiple services, such as file serving, resource scheduling, and monitoring/management. While this approach may work for systems with fewer than 100 nodes, these services can overload the cluster network or the single server as the cluster grows. The insideHPC Guide shows a plan for scalable HPC cluster growth.
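A back-of-envelope calculation shows why a single head node stops scaling: monitoring traffic alone grows linearly with node count. All the figures below (metrics per node, bytes per metric, reporting interval) are illustrative assumptions.

```python
# Back-of-envelope sketch: aggregate monitoring traffic arriving at a single
# head node as the cluster grows. All parameter defaults are assumptions.

def monitoring_mbps(nodes, metrics_per_node=200, bytes_per_metric=64,
                    interval_s=10):
    """Approximate inbound monitoring bandwidth in Mbit/s."""
    bytes_per_s = nodes * metrics_per_node * bytes_per_metric / interval_s
    return bytes_per_s * 8 / 1e6
```

Under these assumptions a 100-node cluster generates only about 1 Mbit/s of monitoring traffic, but the load scales linearly, and that is before the same server also handles file serving and scheduling, which is why larger systems split these services across dedicated nodes.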
HPC systems rely on large amounts of complex software, much of which is freely available. There is an assumption that because the software is “freely available,” there are no associated costs. This is a dangerous assumption. There are real configuration, administration, and maintenance costs associated with any type of software (open or closed).
The basic HPC cluster consists of at least one management/login node connected to a network of many worker nodes. Depending on the size of the cluster, there may be multiple management nodes used to run cluster-wide services, such as monitoring, workflow, and storage services. This insideHPC article series looks at the Five Essential Strategies for Managing HPC Clusters.
Current trends in HPC clustering include software complexity, cluster growth and scalability, system heterogeneity, Cloud computing, and the introduction of Hadoop services. Without a cogent strategy to address these issues, system managers and administrators can expect less-than-ideal performance and utilization. There are many component tools and best practices to be found throughout the industry. To help our audience build and manage successful HPC clusters, the editors of insideHPC have created this article series, “Five Essential Strategies for Successful HPC Clusters.”