There is a big push to decrease the complexity of setting up and managing HPC clusters in the data center. The IBM webinar “Bare Metal To Application Ready is Less Than a Day” provides excellent tips for preparing for and managing the complexity of an HPC cluster.
As design challenges become more complex and time-to-market windows shrink, it is important to understand how to use a cluster for simulation rather than just a single node. “HPC Clusters Drive Design Optimization” is an excellent introduction to getting the most out of a compute cluster.
Clusters FOR DUMMIES is an excellent read for experienced and novice administrators, users, purchasing departments, and developers alike who are considering purchasing or specifying a cluster.
Learn how a system built on Cisco UCS using Bright Cluster Manager is a complete HPC Ethernet solution that frees the IT manager, administrator, and users to focus on HPC results instead of managing a complex collection of hardware and software.
Tighter budgets and a stricter regulatory climate are dictating the need for smaller product envelopes and new material choices. Engineers are tasked with meeting these demands against a backdrop of fewer resources and shrinking time-to-market cycles. Learn how advanced simulation software can dramatically shorten the design phase by allowing engineers to virtually optimize and validate new ideas earlier in the process, minimizing the expense of building physical prototypes and streamlining real-world testing.
Engineers are being asked to do more in less time to meet ever-tightening time-to-market schedules. To do so, they need to accelerate design by making use of advanced engineering software. However, such software requires processing power not available in a typical engineering workstation. Learn how a cluster can deliver aggregated computing power from its many multi-core processors to meet the demands of more complex engineering software, and therefore deliver results faster than individual workstations.
Make sure you use Cloud services that are designed for HPC applications, including high-bandwidth, low-latency networking, exclusive node use, and high-performance compute and storage capabilities for your application set. Develop a flexible, fast Cloud provisioning scheme that mirrors your local systems as much as possible and is integrated with the existing workload manager. An ideal solution is one in which your existing cluster can be seamlessly extended into the Cloud and managed and monitored in the same way as local clusters. Read more in the insideHPC Guide to Managing HPC Clusters.
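As one concrete illustration of extending a local cluster into the Cloud under the existing workload manager, Slurm supports elastic "cloud" nodes that are provisioned on demand and torn down when idle. The node names, counts, and script paths below are hypothetical; this is a sketch of a slurm.conf fragment, not a complete configuration:

```
# slurm.conf fragment (sketch) -- elastic Cloud extension of a local cluster.
# Node ranges and the resume/suspend script paths are hypothetical examples.

# Local, always-on nodes
NodeName=node[001-100] CPUs=32 State=UNKNOWN

# Cloud nodes: instantiated on demand, released when idle
NodeName=cloud[001-050] CPUs=32 State=CLOUD

# Site-provided scripts that provision and terminate cloud instances
ResumeProgram=/opt/site/bin/cloud_resume.sh
SuspendProgram=/opt/site/bin/cloud_suspend.sh
ResumeTimeout=600      # seconds allowed for a cloud node to come up
SuspendTime=300        # idle seconds before a cloud node is released

PartitionName=local Nodes=node[001-100] Default=YES
PartitionName=burst Nodes=cloud[001-050] MaxTime=24:00:00
```

Because the cloud nodes appear as ordinary Slurm nodes, jobs, accounting, and monitoring work the same way on both local and burst partitions.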
Heterogeneous hardware is now present in virtually all clusters. Make sure you can monitor all hardware on all installed clusters in a consistent fashion. With extra work and expertise, some open source tools can be customized for this task. There are few versatile and robust tools with a single comprehensive GUI or CLI interface that can consistently manage all popular HPC hardware and software. Any monitoring solution should not interfere with HPC workloads.
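A key piece of consistent monitoring across heterogeneous hardware is normalizing vendor-specific metric names into one common schema before display or alerting. The sketch below illustrates the idea; the vendor names, field names, and sample readings are hypothetical:

```python
# Sketch: map metrics from heterogeneous node types onto a common schema,
# so dissimilar hardware can be monitored consistently.
# Vendor names, field names, and readings are hypothetical examples.

def normalize(raw: dict) -> dict:
    """Translate vendor-specific metric names into common field names."""
    mappings = {
        "vendor_a": {"cpu_temp_c": "temp_c", "mem_used_kb": "mem_kb"},
        "vendor_b": {"temperature": "temp_c", "memory_kib": "mem_kb"},
    }
    mapping = mappings[raw["vendor"]]
    common = {"node": raw["node"]}
    for src, dst in mapping.items():
        common[dst] = raw[src]
    return common

# Readings as two different node types might report them
samples = [
    {"vendor": "vendor_a", "node": "n001", "cpu_temp_c": 61, "mem_used_kb": 4096},
    {"vendor": "vendor_b", "node": "gpu01", "temperature": 74, "memory_kib": 8192},
]

unified = [normalize(s) for s in samples]
for rec in unified:
    print(rec["node"], rec["temp_c"], rec["mem_kb"])
```

A lightweight translation layer like this is also cheap to run, which matters because any monitoring solution should not interfere with HPC workloads.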
Smaller clusters often overload a single server with multiple services such as file serving, resource scheduling, and monitoring/management. While this approach may work for systems with fewer than 100 nodes, these services can overload the cluster network or the single server as the cluster grows. The insideHPC Guide shows a plan for scalable HPC cluster growth.
HPC systems rely on large amounts of complex software, much of which is freely available. There is an assumption that because the software is “freely available,” there are no associated costs. This is a dangerous assumption. There are real configuration, administration, and maintenance costs associated with any type of software (open or closed).