Supporting Diverse HPC Workloads on a Single Cluster

This sponsored post from Intel explores how Plymouth University’s High Performance Computer Centre (HPCC) used Intel HPC Orchestrator to support diverse workloads as it recently deployed a new 1,500-core cluster. 

High Performance Computing (HPC) is extending its reach into new areas. Not only are modeling and simulation being used more widely, but deep learning and other high performance data analytics (HPDA) applications are becoming essential tools across many disciplines. This expansion is putting new pressures on HPC systems and system administrators. They must be able to support more user groups and a wider range of workloads.

Plymouth University’s High Performance Computer Centre (HPCC), for example, recently deployed a new 1,500-core cluster that will be used to support theoretical physics models that are extremely compute- and fabric-intensive, as well as marine research simulations that are more dependent on storage performance. The cluster will also support projects in genetics, biomedicine, healthcare, mathematics, neuroscience, cyber-security, and more.

Intel HPC Orchestrator takes the complexity out of managing system software for an HPC cluster with a pre-integrated software stack that is tested, validated at scale, and fully supported.

Powerful Hardware Requires Flexible Software

To address the need for a balanced hardware infrastructure that can efficiently run a wide range of workloads, the university chose compute, fabric, and storage solutions that are part of the Intel Scalable System Framework (Intel SSF). Of course, the system software must also be compatible with the projected applications, and managing middleware for an HPC cluster is a complicated endeavor. Dozens of individual components are required, and it takes deep, multidisciplinary expertise to evaluate the options and select the right mix. Compatible versions and builds must be selected, then integrated, tested, and validated across the application portfolio, and the process must be repeated every time a component is upgraded or patched.

Even HPC experts find this task burdensome. That includes Dr. Antonio Rago, a theoretical physicist and the academic lead for the University of Plymouth HPCC. To simplify it, Dr. Rago turned to Intel HPC Orchestrator, a pre-integrated, pre-validated, and Intel-supported system software stack based on the Linux Foundation's OpenHPC project.

Intel tests and validates Intel HPC Orchestrator in clusters up to 2,000 nodes and in combination with Intel compute, network, and storage solutions. Intel also tracks changes and security alerts across all components, pushes critical patches, provides professional support, and issues quarterly updates with new features and bug fixes. Additionally, Intel HPC Orchestrator includes a number of proprietary components that provide enhanced capabilities for developing optimized applications and for monitoring, managing, and troubleshooting the cluster.

Reduced Overhead and a New Level of Agility

The new software strategy provided immediate value for the HPCC. According to Dr. Rago, “It took two to three weeks to manually configure the software for our previous cluster. Using Intel HPC Orchestrator, I did it in just two days. Aside from a few additions and relatively minor fixes, everything went straight into place.”

The long-term benefits may be even greater. “Intel HPC Orchestrator makes it easy to create and manage multiple software images, and to quickly reconfigure the cluster,” says Rago. “This can be a big advantage. For example, multi-user management is challenging when running Hadoop, so I created separate images for two different user groups. I just reinstall the software. It’s simple.”
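As a rough sketch of the kind of image management Dr. Rago describes: OpenHPC-based stacks such as Intel HPC Orchestrator typically provision stateless compute nodes with the Warewulf toolkit, so each user group can be given its own chroot-based image and a set of nodes can be switched between images by reassigning and rebooting. The image names, paths, and node ranges below are illustrative assumptions, not details from the Plymouth deployment.

```shell
# Hypothetical sketch: maintaining two Warewulf images for two user
# groups on an OpenHPC-style cluster (names and paths are illustrative).

CHROOT_A=/opt/ohpc/admin/images/compute   # simulation-focused group
CHROOT_B=/opt/ohpc/admin/images/hadoop    # Hadoop user group

# Build a base OS image (chroot) for each group
wwmkchroot centos-7 $CHROOT_A
wwmkchroot centos-7 $CHROOT_B

# Install group-specific packages into the Hadoop chroot
yum -y --installroot=$CHROOT_B install java-1.8.0-openjdk

# Package each chroot into a bootable VNFS image
wwvnfs --chroot $CHROOT_A
wwvnfs --chroot $CHROOT_B

# Reassign a block of nodes to the Hadoop image; they pick it up on reboot
wwsh provision set "node[17-32]" --vnfs=hadoop
```

Because the images live side by side on the provisioning server, reconfiguring the cluster for a different user group is a matter of reassignment and reboot rather than a manual reinstall on every node.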

Faster Development of Fast Applications

Dr. Rago also takes advantage of the software development tools in Intel HPC Orchestrator. In his research, he uses a custom application called Hirep that runs lattice gauge models to explore possible extensions to the Standard Model of elementary particles. His models are extremely compute-intensive, so Dr. Rago uses Intel compilers and performance analysis tools to optimize his code. He finds that the time invested is more than compensated by faster runtimes.

To date, the performance of the Intel hardware and the simplicity and flexibility of Intel HPC Orchestrator are making a real difference for the University of Plymouth. Says Dr. Rago, “It’s as though we’re scaling up a single computer into a large cluster, with no additional overhead.”

The HPCC also benefits from having a software stack that is an integral component of Intel SSF and is continually validated on the latest Intel hardware, such as Intel Xeon and Intel Xeon Phi processors and Intel Omni-Path Architecture. In today’s fast-changing HPC environments, that’s a foundation for success.
