Three Questions to Ensure Your HPC Success

Successful HPC depends on choosing an architecture that addresses both application and institutional needs. In particular, finding a simple path to leading-edge HPC and Data Analytics is not difficult if you consider the capabilities and limitations of the various approaches in terms of performance, scaling, ease of use, and time to solution. Careful analysis of the following questions will help lead to a successful and cost-effective HPC solution.

1. Do you need to Scale Up or Scale Out?

There are several things to consider when answering this question. First, do you have applications that require (or run best on) a scale-up coherent shared memory (CSM) platform? Scale-up systems offer larger memory footprints, more flexibility, easier programming, and better utilization than scale-out systems. Remember that all scale-out cluster applications (MPI jobs) also run on a scale-up system.
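
As a minimal sketch of this point (the code below is illustrative, not taken from the guide), a standard MPI program is written and launched the same way whether its ranks are spread across cluster nodes or packed onto the cores of a single shared-memory system:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank     */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks   */

        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();
        return 0;
    }

Built with the usual MPI wrappers (for example, mpicc hello_mpi.c -o hello_mpi, then mpirun -np 64 ./hello_mpi), the same binary runs unchanged on a scale-up machine; the only difference is that all 64 ranks share one node's memory.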

Second, how much extra work (or training) can you place on your system administrators? A scale-up machine like the SGI UV line operates as one large server with a single OS instance. Cluster-based solutions involve many discrete OS instances and therefore carry significantly more administrative overhead and complexity.

2. Where are the data generated (local or cloud)?

Answering this question is not difficult. If data are generated and live in the cloud, then processing in the cloud makes the most sense. If, however, data files are local and large, local processing is the most effective approach.
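
A rough, hypothetical calculation makes the point. The sketch below (all numbers are assumptions for illustration) estimates how long it takes just to move a large local dataset to the cloud before any processing can start:

    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical example: 10 TB of locally generated data
           pushed to the cloud over a sustained 1 Gb/s WAN link. */
        double data_tb   = 10.0;                    /* dataset size in terabytes   */
        double link_gbps = 1.0;                     /* sustained link rate in Gb/s */

        double data_bits = data_tb * 1e12 * 8.0;    /* terabytes -> bits           */
        double seconds   = data_bits / (link_gbps * 1e9);

        printf("Transfer time: %.1f hours\n", seconds / 3600.0);
        return 0;
    }

At roughly 22 hours for the transfer alone, processing the data where they were generated is clearly the faster path; the same reasoning, in reverse, favors cloud processing for data that already live in the cloud.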

3. How quickly do you need your discovery?

This final question depends on several issues. First, consider the time it takes to adapt applications to a scale-out cluster. A scale-up CSM system can run all existing software right away (including software that uses accelerators) and easily provides more memory when needed. For example, if user applications compile and run on other Linux-based systems, they will compile and run on an SGI UV system with minimal effort.

Second, writing new applications (or modifying existing ones) for a scale-up system is also simpler than targeting a scale-out cluster. Users can remain focused on the application science rather than on managing communications and data placement across a cluster.
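
As a sketch of that difference (an illustrative example assuming a standard OpenMP tool chain, not anything specific to the guide), the loop below runs in parallel on a shared-memory system with every thread reading and writing the same arrays; there is no message passing, halo exchange, or explicit data placement to manage:

    #include <omp.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define N 100000000L   /* 100 million elements in one shared address space */

    int main(void)
    {
        double *a = malloc(N * sizeof(double));
        double *b = malloc(N * sizeof(double));
        double sum = 0.0;

        /* Every thread sees the same arrays; no communication code is needed. */
        #pragma omp parallel for reduction(+:sum)
        for (long i = 0; i < N; i++) {
            a[i] = 0.5 * i;
            b[i] = 2.0 * a[i];
            sum += b[i];
        }

        printf("sum = %e using up to %d threads\n", sum, omp_get_max_threads());
        free(a);
        free(b);
        return 0;
    }

Because this is ordinary Linux code, it compiles with a standard tool chain (for example, gcc -fopenmp) and uses more cores and memory simply by raising OMP_NUM_THREADS and the problem size; an equivalent scale-out version would need explicit domain decomposition and communication code.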

Finally, scale-up systems can offer easier administration and better utilization (more user jobs completed per unit time) than scale-out clusters.

Download the insideHPC Guide to Successful Technical Computing – Click Here.

This is the final article in a series on insideHPC’s Guide to Successful Technical Computing.

A Safe and Simple Path Forward

Enabling discovery should be the goal of any HPC or Data Analytics system, and modern HPC capabilities are readily available to support it. The speed and precision with which you can deliver and sustain these resources will improve utilization and speed up the delivery of results, often reducing job run times from weeks to hours. Deploying a scale-up (CSM) solution offers the ability to address some of the largest HPC problems while providing the most efficient way to introduce the power of HPC into your workflows. The design of CSM systems brings simplicity to both users and administrators, further lowering the overall total cost of ownership. The scale-up design, like that found in the SGI UV line of systems, provides a simple, safe, and immediate path to accelerating discovery for both new and experienced HPC users.

Download the complete insideHPC Guide to Successful Technical Computing, courtesy of SGI and Intel – Click Here.