InsideHPC Guide to Technical Computing

Many scientists and engineers have found that High Performance Computing (HPC) can solve problems, lead to insights, and generate discoveries faster and more efficiently than any other method, often reducing solution times from days or weeks to hours. This paper and article series offers both HPC users and managers guidance on the best way to deploy a technical computing solution.

Download the insideHPC Guide to Successful Technical Computing – Click Here.

A practical approach to HPC assumes clearly defined goals that prevent the solution from becoming its own research project. An HPC solution should allow users to focus on their research and not the nuances of the HPC system.

Non-HPC users are often surprised to learn that HPC delivers a competitive advantage and remarkable return on investment. HPC has gone from a specialized back-room art to an essential and competitive tool. Some of the areas that benefit from HPC include Higher Education, Life Sciences, Manufacturing, Government Labs, Oil and Gas, and Weather Modeling and Prediction.

The use of commodity processors has helped bring HPC to the masses. Modern processors, like those from Intel, now employ multiple cores and offer exceptional value. Scaling these processors to attack larger problems can take two forms. The first and most effective method scales up the number of processors using a large pool of shared memory. The second is a clustered scale-out approach where multiple separate servers are combined using high-speed networks.
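As a minimal sketch of the difference, assuming a trivial summation workload (the code below is purely illustrative and not taken from the guide): on a scale-up system, all cores share one pool of memory, so the work can simply be divided among threads, for example with OpenMP:

```c
/* Scale-up (shared memory): every core sees the same data, so the
 * loop is split across threads with no data movement.
 * Build: cc -fopenmp scale_up.c -o scale_up */
#include <stdio.h>
#include <omp.h>

#define N 100000000L

int main(void)
{
    double sum = 0.0;

    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < N; i++)
        sum += (double)i;

    printf("sum = %.0f (threads: %d)\n", sum, omp_get_max_threads());
    return 0;
}
```

On a scale-out cluster, each server owns only a slice of the problem, and partial results must be combined explicitly over the network, typically with MPI:

```c
/* Scale-out (cluster): each rank sums its own slice; partial sums
 * cross the interconnect to be combined on rank 0.
 * Build: mpicc scale_out.c -o scale_out
 * Run:   mpirun -np 4 ./scale_out */
#include <stdio.h>
#include <mpi.h>

#define N 100000000L

int main(int argc, char **argv)
{
    int rank, size;
    double local = 0.0, total = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    for (long i = rank; i < N; i += size)
        local += (double)i;

    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum = %.0f (ranks: %d)\n", total, size);

    MPI_Finalize();
    return 0;
}
```

The explicit communication in the second version is part of the complexity trade-off between the two designs.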

The scale-up approach has demonstrated success in many areas. In particular, certain bioinformatics problems run exceptionally well on scale-up systems but are not tractable with scale-out designs. The simplicity of scale-up systems also offers advantages for the large multiuser systems often found in academic environments. Similar success has been found in other industries.

Cloud-based computing has grown rapidly in popularity. Before a cloud solution is considered, however, the cost and time of data movement should be determined. In many situations, the location of the data may determine where the computation takes place.
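As a hypothetical back-of-the-envelope example (the dataset size and link speed below are illustrative assumptions, not figures from the guide), the transfer alone can dominate the schedule:

```c
/* Rough data-movement estimate: a hypothetical 10 TB dataset over a
 * dedicated 1 Gbps link, ignoring protocol overhead and congestion. */
#include <stdio.h>

int main(void)
{
    double dataset_bytes    = 10e12;  /* 10 TB, illustrative */
    double link_bits_per_s  = 1e9;    /* 1 Gbps, illustrative */
    double seconds = dataset_bytes * 8.0 / link_bits_per_s;

    /* Prints roughly 80000 s, about 22 hours, before any compute starts. */
    printf("transfer time: %.0f s (~%.1f hours)\n", seconds, seconds / 3600.0);
    return 0;
}
```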

This paper offers those considering HPC, both users and managers, guidance on the best way to deploy an HPC solution. Three important questions are suggested to help determine the most appropriate HPC design (scale-up or scale-out) that meets your goals and accelerates your discoveries.

Your New Lab Bench of Discovery

Today’s High Performance Computing (HPC) systems offer the ability to model everything from proteins to galaxies. The insights and discoveries offered by these systems are nothing short of astounding. Indeed, the ability to process, move, and store data at unprecedented levels, often reducing jobs from weeks to hours, continues to move science and technology forward at an accelerating pace.

Many scientists and engineers have found that HPC can solve problems, lead to insights, and generate discoveries faster and more efficiently than any other method.  Indeed, modern HPC systems have become the new lab bench of discovery.

Unfortunately, not all researchers have access to HPC resources. Some of the perceived challenges of the modern HPC lab bench are the complexity and overhead required to use HPC tools and systems. Some of these concerns are real, while others are either incorrect or outdated notions about how high-end computing success is achieved. All researchers have access to basic computing resources, but HPC has become a standard tool for many scientists and engineers, often augmenting or replacing physical experiments and designs. Ignoring this capability may impede progress and discovery.

This guide provides a general overview of HPC methodologies and is designed to help first-time researchers understand the trade-offs of various approaches. Based on this information, this paper offers simple guidance and suggestions to help readers implement an effective HPC strategy.

Follow a Practical Approach to HPC Technical Computing

First and foremost, HPC should be implemented like any other research tool. Clearly defined goals and costs are an essential first step. Choosing an area of discovery and metrics to measure success depends largely on the specific domain of interest. A scientist who wants to investigate star formation will have an entirely different set of requirements than an engineer who wants to improve the airflow around a new product. In some cases, a researcher may define success as scaling a problem to new sizes, while another may measure success by the number of problem variations that can be run in a single day.

A secondary requirement is to make sure that your HPC solution does not become its own research project. Many do-it-yourself variations and options are available to new HPC practitioners, but assembling and tuning such systems can consume the very time meant for research; choosing a turnkey solution is therefore imperative to success.

Finally, there are many aspects worth considering when evaluating HPC solutions. Most important is a solution that allows users to focus on their research and not the nuances of the HPC system. Key considerations include the availability and efficiency of software applications for a given research area and, if needed, a workable and maintainable software development environment. System administration is another important aspect and a potential cost area.

Over the next few weeks, this article series will explore these topics in more detail.

If you prefer, you can download the complete insideHPC Guide to Successful Technical Computing, courtesy of SGI and Intel – Click Here.

Comments

  1. Worth reading. Just a little update, though, on HPC cloud, which has made huge progress in the last 12 months. The cloud concerns mentioned, security and data transfer, are no longer real roadblocks. HPC clouds are now at least as secure as (if not more secure than) companies' on-premise clusters, and secure data transfer can be guaranteed, e.g., with end-to-end encryption and decryption and with VPN on the fly. Data transfer, especially for an average engineering workload of a few tens of gigabytes, can be accelerated (by factors of up to 100) with tools like VCollab or with accelerated remote visualization through NICE DCV. And new HPC software containers dramatically simplify access to and use of applications and data in the cloud, comparable to access and use of your workstation.