For more about what hardware you need to build a cluster, see the other articles in HPC 101.
You’ve been shopping at your favorite hardware vendor, and you have a cluster all picked out. Number of processors? Check. Interconnect? Check. Disk and memory? Check and check.
So what do you have now? A really expensive paperweight. Although it's not the first thing that people think of when they are bragging about their cluster at Labor Day picnics, all that hardware is completely useless without the software that makes it do something.
Just like the computer on your desk or in your lap, your cluster will need an operating system. The odds are pretty good that the computers you already have run some version of Windows. But they might run Mac OS X, or even a version of the open source operating system Linux.
The good news is that these same operating systems will also run on your cluster. (Not many people run Mac OS X on their clusters, but it is certainly possible; Virginia Tech even built one of the largest clusters in the world using all Mac hardware and software.) But just because you are used to running a certain operating system on your desk doesn't mean that it's the right choice for your cluster.
The most important factor that will drive your choice of operating system is the applications that you need to run on your cluster. If you are doing your parallel computing using special spreadsheets written in Excel, then Microsoft's cluster version of their operating system (called Windows HPC Server) is the right choice for you. On the other hand, your applications may require a variant of Linux, such as Red Hat or SUSE. Check with your application provider (or your application developer, if you are using a custom application) to figure out what operating system you need.
Installing an operating system on each computer in your cluster will get you a bunch of computers that will do something, but it doesn’t necessarily get you a cluster of computers that will work together as a team. You (or your administrator) will need to be able to discover all of the computers in the cluster, schedule jobs, turn nodes on and off, write new applications, and perform other system administration tasks.
In some cases (as with Windows HPC Server) many of the tools you need to make your computers act like a cluster are included. But in other cases, as with Linux distributions for example, you’ll probably need to pick a separate cluster software distribution. There are several to choose from, and names that you might hear from your cluster provider include Scyld Clusterware, ClusterCorp Rocks+, Platform Cluster Manager, Red Hat HPC Solution, Sun HPC Software (Linux Edition), OSCAR, and Rocks.
Keeping your cluster busy
One of the important tasks that clusters need to do is to schedule jobs. A job is any unit of work that you want your cluster to do: run your Excel spreadsheet to figure out an options pricing scenario, for example. Scheduling a job simply refers to deciding when your cluster will run the job.
If you are only going to be running one job at a time on your cluster and you are the only user of your cluster, then all of this will seem like unnecessary complication to you. But most clusters are used by at least a few people, and eventually those people always seem to come up with more for the cluster to do than it can do at one time. This is when you need a scheduler.
For example, if you have a 20-processor cluster and you and a teammate submit six jobs that each need 5 processors, all at the same time, then together you've asked the cluster to do 30 processors' worth of work. Since you only have 20 processors, this is a problem. Scheduling software allows you to submit as much work as you want to the cluster at any time. It keeps track of how many processors aren't currently doing work, and how much work is left in the queue (the list of jobs that still need to be worked on). When resources come free, the scheduler simply takes the next job off the queue and starts it running.
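The bookkeeping a scheduler does can be sketched in a few lines of Python. This is a toy first-come-first-served scheduler, not any real product like Torque or LSF, and the job names and sizes are made up to match the 20-processor example above.

```python
from collections import deque

def run_fifo(total_procs, jobs):
    """Toy FIFO scheduler.

    jobs is a list of (name, procs_needed) in submission order.
    Starts each job as long as enough processors are free;
    everything else stays in the queue until resources come free.
    Returns (names started now, jobs still queued).
    """
    queue = deque(jobs)
    free = total_procs
    started = []
    # Start jobs in order until the next one no longer fits.
    while queue and queue[0][1] <= free:
        name, procs = queue.popleft()
        free -= procs
        started.append(name)
    return started, list(queue)

# Two users submit six 5-processor jobs to a 20-processor cluster:
# 30 processors' worth of work, but only 20 processors available.
jobs = [(f"job{i}", 5) for i in range(1, 7)]
started, waiting = run_fifo(20, jobs)
print(started)  # the first four jobs fit (4 x 5 = 20 processors)
print(waiting)  # the last two wait in the queue
```

Real schedulers layer priorities, fair-share policies, and backfilling on top of this basic idea, but the queue-and-free-processor accounting is the heart of it.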
There are several commercial and free job schedulers out there: Sun Grid Engine, Torque, PBS, Platform LSF, LoadLeveler, Moab, and others. Your cluster provider can help you determine which option is right for you.
MPI: One way processors work together
Many users of small or medium-sized clusters will be using them to run commercial software right off the shelf: programs like the ANSYS multiphysics suite, or even Excel. But if you are developing your own software, you’ll need support for getting the processors to work together in your program.
The essence of parallel computing is the ability to break a big job down into lots of smaller pieces that can (ideally) all be performed at the same time, thus enabling you to get to your final answer faster than if you didn’t break the work up. If you have five folks mowing a yard, the yard gets mowed (roughly) five times faster than if only a single person is doing the mowing.
Read more about the different kinds of parallelism you might have in your applications elsewhere in HPC 101.
One of the most common techniques used to get multiple processors in a cluster coordinated and all working together on a single task is the Message Passing Interface, or MPI. Application developers use MPI to write explicit statements in their applications at points where one processor needs to talk to another processor before it can continue to work on the problem. For example, if the problem is to add a very long series of numbers, then each processor can add up its own subset of the numbers to produce a partial sum. But the total isn't known until all the processors hand their partial sums to the processor assigned the responsibility of adding them together, ultimately producing the final answer.
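You can sketch that partial-sum pattern without any MPI installed at all. The snippet below is not MPI code; it just mimics the shape of it, with Python threads standing in for the ranks (the individual processes in an MPI job) and an ordinary sum standing in for the final combining step that real MPI programs express with a reduce operation.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(numbers, nranks=4):
    """Mimic the MPI partial-sum pattern: each 'rank' adds up its
    own slice of the series, then one place combines the partials."""
    # Deal the numbers out to the ranks, one slice each.
    chunks = [numbers[i::nranks] for i in range(nranks)]
    # Each worker computes its partial sum independently.
    with ThreadPoolExecutor(max_workers=nranks) as pool:
        partials = list(pool.map(sum, chunks))
    # In a real MPI program this combining step would be a
    # reduce: the partial sums travel over the interconnect
    # to one designated rank, which produces the total.
    return sum(partials)

print(parallel_sum(list(range(1, 1001))))  # 1 + 2 + ... + 1000 = 500500
```

The point of the exercise is the communication boundary: each rank can work entirely on its own until the very last step, which is exactly the moment an MPI application would insert a message-passing call.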
But MPI isn’t necessarily only a consideration for you if you are writing your own software. Many commercial applications use MPI to power their parallelism. In some cases they’ll only work with a specific version of MPI — from a certain vendor, for example, or a version that supports certain calls the application needs. In other cases you can dramatically increase the speed of your application by substituting one version of MPI for another. Your cluster specialist can help you figure out what your best options are here, but you need to know to ask the question.
There are other tools that you may need depending upon how you are going to use your cluster. For example, if you are writing your own code, you will want to make sure you have a compiler and an application debugger installed on your system. There are good free options here (such as the GNU compilers), and there are also excellent tools from companies like Intel, PGI, Microsoft, TotalView, and others for developing and debugging parallel applications.
If none of this makes sense to you, you probably aren’t developing your own applications, so don’t worry about it.