Supercomputers traditionally have many processors linked together so that multiple CPUs can perform multiple actions simultaneously, thereby greatly speeding up execution. An added benefit is that multiple CPUs allow for redundancy, which increases the availability (up-time) of the system. Purpose-built systems from Cray and others are especially effective, but they are very expensive because they employ a number of proprietary components.
Clusters, unlike traditional supercomputers, employ only commodity components. They are simply regular PCs connected via a network, such as Ethernet or InfiniBand. The original clusters, which were made from simple x86 desktop machines, were called “Beowulf” clusters, though that term has fallen out of use.
Modern clusters feature PCs with a modified chassis that fits within a rack. The vertical height of the chassis is measured in “U” (one U equals 1.75 inches). The individual computers (called “nodes”) are slid into the rack so that any one of them may be physically removed for repairs or upgrades without disrupting any of the other nodes.
Often the cluster is managed with special software such as Rocks or OSCAR, and its applications are programmed with MPI (the Message Passing Interface), as sketched below.
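To make the programming model concrete, here is a minimal MPI sketch in C (not drawn from the original text): each process on the cluster reports its rank, illustrating the message-passing style in which the same program runs on every node. The compile and launch commands are the typical ones (mpicc, mpirun) and are assumptions about the local installation.

    /* Minimal MPI example: every process prints its rank.
     * Compile:  mpicc hello_mpi.c -o hello_mpi
     * Run:      mpirun -np 4 ./hello_mpi                 */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);                 /* start the MPI runtime        */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id (rank)     */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes    */

        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();                         /* shut down the MPI runtime    */
        return 0;
    }

Each node runs an identical copy of the program; the rank is what lets the copies divide up the work and exchange messages with one another.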
Because of their reliance on cheap hardware and free software, clusters are often viewed as supercomputing for the masses, though they do present a number of challenges that prevent greater adoption, as detailed below.