Archives for March 2006

What is the Partitioned Global Address Space model?

To reconcile the parallelism in both tasks (OpenMP) and data (HPF), a different programming model has emerged that claims to offer the best of both. It is the partitioned global address space model, and it has been applied to a variety of languages, the most widely used being Unified Parallel C (UPC). In UPC, by default, all […]
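For a taste of the model, here is a minimal UPC sketch (the array size and the squaring computation are illustrative assumptions; compile with a UPC compiler such as Berkeley UPC’s upcc):

    #include <upc.h>
    #include <stdio.h>

    #define N 100                /* illustrative problem size */

    shared int a[N];             /* one global array, spread over the
                                    threads cyclically by default */

    int main(void) {
        int i;
        /* the affinity expression &a[i] hands iteration i to the
           thread that owns a[i], keeping most accesses local */
        upc_forall (i = 0; i < N; i++; &a[i])
            a[i] = i * i;
        upc_barrier;
        if (MYTHREAD == 0)
            printf("done on %d threads\n", THREADS);
        return 0;
    }

The code reads almost like sequential C, which is the global-address-space half of the bargain; the affinity clause is the partitioned half.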

What is grid computing?

A grid is a wide-area network of computers used to process parts of the same problem. Unlike clusters, grids tend to be loosely coupled in that there is usually no central control. The notion of grid computing essentially came from the observation that most PCs are underused, which means that their computing power could […]

What is data-parallel programming?

In the task-parallel model represented by OpenMP, the user specifies the distribution of iterations among processors and then the data travels to the computations. In data-parallel programming, the user specifies the distribution of arrays among processors, and then only those processors owning the data will perform the computation. In OpenMP’s master/slave approach, all […]
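The owner-computes rule at the heart of the data-parallel model can be sketched in plain C, with MPI standing in for HPF’s distribution directives (a sketch only; the array size, the block distribution, and the doubling computation are illustrative assumptions):

    #include <mpi.h>
    #include <stdio.h>

    #define N 1024                    /* illustrative global array size */

    int main(int argc, char **argv) {
        int rank, nprocs;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        int chunk = N / nprocs;       /* assume nprocs divides N evenly */
        double local[chunk];          /* each process stores only its block */

        /* owner-computes: each process updates only the indices it owns,
           so the computation moves to the data rather than the reverse */
        for (int i = 0; i < chunk; i++)
            local[i] = 2.0 * (rank * chunk + i);

        printf("process %d: local[0] = %g, global indices %d..%d\n",
               rank, local[0], rank * chunk, (rank + 1) * chunk - 1);
        MPI_Finalize();
        return 0;
    }

In HPF the same ownership would be declared with a DISTRIBUTE directive and the compiler would derive the loop bounds; here the bounds are computed by hand to make the rule visible.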

What is a personal supercomputer?

The term “personal supercomputer” is a marketing ploy that has been applied to a number of higher-performance systems. Apple, for example, advertised its PowerMac G4 (a uniprocessor, no less) in this fashion. However, there are some more legitimate uses of the term. Tyan recently announced its own personal supercomputer under the Typhoon banner; that […]

What is virtualization?

Virtualization is a technique that allows multiple operating systems to run simultaneously on the same hardware. Consider, for example, a fault-tolerant server in which several instances of Linux are executing. If a fault happens to wipe out one of those instances, the rest keep running unabated. In this regard, virtualization is to operating systems what […]

What is a cluster?

Supercomputers traditionally have many processors linked together so that multiple CPUs can perform multiple actions simultaneously, thereby greatly speeding up execution. An added benefit is that multiple CPUs allow for redundancy, which increases the availability (up-time) of the system. Specially built systems from Cray and others are especially effective, but are very expensive because […]

What are the major challenges to the future of HPC?

The computing industry has grown at an exceptional pace over the past four decades. Moore’s Law, which has computer speeds doubling roughly every eighteen months, will probably hold for another decade or two. After that, transistors will be about the size of atoms and thus will not shrink any further. This is the first roadblock to continuing performance […]
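As a back-of-the-envelope check on that horizon: doubling every eighteen months compounds to 2^(120/18) ≈ 2^6.7, roughly a hundredfold speed-up per decade, so even one or two more decades of Moore’s Law leave enormous headroom before the atomic limit bites.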

What is loop-level parallelism?

Most high-performance compilers aim to parallelize loops to speed up technical codes. Fully automatic parallelization is possible but extremely difficult, because a careless transformation may change the semantics of the sequential program. Therefore, most users provide some clues to the compiler. A very common method is to use a standard set of directives known as OpenMP, in which the user […]
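A minimal sketch of such a directive in C (the array size and the arithmetic are illustrative assumptions; compile with an OpenMP-capable compiler, e.g. gcc -fopenmp):

    #include <omp.h>
    #include <stdio.h>

    #define N 1000               /* illustrative loop length */

    int main(void) {
        static double a[N], b[N];
        int i;
        for (i = 0; i < N; i++)
            b[i] = (double)i;

        /* the directive is the user's clue: it asserts that the
           iterations are independent, so the compiler may safely
           divide them among threads */
        #pragma omp parallel for
        for (i = 0; i < N; i++)
            a[i] = 2.0 * b[i] + 1.0;

        printf("a[N-1] = %g (up to %d threads)\n",
               a[N - 1], omp_get_max_threads());
        return 0;
    }

Remove the pragma and the program still computes the same answer serially, which is precisely why directives are such a gentle migration path.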

Who uses high-performance computing?

The two major classes of users are technical customers and enterprise customers. The former comprises scientists and engineers who need to perform number crunching; examples include climate prediction, protein folding simulations, oil and gas discovery, defense and aerospace work, automotive design, financial forecasting, etc. The latter category encompasses the corporate data center that stores customer […]

What is HyperTransport?

Traditionally, computing devices are connected by a bus, the most popular standard of which will soon be PCI Express. A bus-based architecture tends to have poor latency, though for most applications this is acceptable. Sometimes, however, when the application has more processes than data, numerous small messages tend to be a major factor in […]
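A common first-order model makes the issue concrete: the time to deliver an n-byte message is roughly T(n) = α + n/β, where α is the per-message latency and β the bandwidth. For small n the α term dominates, so an application exchanging many tiny messages is latency-bound no matter how much raw bandwidth the link offers, and that is exactly the regime that point-to-point interconnects such as HyperTransport target.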