A grid is a wide-area network of computers used to process parts of the same problem. Unlike a cluster, a grid tends to be loosely coupled, with no central point of control. The notion of grid computing grew out of the observation that most PCs sit idle much of the time, meaning their spare computing power could be pooled to form a supercomputer on the fly.
The enabling technology for grid computing tends to come from open standards. The BOINC project (which grew out of the famed SETI@home experiment) uses HTTP to communicate with volunteer PCs. HTTP is widely available and is allowed through most firewalls, making it an ideal choice for messaging. A number of other software packages are available as well, including Globus and Condor.
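The shape of that exchange can be sketched in a few lines. This is not the actual BOINC protocol; the URL, JSON payload, and work-unit format below are invented for illustration. The point is that the client issues an ordinary HTTP GET, which is exactly why firewalls rarely get in the way.

```python
# Sketch: a volunteer client polls a project server over plain HTTP for a
# work unit. Server, endpoint, and payload format are hypothetical.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class WorkHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Hand out one made-up work unit: a range of integers to sum.
        body = json.dumps({"task_id": 1, "start": 1, "end": 100}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # silence per-request logging

def fetch_and_run(port):
    # An ordinary HTTP GET, indistinguishable from normal web traffic.
    with urllib.request.urlopen(f"http://127.0.0.1:{port}/work") as resp:
        unit = json.load(resp)
    return sum(range(unit["start"], unit["end"] + 1))

if __name__ == "__main__":
    server = HTTPServer(("127.0.0.1", 0), WorkHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    print(fetch_and_run(server.server_address[1]))  # 5050
    server.shutdown()
```

A real client would also report the result back (typically an HTTP POST) and then poll for the next unit, but the request/response skeleton is the same.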
The goal of grid computing is essentially to provide massive computing resources at low cost by marshaling volunteered machines all over the world. In practice, grid computing hasn’t really lived up to its promise. For one thing, the “low cost” label is hardly accurate when governments have spent millions in grants for organizations to build the software. Second, many people who volunteer their computing time do so only for specific projects, which removes most of the resources that were supposed to be there.
Third, the kinds of problems that run well on these infrastructures tend to be very large (taking months or years to run) and must divide into many independent units (otherwise the communication overhead would be prohibitive); few problems actually fit that description. Given these issues, particularly the last one, most users would be better off with a cluster than attempting to configure grid resources.
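What “divisible into independent units” means can be shown with a toy example, here on one machine’s processes rather than a grid. Counting primes in a range is the classic case: each sub-range can be processed with no communication between workers, and the partial answers are simply added at the end. The range bounds and chunk size below are arbitrary.

```python
# An "embarrassingly parallel" job: each work unit is a sub-range that can
# be processed with zero communication between workers.
from multiprocessing import Pool

def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def count_primes(bounds):
    lo, hi = bounds
    return sum(1 for n in range(lo, hi) if is_prime(n))

if __name__ == "__main__":
    # Split [0, 10000) into four independent work units.
    units = [(i, i + 2500) for i in range(0, 10000, 2500)]
    with Pool(4) as pool:
        partials = pool.map(count_primes, units)
    print(sum(partials))  # 1229 primes below 10000
```

Problems that need workers to exchange intermediate results at every step do not decompose this way, which is why so few applications suit a high-latency grid.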
One positive contribution from the grid community has been its push for web services. Web services are, roughly, RPCs for the Internet. Although the technology is still relatively new, it appears that many websites will move toward offering this kind of interoperability as part of the collaborative nature of the Internet.
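The RPC flavor of web services can be sketched with Python’s built-in XML-RPC support; the add() procedure below is invented for illustration. The client calls a remote procedure as if it were local, while the request and response travel as XML over ordinary HTTP.

```python
# Sketch of a web service as an RPC: a hypothetical add() procedure exposed
# over XML-RPC (XML payloads carried on plain HTTP).
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

def add(x, y):
    return x + y

if __name__ == "__main__":
    server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
    server.register_function(add)
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # The proxy makes the remote call look like a local function call.
    port = server.server_address[1]
    proxy = ServerProxy(f"http://127.0.0.1:{port}")
    print(proxy.add(2, 3))  # 5
    server.shutdown()
```

SOAP and later REST services follow the same basic idea with different wire formats; the appeal for interoperability is that any client that speaks HTTP can participate.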
One final note on the term “grid.” Several computing firms, including IBM, HP, and Sun, offer computing time for rent. This used to be called “on-demand” computing but has since taken on the grid label. The idea is that computing should be a utility like electricity: rather than own a generator, the customer pays for exactly as much as he needs while a company of professionals manages the resources. So far this idea has yet to catch on with end users, probably because it invites comparisons to time-shared facilities and dumb terminals. For the same reason, personal supercomputers will likely become more popular in the future.