
Computing Power on Tap

A look at the most ambitious attempt yet to combine millions of computers seamlessly around the world -- to make processing power available on demand anywhere, rather like electrical power.

Imagine that every time you plugged in a toaster, you had to decide which power station should supply the electricity. Worse still, you could select only from those power stations that were built by the company that made the toaster. If the power station chosen happened to be running at full capacity, no toast. Replace the toaster with a personal computer and electrical power with processing power, and this gives a measure of the frustration facing those who dream of distributing large computing problems to dozens, hundreds or even millions of computers via the Internet.

A growing band of computer engineers and scientists want to take the toaster analogy to its logical conclusion with a proposal they call the Grid. Although much of it is still theoretical, the Grid is, in effect, a set of software tools which, when combined with clever hardware, would let users tap processing power off the Internet as easily as electrical power can be drawn from the electricity grid. Many scientific problems that require truly massive amounts of computation — designing drugs from their protein blueprints, forecasting local weather patterns months ahead, simulating the airflow around an aircraft — could benefit hugely from the Grid. And as the Grid bandwagon gathers speed, the commercial pay-off could be handsome.

The processor in your PC is running idle most of the time, waiting for you to send an e-mail or launch a spreadsheet or word-processing program. So why not put it to good, and perhaps even profitable, use by allowing your peers to tap into its unused power? In many ways, peer-to-peer (P2P) computing — as the pooling of computer resources has been dubbed — represents an embryonic stage of the Grid concept, and gives a good idea as to what a fully fledged Grid might be capable of.

Anything@home

The peer-to-peer trend took off with the search for little green men. SETI@home is a screen-saver program that painstakingly sifts through signals recorded by the giant Arecibo radio telescope in Puerto Rico, in a search for extraterrestrial intelligence (hence the acronym SETI). So far, ET has not called. But that has not stopped 3m people from downloading the screen-saver. The program periodically prompts its host to retrieve a new chunk of data from the Internet, and sends the latest processed results back to SETI’s organisers. The initiative has already clocked up the equivalent of more than 600,000 years of PC processing time.
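The work cycle described above, in which the screen-saver fetches a chunk of data, crunches it while the PC would otherwise sit idle, and returns the result, can be sketched in a few lines of Python. The sketch below is illustrative only: the server address, endpoints and "analysis" are hypothetical stand-ins, not SETI@home's actual protocol.

```python
# Illustrative volunteer-computing work loop (hypothetical server and endpoints,
# not SETI@home's real protocol).
import time
import urllib.request

WORK_SERVER = "http://example.org/work"   # hypothetical work-unit server


def fetch_chunk():
    """Retrieve the next chunk of raw data to analyse."""
    with urllib.request.urlopen(f"{WORK_SERVER}/next") as resp:
        return resp.read()


def analyse(chunk):
    """Stand-in for the real signal analysis; here we just count set bits."""
    return sum(bin(byte).count("1") for byte in chunk)


def report(result):
    """Send the processed result back to the project's organisers."""
    req = urllib.request.Request(f"{WORK_SERVER}/result",
                                 data=str(result).encode(),
                                 method="POST")
    urllib.request.urlopen(req)


while True:
    chunk = fetch_chunk()     # periodically retrieve a new chunk from the Internet
    result = analyse(chunk)   # crunch it during the host's idle time
    report(result)            # return the latest processed results
    time.sleep(60)            # wait before asking for more work
```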

The best thing about SETI@home is that it has inspired others. Folding@home is a protein-folding simulation for sifting potential drugs from the wealth of data revealed by the recent decoding of the human genome. Xpulsar@home sifts astronomical data for pulsars. Evolutionary@home is tackling problems in population dynamics. While the co-operation is commendable, this approach to distributed computing is not without problems. First, a lot of work goes into making sure that each “@home” program can run on different types of computers and operating systems. Second, the researchers involved have to rely on the donation of PC time by thousands of individuals, which requires a huge public-relations effort. Third, the system has to deal with huge differences in the rate at which chunks of data are processed — not to mention the many chunks which, for one reason or another, never return.
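The third problem, slow volunteers and chunks that simply never come back, is usually handled on the server side by stamping each chunk with a deadline and handing it out again if no result has arrived in time. The sketch below illustrates that idea only; the names, data structures and 24-hour deadline are assumptions for the example, not any particular project's code.

```python
# Minimal sketch of deadline-based re-issue of work units on the project's server.
# All names and the 24-hour deadline are illustrative assumptions.
import time
from collections import deque

DEADLINE = 24 * 3600            # seconds a volunteer gets before the chunk is re-issued

pending = deque(range(1000))    # chunk ids still waiting to be handed out
outstanding = {}                # chunk id -> time it was issued
completed = set()               # chunk ids whose results have come back


def issue_chunk():
    """Hand out the next chunk, re-issuing any whose deadline has lapsed."""
    now = time.time()
    for chunk_id, issued_at in list(outstanding.items()):
        if now - issued_at > DEADLINE:
            pending.append(chunk_id)      # the volunteer never replied; try someone else
            del outstanding[chunk_id]
    if not pending:
        return None                       # nothing left to hand out right now
    chunk_id = pending.popleft()
    outstanding[chunk_id] = now
    return chunk_id


def receive_result(chunk_id, result):
    """Record a returned result; late duplicates are simply ignored."""
    if chunk_id not in completed:
        completed.add(chunk_id)
        outstanding.pop(chunk_id, None)
```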
