
Computing Power on Tap

A look at the most ambitious attempt yet to combine millions of computers around the world seamlessly, making processing power available on demand anywhere, rather like electrical power.

Beowulf’s success depended not so much on the architecture of the computer (many commercial supercomputers are also based on large arrays of fairly standard processors) as on the price/performance ratio of its network technology. Supercomputers require proprietary “glue” to link their processors together, which is both expensive and time-consuming to develop. Beowulf-like systems instead use fast but affordable Ethernet technology. Some of Beowulf’s offspring, known as “commodity clusters”, now rank among the 50 fastest supercomputers in the world, offering speeds of up to 30 teraflops (trillions of floating-point operations per second). They can often cost less than a hundredth of the price of an equivalent supercomputer. Cluster construction, which started as a nerdy hobby, is now a mature industry, with turnkey systems offered by manufacturers such as Compaq and Dell.
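On such clusters the division of labour is typically handled by a message-passing library such as MPI, the de facto standard that Beowulf-style machines popularised. The fragment below is a minimal sketch in C of the idea: each node computes part of a sum and the pieces are combined on one machine. The task and the numbers are purely illustrative.

    /* Minimal MPI sketch: split a sum across cluster nodes. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        long local = 0, total = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this node's number */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* nodes in the cluster */

        /* Each node sums every size-th integer from 1 to 1,000,000. */
        for (long i = rank + 1; i <= 1000000; i += size)
            local += i;

        /* Node 0 gathers and adds the partial sums. */
        MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("Total: %ld\n", total);

        MPI_Finalize();
        return 0;
    }

Run under an MPI launcher (for example, mpirun -np 8 ./sum), the same program executes on every node; only the rank differs.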

The intellectual link between Beowulf and the Grid is that, as transmission speeds on the Internet increase, clusters no longer need to be in the same room, the same building, or even the same country. In some sense, this is old news. A software system called Condor, devised by Miron Livny and his colleagues at the University of Wisconsin in Madison during the 1980s, harnesses the idle computing power of workstations in university departments. With a Condor system, researchers can tap the equivalent of a cluster of several hundred computers.
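To give a flavour, a Condor user describes a job in a short “submit file” and the system finds an idle machine to run it on. The sketch below uses Condor’s classic submit syntax; the program and file names are hypothetical.

    # Hypothetical Condor submit file: run "analyse" on an idle workstation
    universe   = vanilla
    executable = analyse
    arguments  = survey.dat
    output     = analyse.out
    error      = analyse.err
    log        = analyse.log
    queue

Submitted with condor_submit, the job waits in a queue until Condor matches it with a workstation whose owner is not using it.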

In a similar way, a number of European supercomputer facilities were linked together in the late 1990s as part of a project called Unicore, run by a German research consortium. Using Unicore software, users can submit huge number-crunching problems without having to worry about which operating systems, storage systems or administrative policies govern the machines that do the work.

Between them, SETI@home, Beowulf, Condor and Unicore all contain elements of what the Grid’s visionaries are after: massive processing resources linked together by clever software, so that, from a user’s perspective, the whole network melds into one giant computer. To emphasise this, the latest extension of the Unicore project has been dubbed E-Grid. To purists, however, this is only the beginning. They believe that Grid technology should blur the distinction between armies of individual P2P computers, dedicated commodity clusters, and loose supercomputer networks. Ultimately, a PC user linked to the Grid should not need to know anything about how or where the data are being processed — just as a person using a toaster does not need to know whether the electricity is coming from a wind farm, a hydroelectric dam or a nuclear power plant.

The Missing Link

Piecemeal solutions for building such a Grid are already at hand. “Middleware” is the term used to describe the layer of software needed to extract processing power from different computers on the Internet without any fuss. The most popular middleware so far is the Globus tool-kit, developed by Mr Foster’s group at Argonne in collaboration with Carl Kesselman’s team at the University of Southern California in Los Angeles.
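In the Globus tool-kit, a job request is written in its Resource Specification Language (RSL) and handed to a “gatekeeper” service on the remote machine, which arranges for the work to be run. The sketch below shows roughly what such a request looks like; the paths, program name and job size are hypothetical.

    &(executable = /usr/local/bin/analyse)
     (arguments  = survey.dat)
     (count      = 16)
     (stdout     = analyse.out)

A request of this sort can be submitted with the tool-kit’s globusrun command, naming the remote resource on which it should run.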
