Brief: Imagine a global electricity grid that marshals resources and reroutes power to Hong Kong during rush hour, when farmers in Des Moines turn off the lights and get ready for bed. Or a system that sends power to an aluminum smelter in British Columbia when Australian miners call it a day. That’s the asset management theory behind grid computing, a technology some experts say will revolutionize back-end computing in the same way the Internet revolutionized front-end computing.
What It Is: Grid computing is a massive smoothing technique for asset allocation, says Paul Hill, vice president of marketing and business development for Markham, Ontario-based Platform Computing Inc., a maker of distributed computing software. In grid computing, PCs, servers, and workstations are linked together — one giant daisy chain, if you will. That linking of systems means computing capacity is never wasted. Moreover, it doesn’t matter whether the linked machines are located across the hall — or across the pond. It still works.
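The pooling idea can be sketched in miniature. In the hypothetical Python sketch below, "machines" (simulated as threads, with illustrative names) pull independent work units from a shared queue, so no machine sits idle while work remains — the essence of the daisy chain Hill describes. This is an illustration of the concept, not any vendor's actual software.

```python
import queue
import threading

# A shared pool of work: 20 independent tasks any machine can run.
tasks = queue.Queue()
for n in range(20):
    tasks.put(n)

results = []
lock = threading.Lock()

def machine(name):
    # Each "machine" grabs the next task whenever it is free,
    # regardless of whether it sits across the hall or across the pond.
    while True:
        try:
            n = tasks.get_nowait()
        except queue.Empty:
            return  # pool is drained; this machine goes idle
        with lock:
            results.append((name, n * n))  # stand-in for a real computation

workers = [threading.Thread(target=machine, args=(f"node-{i}",))
           for i in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()

print(len(results))  # prints 20: all tasks completed, shared across 4 nodes
```

Because every node draws from the same queue, a fast or lightly loaded machine simply takes more tasks — the "smoothing" effect that keeps capacity from being wasted.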
Skinny: Once only the purview of engineers and researchers who needed to harness extra computing power to run gargantuan computations (the Human Genome Project or Defense Department math, for example), grid computing is now used in commercial settings. Life sciences and biotechnology companies rely heavily on grid computing. Ian Foster, a researcher in Argonne National Laboratory’s math and computer science division (and a computer science professor at the University of Chicago), says that he expects the technology to move beyond those sectors within the next five years — if not sooner.
Already, IBM, Sun Microsystems, Platform Computing, and a few smaller startup companies claim to have hundreds of corporate grid computing customers. Foster thinks that the real appeal of grid computing for CFOs is that the technology makes more efficient use of existing assets.
Others point out that grid computing is generally inexpensive, simple to deploy, and doesn’t require replacement of existing systems. Foster says most corporate IT departments start with intracompany grid computing, then move to partner grids — usually tied to the supply chain.
Eventually, international grid-computing standards will open the way to a GGG (great global grid), which will form one huge virtual computer. The first layer of GGG standards is due out by year’s end. But the GGG is “years away,” says Platform Computing’s Hill.
While the GGG is appealing, experts say grid computing is really about getting a better return on existing assets by increasing productivity. That, Hill says, enables companies to bring products to market faster. One example: an automotive design engineer in Detroit borrows computing power from the PC of a New York-based marketing manager who’s out to lunch. The combined processing power of the two machines enables the engineer to run through design changes faster.
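The lunch-hour example above is often called "cycle scavenging": work goes only to machines whose owners are away. A minimal, purely illustrative sketch (the node names, the `idle` flag, and the `dispatch` helper are all invented for this example):

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    idle: bool  # is the machine's owner away, leaving cycles to borrow?

def dispatch(jobs, nodes):
    """Assign each job to an idle node, round-robin; busy machines are skipped."""
    idle_nodes = [n for n in nodes if n.idle]
    if not idle_nodes:
        raise RuntimeError("no idle capacity available")
    return {job: idle_nodes[i % len(idle_nodes)].name
            for i, job in enumerate(jobs)}

nodes = [Node("detroit-workstation", idle=False),   # engineer's own machine is busy
         Node("nyc-marketing-pc", idle=True)]       # owner is out to lunch

print(dispatch(["design-rev-1", "design-rev-2"], nodes))
# both jobs land on the idle New York PC
```

Real grid schedulers detect idleness automatically (no keyboard or mouse activity, low CPU load) rather than relying on a flag, but the routing logic is the same in spirit.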
Experts say grid computing should be of real use to companies that do a lot of number crunching, particularly those that operate data warehouses. Indeed, Foster points out that the voluminous amount of data currently used by some grid-computing astronomers is equivalent to the tons of marketing information kept in the virtual vaults of big retailers.
Other possibilities abound. Hill believes investment bankers (who run daily computations on currency and commodity hedges) and corporate treasurers (who calculate cash and investment positions) are obvious candidates for grid computing. He also notes that grid computing facilitates the sharing of software licenses. Moreover, grid systems can be prioritized — that is, the additional processing power can be made available first to managers working on crucial projects or business processes. For instance, a corporation that is running up against an SEC filing deadline might choose to give its finance department preferential access to grid processing power.
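That prioritization is typically implemented as a priority queue: jobs carry a rank, and the scheduler always runs the highest-ranked job waiting. A hypothetical sketch, with an invented department-ranking policy (lower number runs first) mirroring the SEC-deadline example:

```python
import heapq

# Assumed policy for illustration: finance jumps the line ahead of an SEC deadline.
PRIORITY = {"finance": 0, "engineering": 1, "marketing": 2}

jobs = [("marketing", "campaign-model"),
        ("finance", "10-K-consolidation"),
        ("engineering", "stress-sim")]

# Build a min-heap keyed on (priority, submission order) so ties break fairly.
heap = [(PRIORITY[dept], i, job) for i, (dept, job) in enumerate(jobs)]
heapq.heapify(heap)

run_order = [heapq.heappop(heap)[2] for _ in range(len(jobs))]
print(run_order)
# prints ['10-K-consolidation', 'stress-sim', 'campaign-model']
```

The finance job runs first even though it was submitted second — which is exactly why, as the next paragraph notes, deciding who gets priority can spark turf wars.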
There are some snags, though. Some Internet connections may not have sufficient bandwidth to handle things like sharing X-rays or crunching huge data sets. And deciding which departments or projects merit priority access to additional computing power might spark turf wars among workers. Certainly, employees in a department that doesn’t get priority access might conclude that their projects don’t rate with management — which could dampen worker enthusiasm.
ETA: Two years.