At Edgar Online, a $15 million (in revenues) supplier of public-company data, CFO and COO Greg Adams is doing more than that. He reviews his company’s written disaster-recovery plan in detail each year. Adams is also apprised of changes in the plan before he files the company’s 10-Qs. “Disaster recovery is critical for us,” notes Adams. “If we’re down, a lot of money is lost.”
After the events of September 11, management at the South Norwalk, Connecticut-based company decided to construct a remote hot-site in Rockville Center, Maryland. The site, which has a backup generator, can restore the company’s main systems in a matter of hours. Edgar Online also maintains a New York-based fail-over for its Website (as the name implies, the fail-over immediately kicks in if the Website fails).
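The article doesn't describe Edgar Online's setup in technical detail, but the general mechanism behind such a fail-over can be sketched as a periodic health check that redirects traffic when the primary site stops responding. The URLs and function names below are hypothetical illustrations, not Edgar Online's actual configuration:

```python
import urllib.request

# Hypothetical endpoints -- not Edgar Online's real infrastructure.
PRIMARY = "https://primary.example.com/health"
FAILOVER = "https://failover.example.com/health"

def is_healthy(url: str, timeout: float = 2.0) -> bool:
    """Return True if the site answers its health check in time."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except Exception:
        # Timeouts, DNS failures, and connection errors all count as "down".
        return False

def choose_site(primary: str, failover: str) -> str:
    """Serve from the primary site; the fail-over kicks in when it is down."""
    return primary if is_healthy(primary) else failover
```

In practice this check usually lives in a load balancer or DNS layer rather than application code, but the logic is the same: probe the primary, and switch automatically the moment it stops answering.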
The system was put to the test last August, when the huge power outage knocked out the electricity at Edgar Online’s Rockville office. “During the blackout,” recalls Adams, “we had no downtime.”
Other companies were not as fortunate. Atlanta-based Delta Air Lines, which maintains an extensive disaster-recovery and business-continuity plan (including backup generators for its main and remote sites), was able to keep its planes running and its ticket systems operational during the power outage. But according to Keith Hansen, manager of emergency-response and business-continuity planning at the airline, Delta passengers at a number of airports couldn't board their flights after the power went out. The reason? While Delta was well prepared, some airports' security systems didn't have backup generators. "We're now looking at hub and major airports," notes Hansen. "If they don't have a backup [power] system for security, we try to convince them to get one."
The summer blackout exposed shortcomings in other disaster-recovery plans, as well. Many businesses, for example, discovered that their remote sites simply weren't remote enough. "It's all right to have a backup center," says Lance Travis, vice president of core research at Boston-based consultancy AMR Research. "But if you're in the same power grid, it doesn't do you any good." Moreover, a fair number of companies found that their uninterruptible power supplies were designed to run for only a few hours. Now, says Travis, some corporations are looking for remote sites that are so far away they can avoid almost any blackout.
Such a strategy, while prudent, can constrain the amount of data that gets backed up. Delta, for one, performs synchronous backups from a mainframe to a remote site. That’s a massive dumping of data — and one that limits the distance between the company’s remote site and its main data center. As Ray Shepherd, coordinator for business-continuity planning at Delta, explains: “You can push that amount of data only so far.”
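Shepherd's point follows from physics: a synchronous backup waits for the remote site to acknowledge every write, so each write costs at least one round trip over fiber, where signals travel at roughly 200 kilometers per millisecond (about two-thirds the speed of light). The figures below are illustrative back-of-the-envelope numbers, not Delta's actual distances or latencies:

```python
# Approximate signal speed in optical fiber (~2/3 of c);
# ignores routing and switching delay, so these are lower bounds.
LIGHT_IN_FIBER_KM_PER_MS = 200.0

def min_round_trip_ms(distance_km: float) -> float:
    """Lower bound on the latency a synchronous write must absorb."""
    return 2 * distance_km / LIGHT_IN_FIBER_KM_PER_MS

# Each write blocks until the remote site acknowledges it, so
# distance directly throttles synchronous-replication throughput.
for km in (50, 500, 2000):
    print(f"{km:>5} km: >= {min_round_trip_ms(km):.1f} ms per acknowledged write")
```

At 50 km the penalty is negligible; at 2,000 km every write waits at least 20 ms, which is why companies wanting truly distant sites typically switch to asynchronous replication and accept that the remote copy lags slightly behind.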
Coming: More Bad Stuff
Experts believe that increased bandwidth and better compression technology will ease the problem. Already, Connected can shoehorn the information from 15,000 PCs onto one NT server, a fairly remarkable achievement. But supply is barely keeping up with demand. The fact is, companies are producing prodigious amounts of data these days, a trend that shows no sign of abating. “Ten years ago, people were running businesses off what you can get in a laptop today,” says Omgeo’s Foster. “Now we’ve got terabytes of data.”