Modeling Disaster

Catastrophe modeling is a growth business, propelled by advances in IT and a truly disastrous first quarter of 2011.

“It’s surprising anyone can get any sleep,” says Bill Keogh, president of Eqecat, a global catastrophe risk modeling provider and consultancy. “There’s a lot out there that we don’t know about.”

Keogh is referring to the March 11 Tohoku earthquake and tsunami that devastated Japan. Aside from its magnitude, the catastrophe was notable for its “gray swan”-like quality. Scientists knew a quake in that area was likely, explains Miaki Ishii, associate professor of earth and planetary sciences at Harvard University and a member of the Harvard Seismology Group. But they didn’t think the entire Tohoku fault, which was believed to be broken into separate segments, “could move all at once.” Yet it did.

Despite the gap between what was expected in Japan and what actually happened, the availability of previously inconceivable amounts of computing horsepower is enabling insurers to develop catastrophe models that can narrow the divide between what humans can imagine and what nature can do.

“Our [insurance] clients use multiple servers and run billions of calculations, depending on the size of their portfolio,” says Keogh. “Imagine you have a million records in a portfolio and you simulate 10,000 possible events. That’s 10 billion possibilities. And for each policy and location you have to estimate the loss to the deductible and/or the loss to reinsurance. That’s a lot of number-crunching.”
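Here is a minimal sketch, in Python, of the per-policy, per-event calculation Keogh describes. The figures and the net_loss function are hypothetical illustrations of the general idea, not Eqecat's actual financial model:

```python
# Illustrative only: netting one ground-up loss through a deductible and a
# simple per-risk reinsurance layer. Figures and names are hypothetical.

def net_loss(ground_up, deductible, reins_attachment, reins_limit):
    """Return (retained_by_insurer, ceded_to_reinsurer) for one loss."""
    gross = max(ground_up - deductible, 0.0)            # loss after deductible
    ceded = min(max(gross - reins_attachment, 0.0), reins_limit)
    return gross - ceded, ceded

# One policy, one simulated event:
retained, ceded = net_loss(ground_up=250_000, deductible=10_000,
                           reins_attachment=100_000, reins_limit=500_000)

# The scale Keogh cites: a million exposure records times 10,000 simulated
# events is 10 billion policy-event calculations before any netting.
print(1_000_000 * 10_000)   # 10,000,000,000
```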

Before insurers set their computers crunching, they license a catastrophe model from a provider such as Eqecat, Risk Management Solutions (RMS), or AIR Worldwide (the Big Three of global catastrophe modelers). Insurers feed the model every property they insure in a given geography, along with details pertaining to potential loss of life, business loss, and so on. The model then simulates what could happen to all of those policies across thousands of disaster scenarios.
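In code, that workflow has a simple shape: every exposure record is paired with every simulated event, a vulnerability function turns event intensity into a damage estimate, and losses are aggregated per event. The sketch below is a hypothetical, heavily simplified illustration in Python; the classes and the toy damage function stand in for the proprietary engineering models the Big Three actually license.

```python
# Hypothetical sketch of a catastrophe-model event loop. The structure is
# generic; the damage function is a toy stand-in for real vulnerability curves.
from dataclasses import dataclass
import random

@dataclass
class Exposure:
    location_id: str
    insured_value: float      # building, contents, business loss, etc.
    construction: str         # e.g., "wood frame", "reinforced concrete"

@dataclass
class Event:
    event_id: int
    intensity: float          # e.g., peak ground acceleration or flood depth

def damage_ratio(exposure: Exposure, event: Event) -> float:
    """Toy vulnerability curve: fraction of insured value lost.
    Real models use engineering-based curves per construction type."""
    base = 0.1 * event.intensity
    return min(1.0, max(0.0, base + random.gauss(0.0, 0.02)))

def event_losses(portfolio: list[Exposure], events: list[Event]) -> dict[int, float]:
    """Aggregate ground-up loss per simulated event across the whole portfolio."""
    return {
        ev.event_id: sum(damage_ratio(x, ev) * x.insured_value for x in portfolio)
        for ev in events
    }
```

In a real model, per-event loss tables like this feed the probability curves insurers use to price policies and decide how much reinsurance to buy.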

Constructing these simulations requires an enormous amount of computational juice. For example, AIR is currently modeling 10,000 years of weather to build a probabilistic model for flooding in Germany. “At best, we have 100 years of historical data,” says Jayanta Guin, AIR’s senior vice president of research and modeling. “That’s not enough.” So instead of simply building a model from those 100 years of data, AIR ran 10,000 possible permutations of that historical data to produce what Guin calls “the full universe of possible outcomes.” To do that, AIR ran its program nonstop for six months on its own computing grid; a similar simulation for the United States would take an estimated 18 months of run time, Guin says. (Not surprisingly, given those resource-intensive run times, the company is exploring ways to leverage the computing power of independent clouds.)
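In spirit, expanding roughly 100 years of observations into 10,000 simulated years is a Monte Carlo exercise: fit a statistical description to the historical record, then sample from it until the tail of the distribution is well populated, including events worse than anything yet observed. The sketch below illustrates that idea with made-up annual peak river flows; it is an assumption-laden toy, not AIR's flood methodology.

```python
# Illustrative only: fit a simple distribution to ~100 years of data, then
# simulate 10,000 years from it so the tail extends beyond the observed record.
# The data are synthetic and the method is not AIR's.
import math
import random

random.seed(42)

# Stand-in for ~100 observed annual peak river flows (m^3/s).
historical_peaks = [random.lognormvariate(6.0, 0.5) for _ in range(100)]

# Fit a lognormal by estimating the mean and std deviation of log-flows.
logs = [math.log(x) for x in historical_peaks]
mu = sum(logs) / len(logs)
sigma = (sum((v - mu) ** 2 for v in logs) / (len(logs) - 1)) ** 0.5

# Simulate 10,000 years from the fitted distribution.
simulated = sorted(random.lognormvariate(mu, sigma) for _ in range(10_000))

# 100-year return level: the flow exceeded in roughly 1% of simulated years.
print(f"Estimated 100-year peak flow: {simulated[int(0.99 * len(simulated))]:.0f} m^3/s")
print("Tail exceeds observed maximum:", simulated[-1] > max(historical_peaks))
```

A real flood model simulates physical weather systems rather than a single fitted curve, but the payoff is the same: a catalog of plausible years far larger than history alone provides.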

The catastrophe modeling industry is about 25 years old, says Guin, and back then “a gigabyte [of data storage] was unthinkable. Now we’re talking terabytes. This allows us to analyze risk in much more detail. Back in the ’90s, we analyzed risk at a county level. Now every single house is quoted individually.”
