Sarasota Herald-Tribune reporter Paige St. John is questioning the clothing choices of cat modeling emperor RMS in a well-sourced series that makes a lot of good points if you can get past all of the insurance bashing. However, the case she proves is not the one I think she wants to make.
She accuses the leading cat modeling firm, RMS, of basically gaming its own black box, the computer model that projects losses from major storms, most famously hurricanes. Insurers use models like those of RMS to project how much it should cost to insure their portfolio of risks. Since RMS is the No. 1 cat modeling firm, any bias in its models will skew homeowners’ rates in cat-prone states like Florida.
She also notes that the models suffer from GIGO syndrome: garbage in, garbage out. The data fed into them can be notoriously inaccurate, so the estimates they spit out can be unreliable. Most famously, the Mississippi casinos washed away by Hurricane Katrina were basically floating barges but were geocoded as traditional anchored buildings. When Katrina struck, they tossed about like mobile homes, creating losses far greater than any model had anticipated.
She also accuses the insurers and reinsurers of playing along with the cat modeling game, as it conveniently led to higher rates.
The reporter’s crusading tone implies that the system was rigged to overcharge Floridians, though she is reluctant to say so outright, since the facts don’t inexorably lead there.
For example, she excoriates market leader RMS’s practices on one hand, yet on the other is highly critical of cat models that don’t require data at the street level. (The more detailed the data, the more accurate the prediction.) However, RMS is the market leader precisely because its models require the most detailed profile of a risk in order to estimate losses. Its rivals lag on precision.
And she conveniently ignores a timeline that any good investigative journalist assembles.
In her telling, RMS, seeing opportunity in the wake of Katrina, throws together for four hours some cherry-picked hurricane experts to conclude that storm frequency and severity are rising. The panel gives RMS the oomph to rejigger its models to predict a bevy of intense hurricanes between 2006 and 2010.
The industry, true, had a hurricane problem. But it wasn’t just Katrina.
In 2004, it was Hurricane Charley ($8.5 billion in 2008 dollars). And Frances ($5.2 billion). And Ivan ($8.1 billion). And Jeanne ($4.2 billion).
In 2005, yes, it was Katrina, with $45 billion in losses. But it was also Rita ($6.2 billion). And Wilma ($11.4 billion). And Wilma was the most intense Atlantic hurricane on record – worse than Katrina, worse than Andrew, worse than Camille, way back in 1969.
And it was Hurricane Beta, because there were so many storms in 2005 that we ran out of alphabet. We had storms named Alpha, Beta, Gamma, Delta, Epsilon and Zeta.
St. John asserts that, with satellites and more air travel, we are much better at seeing storms, so we find ones that would have gone unnoticed in, say, 1933. That is a reasonable point, though I respectfully disagree.
However, to assert, as she does, that State Farm and Allstate in 2006 filings relied solely on RMS to project increased hurricane frequency is to ignore unprecedented activity in the two years preceding. Seven of the 10 most expensive hurricanes ever occurred in a 15-month period. If you were State Farm, wouldn’t you see a trend there?
You can’t write about the history of cat modeling and ignore the hurricanes of 2004 and 2005.
I can’t be charitable to the reporter and blame it on oversight. A reporter in Florida knows hurricanes; it’s part of the job. I can say this definitively because I was a reporter in Florida (in the late ’80s), and even I knew about Hurricane Donna, which had occurred a quarter-century earlier.
Nevertheless, I agree that hurricane models have been biased through the years. They have been too low. I think I’ve written before about the study circa 1990 that predicted a Category 5 hurricane could blast Miami, Tampa and the Eastern Seaboard and cause a paltry $7 billion in damages. (Today a repeat of the 1926 Miami hurricane would itself cause $100 billion in damages, according to AIR, an RMS rival.)
The models are in their infancy, so they miss stuff. Before Hugo, the models didn’t realize that old trees could crash into houses. And before Andrew they didn’t know that price gouging would increase repair costs. And before Katrina, they didn’t realize that flooding was a covered peril for commercial risks.
In 2004 and 2005, the insurers that relied on cat models to set their loss estimates were red-faced a quarter later, when they had to revise those estimates upward. Then, as now, cat models were not used to estimate losses after a storm had hit. That is damning to the quality of the models, but it also shows the downward bias the models have had.
So if there’s a hurricane, here is how the estimate happens: Insurers talk to their claims people. Reinsurers talk to their brokers – who are talking to the insurers.
So, the models aren’t perfect, I agree. I also agree that insurers shouldn’t over-rely on one cat model, any more than actuaries should over-rely on one reserving method. The humble TV weatherman forecasts based on three computer models, and he has tens of millions of dollars less at stake than the typical p/c insurance company.
But the reporter, churning out a few thousand words on the topic, seems not to have found a better way to forecast the frequency and severity of hurricanes. Models weren’t adopted because insurers wanted to toss a couple million at a bunch of California gearheads.
Models became popular after Hurricane Andrew demolished the traditional way hurricane losses were baked into a rate – the ISO excess windstorm method. That method failed to look back more than five to 10 years and failed to take into account growing coastal exposures, among other shortcomings.
Ultimately, though, the stories emphasize what the insurance industry already knows. It is taking on a tremendous amount of risk in cat-prone areas like Florida. And if the models are flawed, it may be taking on more risk than it knows. But this lack of knowledge actually argues for higher rates, not lower ones, as the reporter seems to suggest. The riskier the coverage, the higher the rate.
The models aren’t perfect. Like any human process, they are better than what preceded them but not as good as what will follow.