Guy Carpenter recently had a nice briefing on RMS’s movement to version 11 of its North America hurricane model. RMS is the market leader in cat modeling and its changes will influence what reinsurers charge for property covers. That, in turn, will affect the consumer price for homeowners and commercial property insurance.
According to the conventional wisdom – well, RMS’s own wisdom – the new model should decrease losses close to shore while increasing them inland – a bit of a reaction to Hurricane Ike in 2008. Cat costs in Texas and the Carolinas were projected to rise while Florida would catch a break.
In a bit of death by meta-ing, the reality of what the models actually revealed often differed from what RMS said the models would reveal. A company’s Texas portfolio might fail to show the predicted increase while its Florida portfolio would see model losses increase. GC’s brief basically says that RMS projected a broad result and your mileage may vary.
As evidence, GC publishes two graphics that compare the two leading models – RMS’s latest and AIR Classic – in estimating losses in Florida’s portfolio. (GC got the data from the Florida Commission on Hurricane Loss Projection Methodology.)
First up: A table that compares the estimated losses at various return periods:
The left-hand column is the return period. So the ‘5’ on the bottom row gives the prediction for the scale of losses to be expected, on average, one year in every five – $3.528 billion for AIR and $3.903 billion for RMS. The range of AIR’s estimate is $3.399 billion to $3.641 billion, and the range of RMS’s estimate is $367 million to $10.601 billion.
(Incidentally, though it’s not crystal clear, I read that range column as a 95% confidence interval – as if to say, “When we ran our model, 95% of the time, the average 5-year storm fell between x and y.”)
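The return-period convention above maps directly to an annual exceedance probability – a minimal illustration (the function name is mine, not from the models):

```python
# A T-year return period means the loss level is expected to be met or
# exceeded, on average, once every T years - i.e., with annual
# probability 1/T.
def exceedance_probability(return_period_years: float) -> float:
    return 1.0 / return_period_years

print(exceedance_probability(5))    # 0.2  -> a 20% chance in any given year
print(exceedance_probability(100))  # 0.01 -> a 1% chance in any given year
```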
The estimates track each other pretty well, within 15% of each other. That’s reassuring in a sense, though others have pointed out that the two companies rely heavily on the same databases – places like NOAA for meteorology and ISO for insured loss estimates – so the estimates should track each other.
For smaller storms (5-year up to 50-year), RMS’s estimates run about 10% to 15% higher. For larger storms, AIR’s estimate is higher. In all cases, RMS has a broader range. To me, the RMS range feels too broad for the lower return periods. And the AIR range seems way too narrow for the higher return periods.
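As a quick sanity check on that gap, the 5-year figures quoted from the table can be compared directly (an illustrative calculation using only the numbers above):

```python
# The 5-year return-period estimates quoted above, in $ billions.
air_5yr = 3.528
rms_5yr = 3.903

# Ratio of RMS to AIR at the 5-year return period.
ratio = rms_5yr / air_5yr
print(f"RMS runs {ratio - 1:.1%} above AIR at the 5-year return period")
# About 10.6% - consistent with the 10%-to-15% spread noted for smaller storms.
```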
Next, Guy Carpenter published a map comparing AIR and RMS losses by zip code within Florida:
The more brown a region is, the higher RMS’s estimate is relative to AIR. The more green, the lower RMS’s estimate. So AIR projects higher losses for Miami, in the southeast and the Panhandle, to the northwest. RMS projects much higher through the middle of the state – Fort Myers (Cape Coral on the map), Tampa, Orlando.
In fact, AIR is more than twice as high as RMS near Miami, and RMS is more than twice as high near Orlando. That’s a big difference if you are modeling portfolios concentrated in either of those areas.
So the models generally agree on the size of losses, given a storm. But they disagree – quite a bit – about how those losses would be distributed.
Left unsaid No. 1: RMS and AIR differ quite a bit regarding Florida. And there’s lots of hurricane data for Florida. The models would probably diverge even more in a state like North Carolina, which hasn’t been hit as often, or as hard.
Left unsaid No. 2: Assuming the models are correct at the state level, a lot of business is underpriced. If RMS’s model were perfect (unlikely), then AIR is underpricing risks in the middle of the state. If AIR’s model were perfect (equally unlikely), then RMS is underpricing Miami and the Panhandle.
I emphasize underpricing because in a competitive insurance market, customers flock to the lowest price. For large chunks of the market, that price will be too low.
With cat modeling really in its infancy – it’s only been seriously used for about 20 years – we shouldn’t be surprised that models differ so much. Ideally, companies would run data through multiple models and use their knowledge of each model’s quirks to underwrite the portfolio – the way actuaries run data through several loss projection methods, then select an ultimate.
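The actuarial analogy can be sketched in a few lines – a hypothetical illustration, with made-up figures and judgment weights standing in for the underwriter’s knowledge of each model’s quirks:

```python
# Hypothetical loss estimates from two cat models, in $ billions.
model_estimates = {"model_a": 3.528, "model_b": 3.903}

# Judgment-based credibility weights - the underwriter's view of each
# model's quirks for this portfolio. They must sum to 1.
weights = {"model_a": 0.6, "model_b": 0.4}

# Weighted blend, the way actuaries select an ultimate from several
# loss projection methods rather than trusting any single one.
blended = sum(model_estimates[m] * weights[m] for m in model_estimates)
print(f"blended estimate: ${blended:.3f}B")
```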
Of course, modeling everything twice (or three times or four) is expensive. You have to pay for two models, double the size of your cat team, perform twice as many data scrubs. But Carpenter’s analysis implies that it’s worth it.