Over at Guy Carpenter land, actuary Jessica Leong posits that the bootstrapping method of estimating reserve variability understates that variability.
Bootstrapping, of course, is a method of using the company’s own data to drive a random sampling process that re-estimates the reserve. Sample enough times, keep track of the results, and you’ve got a distribution of possible reserves. (Better descriptions are here.)
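To make the idea concrete, here is a minimal sketch of one common flavor of bootstrap: resample the observed age-to-age development factors and re-project the reserve many times. This is an illustration only, not Leong's actual model; the triangle numbers and the function names are made up for the example.

```python
import random

# Hypothetical cumulative paid triangle (rows = accident years;
# shorter rows are less mature). All figures are invented.
triangle = [
    [100, 150, 165, 170],
    [110, 160, 178],
    [120, 175],
    [130],
]

def age_to_age_factors(tri):
    """Observed development factors at each age, pooled across accident years."""
    factors = []
    for age in range(len(tri[0]) - 1):
        obs = [row[age + 1] / row[age] for row in tri if len(row) > age + 1]
        factors.append(obs)
    return factors

def bootstrap_reserves(tri, n_sims=1000, seed=42):
    """Project each open accident year to ultimate using factors drawn
    (with replacement) from the observed ones; collect the total reserve
    from each simulation to build a distribution."""
    rng = random.Random(seed)
    factors = age_to_age_factors(tri)
    reserves = []
    for _ in range(n_sims):
        total = 0.0
        for row in tri:
            paid = row[-1]
            ult = paid
            for age in range(len(row) - 1, len(tri[0]) - 1):
                ult *= rng.choice(factors[age])
            total += ult - paid
        reserves.append(total)
    return sorted(reserves)

dist = bootstrap_reserves(triangle)
# dist[int(0.9 * len(dist))] would then be the simulated 90th-percentile reserve
```

Real-world implementations (e.g. the over-dispersed Poisson bootstrap) resample residuals rather than factors and add process variance, but the output is the same kind of object: a simulated distribution of the reserve.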
But the method doesn’t seem to work as advertised.
Leong uses company Schedule P homeowners data. Schedule P gives a triangle of paid losses and also ultimate losses. Homeowners is short-tailed, yet Schedule P still shows 10 years of development, so the result 10 years out is an excellent estimate of ultimate.
So you have an a priori distribution of losses. And you know, 10 years out, what the actual losses turned out to be. So you can pinpoint where in your original distribution the actual losses came in. In her post, Leong walks through an example, which happens to come in at the 91st percentile.
If you did this across a bunch of companies, across a bunch of years, the actual losses should land uniformly across the bootstrapped percentiles, at least if bootstrapping works as advertised. So 10% of outcomes would fall below the 10th percentile, another 10% would fall above the 90th percentile, and so on.
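That backtest is simple to sketch: score each actual outcome as a percentile of its bootstrapped distribution, then bucket the percentiles into deciles and see whether each decile holds roughly 10% of the company-years. A minimal version, with hypothetical function names and idealized inputs:

```python
def percentile_of(actual, simulated):
    """Fraction of simulated outcomes at or below the actual result."""
    sims = sorted(simulated)
    return sum(1 for s in sims if s <= actual) / len(sims)

def decile_counts(pairs):
    """pairs: one (simulated distribution, actual outcome) per company-year.
    Returns how many outcomes fell in each decile of their own distribution;
    a calibrated model would spread them roughly 10% per decile."""
    counts = [0] * 10
    for simulated, actual in pairs:
        p = percentile_of(actual, simulated)
        counts[min(int(p * 10), 9)] += 1
    return counts

# Idealized example: actuals spaced evenly through a uniform 0-99 distribution,
# so the deciles come out exactly flat.
counts = decile_counts([(list(range(100)), a) for a in range(0, 100, 10)])
```

Leong's finding, in these terms, is that the real data piles up in the bottom and top deciles instead of spreading flat.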
The actual results are here:
The chart shows that
> . . . around 20 percent of the time, the actual reserve is above the 90th percentile of the bootstrapped distribution, and 30 percent of the time the actual reserve is below the 10th percentile of the distribution.
>
> When you tell management the 90th percentile of your reserves, this is a number they expect to be above 10 percent of the time. In reality, we find that companies have exceeded this number 20 percent of the time. The bootstrap model is under-estimating the probability of extreme reserve movements, by a factor that is clearly material for the purposes of capital modelling and therefore Solvency II.
Hers is the first post in a series to play out over the coming days. In the meantime, I’m a bit curious:
- Do the outliers have much in common with each other? Are they smaller companies or larger ones?
- The study works from net data. Would you get the same results gross of reinsurance? Schedule P triangles are net of reinsurance, of course, but you can construct a gross triangle from successive annual statements. Net triangles could be skewed by the presence of reinsurance, especially excess of loss or catastrophe cover.
- Does the data set include reinsurers? Reinsurers don’t always receive – and rarely record – losses by accident year. Usually everything is recorded on an underwriting year. So the accident year loss payments and ultimates in Schedule P homeowners are estimates arrived at by allocating underwriting results across accident years. And, believe me, the allocation can be shot full of holes.
- Do other lines of business exhibit the same phenomenon? Homeowners is short-tailed, but the presence of catastrophes skews development and results. Would you see the same pattern in, say, private passenger auto?