
Your best swim team

Brian Burke, the guy who does NFL analytics for The New York Times, discusses how he uses an optimization routine based on a genetic algorithm to seed swimmers in a dual meet. He does a nice job of breaking down the complexity, but it's still intricate reading. It's something you want to enjoy in its entirety or ignore completely.


Nanotechnology and insurance

A piece I worked on from the CAS Spring Meeting:

Casualty actuaries can help insurers meet the demand for new products by identifying, analyzing and pricing new risks, according to two speakers at the spring meeting of the Casualty Actuarial Society.

Since the mid-1980s, casualty insurance has shrunk as a share of gross domestic product. Since then, insurers have been able to push liabilities out of the insurance space, said Parr Schoolman, a fellow of the Casualty Actuarial Society and a senior managing director at Aon Benfield. For example, the multi-billion-dollar BP oil spill has been primarily borne by the oil industry.

While that has helped short-term profits, it hurts long-term growth, he said. Many of the 40 largest insurers in the 1980s have disappeared in a bevy of mergers and a few financial flameouts.

To survive, he said, insurers need to learn how to manage new risks, not just exclude them. Actuaries can help, he said, by encouraging companies to use enterprise risk management techniques.

PropertyCasualty360 rewrote the article and added some original reporting. Bob Hartwig and I wrote about nanotech a couple of years ago here (see page 36ff).

It’s a hit!

I helped develop Insurance Journal’s No. 2 best-read article for the CAS.

The entire release is here.

ERM works. JPMorgan trades prove it.

Well, that’s not the obvious takeaway from today’s story about the London Whale trades:

JPMorgan Chase, the nation’s biggest bank, ignored internal controls and manipulated documents as it racked up trading losses last year, while its influential chief executive, Jamie Dimon, briefly withheld some information from regulators, a new Senate report says.

But when you dig into the New York Times story, you quickly learn that the ERM process worked quite well:

As the traders in London assembled increasingly complex bets, JPMorgan ignored its own risk alarms, according to investigators. In the first four months of 2012 alone, the report found, the chief investment office breached five of its critical risk controls more than 330 times.

The Senate report that triggered the Times story mucks through the grubby detail:

The [Chief Investment Office] used five metrics and limits to gauge and control the risks associated with its trading activities, including the Value-at-Risk (VaR) limit, Credit Spread Widening 01 (CS01) limit, Credit Spread Widening 10% (CSW10%) limit, stress loss limits, and stop loss advisories. During the first three months of 2012, as the CIO traders added billions of dollars in complex credit derivatives to the Synthetic Credit Portfolio [aka the SCP], the SCP trades breached the limits on all five of the risk metrics. In fact, from January 1 through April 30, 2012, CIO risk limits and advisories were breached more than 330 times.

The system was well-designed, and it was setting off the proper alarms.

But alarms only make noise. And JPMorgan did what any groggy soul does after a bender. It ignored the alarm:

The SCP’s many breaches were routinely reported to JPMorgan Chase and CIO management, risk personnel, and traders. The breaches did not, however, spark an in-depth review of the SCP or require immediate remedial actions to lower risk. Instead, the breaches were largely ignored or ended by raising the relevant risk limit.

Actually, that’s not fair. Morgan management didn’t ignore the alarms. They just built a new clock – here, a new VaR model. Rather famously, the new model, built in haste, sucked.

The Senate report offers a better metaphor, perhaps, via Achilles Macris in JPMorgan’s London office. He likened the synthetic credit portfolio, in all its complexity, to flying an airplane. That would make the ERM reports the flight instruments, the Senate report noted. (Guess that’s why it’s called an ERM dashboard, duh!) Still, you can just picture the dials spinning wildly while the flight crew insists the plane is on course.

So the risk management system in place – the models, the reports – all worked just fine. All they can do is set off alarms, and they set off a lot of them, loud ones.

But no one was listening.




I’ll be at the CAS Ratemaking and Product Management seminar in Huntington Beach, Calif., next week.

I have to parachute in to cover a couple of talks, then leave the same day, so not much schmoozing for me.

I also downloaded the Android app for the meeting. There is an iTunes app as well. Both let you see the entire schedule and select the sessions you prefer.

It’s a really handy item. I used something similar at the Annual Meeting last fall.

Here is how you can download a copy.

Something new

Glance around and you’ll notice some changes to the blog.

I changed the Pages section to include some stuff I’ve written over the past couple of years. Since research and writing is a big part of my job these days, these are advertisements. I also removed some old pages.

I eliminated the word clouds of Categories and Tags. I haven’t been categorizing and tagging posts for a while, so these weren’t useful.

I’ve added links to recent posts and to monthly archives. I’ve also added “Top Posts & Pages,” which WordPress calculates using an algorithm unknown to me. For example, a top post today is a link to news headlines a year ago. Somebody’s spam generator got stuck there, I suppose.

I also plan to overhaul the appearance, so be not surprised if you click over here and see a radically different design.

Just trying to keep things fresh.


GuyCarp on Reserve Development

Actuary Jessica Leong at GCCapitalIdeas says the redundancy well is about dry:

The cycle turned on an accident year basis in 2004. Industry-wide accident year deterioration now appears imminent.


Guy Carpenter’s expectation is that the U.S. P&C industry will continue to release reserves, but that 2012 reserve releases will be less than the 2011 release. Although we expect accident years from 2011 will show reserve deterioration, we believe accident years 2010 and prior will continue to release reserves, and this should offset any deterioration in financial years 2012 and 2013. The industry may therefore only start to see deteriorating reserves in 2014 financials or beyond.

Sounds like the industry-wide prediction is:

  • AY2010 and prior will continue to release reserves.
  • AY2011 estimates of ultimate will rise.
  • AY2012 and AY2013 will be booked too low.
  • Last calendar year and this (2012 and 2013) will show favorable reserve development. By CY2014, though, the deterioration in AY2011 and subsequent will be greater than the takedowns in AY2010 and prior.

Industry results will be available via SNL Financial in a couple of weeks, but through third quarter 2012, we can say:

  • Overall reserve development has been favorable by about $8.5B.
  • AY2011 reserves have been reduced by $6.1B.

For CY2011, BTW, reserve development was a favorable $12.1B.

Here’s a chart showing how favorable development has broken by accident year:


How RMS came up with its estimate for Sandy

Hemant Shah of RMS discusses how his company came up with its estimate ($20B to $25B) for superstorm Sandy, which has held up well as industry losses have taken shape. Takeaways:

  • Sandy was “not a textbook hurricane: [It had] an unusual track; a large, diffuse and transitioning windfield; and a catastrophic storm surge. . .”
  • The storm’s winds were below hurricane force at landfall, but the storm surge was equivalent to a Category 2 storm’s. Shah credits an overhaul of RMS’s storm-surge model in v11.
  • Lots of on-site research to get an idea of how well reality synced with the model.
  • Increased precision: “. . .we dynamically modeled the surge street-by-street, distinguishing the flood risk by property based on the elevation and proximity of each building to the water’s edge. Complex interrelationships between extensive power outages, disruption from flooding, widespread coastal property damage and the closure of transportation systems provided additional insights, as did other factors driving potential post-event loss amplification.”

Paying the chargemaster

The blogosphere is all over Steven Brill’s long, long (>20K words) article about medical bills. In reality, the story is pretty short, but it says something profound about the role of a health insurer.

If you have health insurance you are familiar at least with the outlines of a claim:

  1. Your medical provider bills you for a ridiculously large amount, like $240 for a single session of physical therapy.
  2. Because you have insurance, the provider agrees to settle for less, say $88.
  3. The insurance company pays part of the amount in Step 2 above, say $26.40.
  4. You are responsible for the remainder. Here that would be $61.60.
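The arithmetic of the four steps above can be sketched in a few lines. (A toy walk-through using the figures from the example, not real billing logic.)

```python
# Step 1: the chargemaster price the provider bills
billed = 240.00
# Step 2: the lower amount the provider settles for, because you have insurance
negotiated = 88.00
# Step 3: the insurer's share of the negotiated amount
insurer_pays = 26.40
# Step 4: you owe whatever is left of the negotiated amount
patient_pays = negotiated - insurer_pays

print(f"patient owes: ${patient_pays:.2f}")            # patient owes: $61.60
print(f"negotiated discount: ${billed - negotiated:.2f}")  # negotiated discount: $152.00
```

Note that the discount in Step 2 is already worth more than what the insurer actually pays in Step 3.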

Brill focuses on price No. 1, which he calls the chargemaster. If you have insurance, the chargemaster is meaningless. Recognizing how absurdly high it is, I call it the Fairy Unicorn Price – FU Price for short.

But if you don’t have insurance, that’s the price you pay.

Much of Brill’s article is example after example of the absurdity of the chargemaster. One woman thought she had a heart attack, but it was a false alarm. She was billed $21,000. Another woman fell down in her backyard and was billed $9,400. And he’s unhappy, of course, that the uninsured – presumably the least able to pay – get billed the most.

I have to agree. We had a close brush with the chargemaster.

My wife broke her elbow last Aug. 4. A couple days earlier (Aug. 1) our insurer changed Liz’s policy number. Unfortunately, the insurer waited till Aug. 8 to tell us what they had done.

That meant Liz’s ER bills were all rejected – a pretty big deal because she ended up spending a night in the hospital after surgery. We were able to clear up the billing snafu after a couple of months. But it looked, however briefly, like we would have to pay the chargemaster.

And it meant I kept very, very close tabs on medical expenses. Last week, we paid the final bill. (I think.) All up, the chargemaster hit us for $72,346.66. We paid $4,529.14. Insurance paid $3,239.88. The rest was the insurer-negotiated discount. The chart tells the story.


This chart implies that the main value of a health insurer is as a price negotiator. In our case, the insurer’s indemnification was worth $3,200. Its negotiating power was worth almost 20 times as much, winning us a discount of almost $65,000.
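The "almost 20 times" claim follows directly from the figures in the post:

```python
# Figures from our broken-elbow bills, as reported above.
chargemaster = 72_346.66   # total billed at the chargemaster price
we_paid = 4_529.14         # our out-of-pocket payments
insurer_paid = 3_239.88    # the insurer's indemnification

# The rest of the chargemaster bill was the insurer-negotiated discount.
discount = chargemaster - we_paid - insurer_paid
ratio = discount / insurer_paid

print(f"negotiated discount: ${discount:,.2f}")  # ~ $64,577.64
print(f"discount vs. indemnification: {ratio:.1f}x")  # ~ 19.9x
```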

Brill, I think, would disagree. And he argues that insurance companies like the chargemaster “because they can then make their customers feel good when they get an Explanation of Benefits that shows the terrific discounts their insurance company won for them.”

He makes the entirely valid point that the chargemaster price is absurdly high. Hospital reps he talked to agree.¹

Heck, I agree.  Who could argue? That’s why I call it the FU Price. No way a broken elbow should cost $70,000.  But had we lacked insurance, that’s the bill we’d be negotiating.

And it’s hard to negotiate down from the chargemaster. Brill talks to medical-billing advocates – people who bargain against the chargemaster for a living. Rarely do they get a price as low as an insurer’s. That $9,418 slip-and-fall chargemaster, for example, only got reduced to about $8,900.

So these days, a health insurer’s ability to negotiate is more important than its willingness to pay claims.


¹ The hospital reps said the chargemaster was just an opening bid toward negotiating a final payment. Of course, it’s an odd negotiation wherein the patient is at a disadvantage. He is forced to make a counteroffer after he has signed a contract to pay what he has been billed.


The spreadsheet did it.

This goes back to the London Whale’s multibillion-dollar botch at JPMorgan last year. Recall that Morgan had just implemented a new Value at Risk model. Using it, the bank underestimated VaR, which helped trigger the debacle.

So what happened? Excel happened:

The new model “operated through a series of Excel spreadsheets, which had to be completed manually, by a process of copying and pasting data from one spreadsheet to another.” The internal Model Review Group identified this problem as well as a few others, but approved the model, while saying that it should be automated and another significant flaw should be fixed. After the London Whale trade blew up, the Model Review Group discovered that the model had not been automated and found several other errors. Most spectacularly,

“After subtracting the old rate from the new rate, the spreadsheet divided by their sum instead of their average, as the modeler had intended. This error likely had the effect of muting volatility by a factor of two and of lowering the VaR . . .”
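The quoted error is easy to see with illustrative numbers (hypothetical rates, not JPMorgan's actual data): since the sum of two numbers is exactly twice their average, dividing by the sum instead of the average returns exactly half the intended value.

```python
# Hypothetical old and new rates (in percent), chosen for illustration only.
old_rate = 4.0
new_rate = 6.0

# What the modeler intended: divide the change by the AVERAGE of the rates.
intended = (new_rate - old_rate) / ((new_rate + old_rate) / 2)

# What the spreadsheet actually did: divide by the SUM of the rates.
buggy = (new_rate - old_rate) / (new_rate + old_rate)

print(intended)  # 0.4
print(buggy)     # 0.2 -- exactly half, muting measured volatility by a factor of two
```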

The Baseline Scenario blames Excel’s shortcomings, rightly noting . . .

. . . while Excel the program is reasonably robust, the spreadsheets that people create with Excel are incredibly fragile. There is no way to trace where your data come from, there’s no audit trail (so you can overtype numbers and not know it), and there’s no easy way to test spreadsheets, for starters. The biggest problem is that anyone can create Excel spreadsheets—badly. Because it’s so easy to use, the creation of even important spreadsheets is not restricted to people who understand programming and do it in a methodical, well-documented way.

True as far as it goes. But it also shows an epic ERM failure. If you are going to pin a company’s future on a model, said model should be thoroughly tested. A model that doubles or halves the old model’s calculations should be presumed to be incorrect, then checked, double-checked and triple-checked to make sure that it’s calculating correctly. There should also be some theoretical and heuristic support for why the old model’s calculation was inferior and why the new model is superior.

Further, a model that critical to a company that large just has to be more robust. It should use code to extract and transfer data from sheet to sheet. It should be engineered to minimize manual inputs, and all of those inputs should take place on one screen. These things should be in place before you change models, not patched in afterward.

And responsibility for all that should go to the highest level at the company, here CEO Jamie Dimon.

I’ve seen companies one-sminteenth the size of JPMorgan do all of these things with Excel. Hard to think the big bank was so rigorous when a formula dividing by a sum instead of an average halves a result and nobody notices.

All of which helps one better understand what’s behind the Agnes Rule: If banks sold anything but money, they’d go broke.