School testing: An actuarial analysis, Part 1

A lot of the debate over the quality of U.S. schools turns on the Nation’s Report Card. This semi-regular report has been analyzing test scores among 9-, 13- and 17-year-olds for more than 30 years, and one of its most troubling findings is the achievement gap between black students and white students.

I’ve taken the underlying data and analyzed it the way most actuaries would, and I conclude:

  • The gap between white and black students has been declining and will continue to decline.
  • Whatever gap exists is in place by the age of 9 and stubbornly resists change thereafter.

First, the problem:

The standard analysis

This graph, taken from the Nation’s Report Card, shows the difference between black 9-year-olds and white 9-year-olds on a standardized reading test. The years along the bottom are the years the test was administered. I emphasize that because the horizontal axis is going to become very important in my analysis.

The shaded area shows the gap between black students and white students. It declined between 1971 and 1980, persisted at about 35 points between 1980 and 2004 (a generation!), then began shrinking again in 2004. Good news! The gap is shrinking.

Not so fast. Here are results at age 17:

The improvement disappears. Or does it?

As with the 9-year-olds, the gap closes rapidly between 1971 and 1984, then persists. Unlike the gap at age 9, though, it doesn’t get any better with the 2004 and 2008 tests.

The conclusion: Whatever improvements take place early on disappear as the kids get older. All the efforts expended early – Head Start, pre-K – seem like a waste, if what they accomplish washes away as those kids progress through the system.

Math scores behave the same way, incidentally.

Blogger Kevin Drum typifies the disappointment:

The gains among 9-year-olds are genuinely extraordinary. … But half the improvement washed out in just the next four years. And in the four years after that the rest of it washed out.

But that’s not what is happening. If you look at the data a different way, the way most actuaries would, you reach a much more hopeful conclusion.

First, look back at the two charts. In 2008, the gap among 9-year-olds shrank, but it did not shrink among 17-year-olds. However, that’s not evidence that the progress made among 9-year-olds disappears by age 17, because the 9-year-olds who took the 2008 test aren’t 17 yet.

Think of it this way: Were some innovation put in place today that raised test scores among 9-year-olds, it wouldn’t affect today’s 17-year-olds. The scores of 17-year-olds wouldn’t change for another eight years, the time it takes today’s 9-year-olds to turn 17. So a narrowing of the gap among 9-year-olds that began in 2004 wouldn’t start to show up among 17-year-olds until 2012.
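To make that arithmetic concrete, here’s a minimal sketch of the lag. The only assumption is the one in the text: a cohort tested at age 9 sits for the age-17 test eight years later.

```python
# A cohort tested at age 9 won't reach the age-17 test for another eight
# years, so any improvement at age 9 shows up at age 17 with an eight-year lag.
def age_17_test_year(age_9_test_year: int) -> int:
    """Year in which the cohort tested at age 9 turns 17."""
    return age_9_test_year + 8

# The narrowing that appeared among 9-year-olds starting in 2004 shouldn't be
# visible among 17-year-olds before 2012.
print(age_17_test_year(2004))  # 2012
print(age_17_test_year(2008))  # 2016
```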

Now Kevin the blogger is a smart guy, and he senses the phenomenon:

School reforms generally start at the elementary level and work their way up. So maybe we just haven’t had time for the reforms of the past decade to show up among 17-year-olds. …

But I’d be lying if I didn’t say that I’m skeptical. It’s true that there’s some evidence in the data that the gains of the 70s and 80s were staggered: age 9 first, then age 13, then age 17. But only partly. Generally speaking, all three cohorts progressed at the same time. This time around we aren’t seeing it.

Essentially, Kevin is saying it’s hard to read the data as presented. He recognizes that there could be a lag in improving test scores, but he also notes that in the past the gap closed in all three age groups at the same time, so he expects it to happen that way again.

Enter the actuary. To see whether improvement among 9-year-olds persists, we should analyze the data by cohort – looking at how 9-year-olds perform, then seeing how those 9-year-olds do eight years later.
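Here’s a minimal sketch of what that regrouping looks like, using a small pandas table. Every number in it is a placeholder invented for illustration, not a real NAEP result; the actual data and the assumptions behind my cohorts come later in this series.

```python
import pandas as pd

# Hypothetical records, for illustration only: one row per test year and
# age group, with a made-up white-black score gap in each.
scores = pd.DataFrame({
    "test_year": [1996, 2004, 2004, 2012],
    "age":       [9,    17,   9,    17],
    "gap":       [35,   31,   26,   24],
})

# The cohort view: tag each row with the birth year of the kids tested,
# then follow one birth cohort across ages instead of comparing different
# cohorts that happened to be tested in the same year.
scores["birth_year"] = scores["test_year"] - scores["age"]
cohort_view = scores.pivot(index="birth_year", columns="age", values="gap")
print(cohort_view)
# age          9  17
# birth_year
# 1987        35  31
# 1995        26  24
```

Reading across a row tells you what happened to one group of kids as they aged; reading down a column is what the standard charts show.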

This is what actuaries do. Life actuaries look at a cohort of lives – people born in a given year – and project how long a person born in, say, 1963 will live.

Casualty actuaries look at cohorts of claims. They look at estimates for a group of claims that occurred in, say, 2002 and project how much a company will ultimately pay for those claims.

That’s what we’ll do in the rest of this series: make some simple assumptions to create cohorts of kids and follow those kids as they progress.

The next part of this series will describe the assumptions I make to create the cohorts. The final part will show how the cohorts perform and what conclusions we can draw.
