Monday, June 25, 2012

The Mathematica Evaluation of Charter Middle Schools

The most rigorous study yet on the average impact of charter schools came out in June 2010. I remember that study, which just about everybody else in the world seems to have forgotten, because for the past two years it has been the thing I pointed to when I said that I wanted to start blogging.

I now find myself in the unfortunate position of writing my first blog post on this study, having discovered this morning that the press reception at the time was pretty much crickets. The most extensive article on the evaluation was a brief piece from CSM.

Anyway, the authors of the study emphasized that the average effect of the charter schools was zero, but with significant heterogeneity: charter schools that served poor kids in urban areas were far more likely to improve their test scores, while charter schools that served wealthy suburban kids tended to have negative effects on their test scores. The pretty clear interpretation of this is that charter schools serving poor kids are better than charter schools serving rich kids.

The point I've been wanting to make for the last two years (aren't you just waiting with bated breath?) is that the study doesn't show that at all. Your faithful correspondent was one of the half-dozen people lucky enough to be sitting in a webinar in July 2010 to find that out.

A little methodological detail: the study picked 36 charter middle schools from around the country that were over-enrolled. To determine who got to go to the charter schools, they held lotteries: lottery winners got to go to the charter school they wanted to attend, lottery losers didn't. About 75% of the lottery losers ended up in traditional public schools. Two years after the lotteries, the researchers went back, checked the test scores of the students who won and lost the lotteries, and compared them. The average test scores of the kids who won the lottery minus the average test scores of the kids who lost the lottery = the impact of the charter school (or, more precisely, the impact of winning the lottery to go to the charter school).
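That estimator is simple enough to sketch in a few lines of code. The scores below are invented for illustration; they are not data from the study.

```python
# Sketch of the lottery-based impact estimator for a single school.
# Winners attended the charter school; losers mostly attended
# traditional public schools. Scores are made up for illustration.
winners = [52.0, 48.0, 55.0, 60.0]   # test scores of lottery winners
losers = [50.0, 45.0, 53.0, 56.0]    # test scores of lottery losers

# Impact = mean score of winners minus mean score of losers.
impact = sum(winners) / len(winners) - sum(losers) / len(losers)
print(round(impact, 2))  # → 2.75
```

Note that this only estimates the effect of *winning the lottery*, which is why the quality of the schools the losers end up in matters so much for the result.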

They found that the difference between the test scores of lottery winners and lottery losers varied a lot across schools:

(This chart appears on page 10 of the executive summary (PDF) of the study results.) Each bar in the chart represents a single lottery; a positive value means that the students who won the lottery did better than the students who lost. An intuitive reading of this chart might tell you something like, "when it comes to math test scores, some charter schools are doing a great job, and some are doing a terrible job, and the average is about zero."

There's one additional chart from that webinar I attended in July 2010 that I think offers some more insight:
(Slides from the webinar here.) This chart is sorted the same way (i.e. from worst impact to best impact, left to right), but it shows the average score amongst the "control" group, the kids who lost each lottery, rather than the difference between the lottery winners and lottery losers. This chart doesn't appear in the other published material about the study.

Using these charts, I made a spreadsheet of the "control group means" and impact for each school and used it to calculate the average test score for the kids who won the lottery in each group:
Again, these bars each represent a school, and as you move further right in the chart, the "impact" of the school (i.e. the difference between the test scores of the kids who won and lost the lottery) increases. Comparing this chart to the first one, which shows the impact, it's fairly clear that the difference between the lottery winners and lottery losers is not very highly correlated with the actual test scores of the kids who end up in the charter schools. It is much more highly correlated with the test scores of the lottery losers (i.e. the kids who end up in public schools).


A quick statistical test confirms the visual result: the correlation coefficient between the impact and control group average for each school (−0.44) is nearly three times as large in magnitude as the correlation coefficient between the impact and the treatment group average for each school (0.16). (The difference in signs arises from the fact that the equation for the treatment effect is treatment group average score minus control group average score; calculation available in the spreadsheet.) What this means is that the key results for this study on the impact of charter schools are driven more by the quality of the public schools attended by the kids who lost the lottery than by the quality of the charter schools that the lottery winners actually attended.
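The sign pattern in that test can be reproduced with a toy simulation. The numbers below are made up, not the study's data: I assume control-group means vary a lot across schools while treatment-group (charter) means cluster together, so the impact is mostly determined by the control group. Under that assumption, corr(impact, control mean) comes out strongly negative and corr(impact, treatment mean) comes out weak, matching the pattern in the study.

```python
import random

random.seed(0)

# Toy model of 36 lotteries: charter-school (treatment) means cluster
# around 55 with little spread, while public-school (control) means
# vary widely around 50. Impact = treatment mean - control mean, so
# impact is driven mostly by how bad the control schools are.
control = [random.gauss(50, 10) for _ in range(36)]       # lottery losers' means
treatment = [random.gauss(55, 3) for _ in range(36)]      # lottery winners' means
impact = [t - c for t, c in zip(treatment, control)]      # per-school impact

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

print(pearson(impact, control))    # strongly negative
print(pearson(impact, treatment))  # much weaker
```

The point of the sketch is just that a large negative corr(impact, control) alongside a small corr(impact, treatment) is exactly what you'd expect if variation in the *public* schools, not the charters, is doing the work.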


The right bottom line here isn't that charters serving poor kids in urban districts are effective and that charters serving rich suburban kids are ineffective, but rather that public schools serving poor kids in urban districts are less effective and public schools serving wealthy kids in the suburbs are more effective. These results suggest that the bigger positive impact of charter schools in poor urban areas is driven by the bad scores of the kids who were relegated to public schools, rather than the particular excellence of the charter schools in those areas.


I was only able to reach this conclusion on the basis of a chart from a presentation two years ago, not included in any of the published materials resulting from this study. Even though it was funded by the federal Department of Education, as far as I know, the data and code the authors of the study used to reach their conclusions are not available to the public. (To be fair, I didn't email them to ask; they might have given it to me.) With open data, someone who wasn't in that webinar would have been able to reach this conclusion faster and more certainly (since the actual data from the study would allow far more robust tests of my conclusion than the one I've been able to do from the chart).


A final addendum: I normally love randomized controlled trials, but this is a case where I think they're less helpful. Part of the reason this study probably didn't get any press is that it has a fundamentally boring conclusion ("charter schools are exactly as effective as public schools!"), but there's also the issue that the researchers were using schools that were already oversubscribed - hardly an average group of charter schools. Even if they had found a big average effect of the charter schools in their sample, I think I would have been inclined to write it off to the unrepresentative selection of schools, rather than concluding "charter schools are just better than traditional public schools."

Similarly, the finding that charter schools in wealthy suburbs aren't increasing student test scores is interesting, but I feel like I didn't need an expensive RCT to tell me that: charter schools in districts that already have high student achievement are unlikely to compete on test scores. Instead, I would expect them to compete by offering a better art curriculum or a more flexible school day, things that are unlikely to translate into improved math test scores. So I'm not convinced that the charter schools serving the rich suburban kids were worse than the public schools; rather, I suspect they were offering a different educational service. Normally I think test scores are a useful measure, but when parents and students in districts that can offer a strong education are explicitly willing to trade test scores for something else, I find it hard to complain. This comports with the authors' finding that parent and student satisfaction is higher for students who win the lotteries, consistently across schools.

1 comment:

  1. Excellent analysis and discussion, as usual.

    I'm sure I don't need to ask, but have you seen "Waiting for 'Superman'"? I think it gives excellent insight into the educational problem the US is facing. Other factors to consider though are more qualitative in nature such as the individual environment, the type of teacher, and other opportunities presented to these students.
