A dangerous equation
Nov 25, 2007
Graduation rates at 47 new small public high schools that have opened since 2002 are substantially higher than the citywide average, an indication that the Bloomberg administration’s decision to break up many large failing high schools has achieved some early success.
Most of the schools have made considerable advances over the low-performing large high schools they replaced. Eight schools out of the 47 small schools graduated more than 90 percent of their students.
This graphic, included in the NYT article, lent support to the "small schools movement". In particular, note the last sentence of the quotation above: it uses the oft-seen device of supporting a hypothesis with a subgroup, in this case the subgroup of eight top-performing schools.
Such analysis is "dangerous", according to Howard Wainer, who discusses this and other examples of misapplication in a recent American Scientist article entitled "The Most Dangerous Equation". He alleges that billions have been wasted in the pursuit of small schools.
The issue concerns sample size. Dr. Wainer and associates analyzed math scores from Pennsylvania public schools. Average scores for smaller schools are based on smaller numbers of students, and are therefore less stable (more variable). More variability means more extremes. Thus, by chance alone, we expect to find more small schools among the top performers. Similarly, by chance alone, we also expect to find more small schools among the worst performers.
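This is easy to reproduce in a quick simulation. The sketch below is mine, not Dr. Wainer's analysis: the school counts, enrollments, and score distribution are invented for illustration. Every student's score is drawn from the same distribution, so school size has no real effect, yet small schools crowd both tails.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented setup: 1,000 schools with enrollments between 50 and 2,000.
# Every student's score comes from the SAME distribution (mean 500,
# sd 100), so school size has no true effect on performance.
sizes = rng.integers(50, 2001, size=1000)
school_means = np.array([rng.normal(500, 100, size=n).mean() for n in sizes])

# Median enrollment among the top and bottom 5% of schools vs. overall.
top = school_means >= np.quantile(school_means, 0.95)
bottom = school_means <= np.quantile(school_means, 0.05)
print("median size, all schools:      ", int(np.median(sizes)))
print("median size, top 5% of schools:", int(np.median(sizes[top])))
print("median size, bottom 5%:        ", int(np.median(sizes[bottom])))
# Both tails are dominated by smaller schools, purely by chance.
```

On a typical run, both the best and the worst 5% of schools show median enrollments well below the overall median, even though no school has any true advantage.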
The scatter plot lays out their argument. Focusing only on the top performers (blue dots), one might conclude that smaller schools do better. However, when the bottom performers (green) are also considered, the story no longer holds. Indeed, the regression line is essentially flat, indicating that scores are not correlated with school size.
This is all nicely explained via the standard error formula (De Moivre's equation) in Dr. Wainer's article. Here is a NYT article from the mid-1990s describing the same phenomenon.
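For reference, De Moivre's equation states that the standard error of an average shrinks with the square root of the sample size:

$$\sigma_{\bar{x}} = \frac{\sigma}{\sqrt{n}}$$

where $\sigma$ is the standard deviation of individual students' scores and $n$ is the number of students averaged. A school one quarter the size of another thus produces averages that are twice as variable, which is why small schools crowd both tails of the rankings.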
File this as another comparability problem. Because estimates based on smaller samples are less reliable, one must take extra care when comparing small samples to large samples.
Dr. Wainer is publishing a new book next year, called "The Second Watch: navigating the uncertain world". I'm eagerly looking forward to it. His previous books, such as Graphic Discovery and Visual Revelations, are both part of the Junk Charts collection.
Sources: "The Most Dangerous Equation", American Scientist, November 2007; "Small Schools Are Ahead in Graduation", New York Times, June 30 2007.
P.S. Referring back to the NYT chart above, one might wonder at the impossible feat of raising graduation rates across the board simply by breaking up large schools into smaller ones. This topic was taken up here, here and here. When evaluating the "small schools" policy, it is a mistake to discuss only the performance of small schools; any responsible analysis must look at improvement over all schools. Otherwise, it's a simple matter of letting small schools skim off the cream from larger schools.
Interesting, thanks for the reminder.
However, I think in this case there is an important point that you have overlooked. What they did here was to split 12 large schools into 47 smaller schools. This means that the underlying 'base' of students (or the pool from which they are drawn) has remained the same for each comparison group. BTW, one has to assume that the average graduation percentage was calculated on the basis of individuals (not by taking averages of several schools), and thus the population size behind each figure was probably the same.
One of the things I am missing in the analysis is the expenditure per student before and after the split, as this might be substantially different. Provided the OPEX per student is similar, then this 'experiment' would actually prove that small schools are much more effective than larger ones at teaching students.
Posted by: Jens | Nov 26, 2007 at 07:12 AM
It's a nice demonstration of the principle, but Dr Wainer spoils his point a bit by going on to say:
"the regression line shows a significant positive slope; overall, students at bigger schools do better. This too is not unexpected, since very small high schools cannot provide as broad a curriculum or as many highly specialized teachers as can large schools."
He has not presented the evidence that that's the reason for the positive slope, only told a just-so story about why it might be the reason. The small-schools movement have their own just-so stories that "explain" why their ideas are better, like smaller schools allowing teachers to know their students better. It doesn't make it true. Dr Wainer should have just let the graph speak for itself, instead of trying to tell a story about the graph that it wasn't equipped to confirm.
Alternatively, if he wanted to make the graph confirm the story, he ought to have graphed "breadth of curriculum" or "number of highly specialized teachers" against school size and PSSA score. The extra data would have been welcome context, but words alone are not, which I think I will take as a valuable check on my own tendency to lard my graphs about with words "adding context".
Posted by: derek | Nov 26, 2007 at 09:08 AM
Andrew Gelman has a nice example of this. He shows a map of kidney cancer death rates by U.S. county. Shade the counties with the highest death rates, and sparsely populated Midwestern counties stand out. He asks his students to speculate as to why this might be. Lack of access to health care? Polluted groundwater?
Then he shows another map, on which the counties with the lowest death rates are shaded. Once again, sparsely populated Midwestern counties stand out. See Section 3 of this paper:
http://www.stat.columbia.edu/~gelman/research/published/bayesdemos.pdf
This example is also included in his book "Bayesian Data Analysis".
Posted by: John S. | Nov 26, 2007 at 11:20 PM