Thursday, April 18, 2013

Economics research is better than education research, after all!

It's been interesting to follow the responses to the revelation that Reinhart and Rogoff's influential 2010 paper about debt and growth was fundamentally flawed (their paper implied, and their own public advocacy made more explicit, that once a country's debt/GDP ratio rose above 90%, economic growth fell off sharply, but it turns out there is no such falloff).

I've already written about the first thing that interested me, the similarity between the shoddiness of economics research and the shoddiness of education research.  Neither field is particularly scientific, and both are subject to what looks to me like heavy cultural and political bias or influence. One problem is that it's difficult in both fields, and virtually impossible in macroeconomics, to do controlled experiments.  A related problem is that it's often hard to tell what's cause and what's effect. And both education and economics are central to culture and politics, so both are more subject to cultural and political pressures than, say, chemistry.

Another remarkable phenomenon has been the chutzpah of Reinhart and Rogoff. In their response yesterday, they wrote that they didn't believe their mistakes affected in any significant way the "central message" of the 2010 paper or their subsequent work. That is an eye-poppingly nervy assertion.  You would never know from their response that the supposed 90% tipping point had become central to the public debate, nor that they themselves, in testimony to Congress and in prominent opinion articles, like this one, had claimed that the 90% line was "an important marker," with "little association" between debt and growth below it. Nor would you know from their response that they had continually implied, in their public statements, that it was the debt that was causing the slow growth, rather than the other way around.

The other interesting thing about the R&R debate this week has been the amount of attention, discussion, and thoughtful analysis their work has prompted.  And it is here that the economics research community has shown itself to be vastly superior to the education research community.  Thoughtful and fundamental questions had been raised about the Reinhart/Rogoff paper from the very beginning, with people like Dean Baker and Paul Krugman and many others suggesting that the 90% cut-off was bogus, that the causality was likely reversed (with slow growth causing the debt, rather than vice versa), and that R&R should release their data set so that other researchers could analyze it.  Then this week, once the data was finally made public, other scholars did immediately start to analyze it.  One found that the causality did indeed seem to run the other way: high debt was correlated more with low growth in prior years than with low growth in succeeding years.  Others noted that the negative relationship between debt and growth was more pronounced at the low levels of debt (<30%) that R&R had claimed showed "little association" than at the >90% levels that R&R had been telling everyone were so dangerous.

The level of debate, just over the past couple of days, has been impressive, and it puts education debates to shame.  For instance, when I looked into vocabulary research last summer, I found virtually no scientific debate at all, and an apparently widespread innumeracy that bore out Paul Krugman's contention that advanced mathematics is usually not what matters; what you need instead is a basic comfort with numbers and a sense of how they relate to the real world.  The same was true when I looked closely at the most prominent statistical study in education research, John Hattie's meta-analysis of education studies; there were significant problems with his analysis that seemed to have been publicly noted, in the years since publication, only by a guy in Norway and by me, a high school English teacher.

This is not to say that education research is never thoughtfully debated.  When the Chetty, Friedman, and Rockoff paper about the long-term effects of "high-VA" teachers came out, it was very carefully responded to by Bruce Baker, among others.  Overall, however, education research strikes me as basically a backwater, especially those areas of research that have to do with actual pedagogy.  Perhaps this is partly because pedagogy, despite its central importance, seems more obscure to non-teachers than less important but larger-scale issues like school funding and class sizes.  These large-scale issues seem more like economics problems, and so tend to be studied not by Ed. School professors but by economists.  Some of the best work on class sizes was done by Alan Krueger, the chairman of Obama's Council of Economic Advisers, and the study on the long-term effects of "high-VA" teachers was done by Raj Chetty, who just won the Clark Medal.

So, how can education research get better?  I'm not sure.  I guess I hope people become more aware that it's lousy, and that when scholarship is brought into public debate it is almost inevitably turned into propaganda, even by the scholars themselves. In particular, I hope it becomes clear that some of the great received ideas of the ed. world are actually urban legends (vocabulary increases comprehension; schools can overcome poverty; etc.)--but that's another post.


  1. What has always bothered me about inservice days and teacher education is that I never went to one where we just sat around and discussed the current research on a topic.

    For example, for the three day summer inservice on cooperative learning, the Harvard professor who came just kept having us do little cooperative learning exercises. I wanted some thoughtful discussion about the use of it, but no. Just dumb little drills to go through. I finally walked out.

    But, after reading your comments on ed. research, my guess now is the professor had not ever even considered times when it might or might not be useful. A really depressing thought.

    Thanks. A really good column.

    1. Yes--those professional development days can be frustrating. I think the reason is that the ed. research is fairly thin and inconclusive, so that even though the professor probably has thought about the usefulness of cooperative learning, the data is not very conclusive on when or how or how much would be best. In fact, it's almost more honest for them NOT to pretend that what they say is "data-driven", since it is rarely good data that's driving anything, but rather the preconceived ideas that are driving the data, as seems to have happened with this economics paper on debt and growth.

      Of course, there is SOME research that strikes me as useful and interesting (like the paper I discussed in "Research Shows that "research-based" pedagogy doesn't work", or the interesting paper I talked about in "What Seems Natural, and To Whom, and Why?")--but you have to always be reading it with a critical eye...

      Thanks for your comment!