OK, I am kind of annoyed. I've been going through John Hattie's impressive book, Visible Learning, trying to make sense of it, and I'm confused on a basic point. Here I am reading this long book that is all about comparing effect sizes, and I realize that I don't know exactly what Hattie means by "effect size." If a program has an "effect size" of 0, does that mean that the kids in the program didn't learn anything, or does it mean that they didn't learn anything more than the kids in the control group? At different times in the book, it seems to mean different things. It is hugely annoying that Hattie doesn't make this clearer. He claims to have spent 15 years writing the book, and he can't even be clear about the most basic possible element of his work? Grrr.
Of course, part of what makes me so annoyed is that I am aware that the problem is probably partly mine as well as Hattie's. If I were smarter, maybe I'd be able to figure it out. But I can't help suspecting that maybe "effect size" means different things for different studies, and that this may have some effect on Hattie's conclusions.
Update (12/16): OK, it's not my problem, it's Hattie's. More later, but this is just ridiculous. The state of educational research is just pathetic. This guy is some kind of big shot, and his book is supposedly a masterwork fifteen years in the making, and he's making various basic errors. I didn't catch them at first because I'm an English teacher, not a statistician, but it didn't take me THAT long, either. How is it that people continue to take him seriously? I'll explain in a bit more detail in another post when I have a free half hour or so.
Think about effect sizes as a common yardstick for comparing achievement across different tests. An analogy might help: before rulers, people used body parts to measure things. But body parts weren't standardized, so rulers made it much easier to compare different things. The same is true of effect sizes: they provide a standardized way to compare percentages and averages from different exams, since the same score can mean different things on different exams.
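To make the yardstick idea concrete, here is a minimal sketch of the most common standardized effect size, Cohen's d (my own illustration, with invented scores, not an example from the book): the gap between two group means divided by their pooled standard deviation, so that results from differently scaled exams can be compared.

```python
import math

# Hypothetical exam scores (made-up numbers, for illustration only)
treatment = [78, 85, 90, 74, 88, 81]
control = [70, 75, 80, 72, 79, 68]

def cohens_d(group1, group2):
    """Standardized mean difference: (mean1 - mean2) / pooled SD."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    # Sample variances (dividing by n - 1)
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

print(round(cohens_d(treatment, control), 2))  # a positive d: treatment scored higher
```

Because the raw score difference is divided by the spread of scores, the same d can be computed for a 100-point exam or a 10-point quiz and the two numbers are directly comparable.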
As for your questions about an effect size of 0, you are right. An effect size of 0 can mean either that the experimental group didn't learn more than the control group, or that the group didn't learn anything at all; depending on the study, it can mean either thing.
A more accessible version of the book, Visible Learning for Teachers, is also available. Another book, International Guide to Student Achievement, will be released on the 21st of this month (also by Hattie).
http://tinyurl.com/d9cyenr
Good luck!
I think I basically understand effect sizes, and I'd say they're pretty different from yardsticks. As for Hattie, the last thing I want is a "more accessible" version of the book. The whole point of slogging through the stuff in detail is partly so that I can try to see whether I think Hattie is trustworthy. So far, I am less impressed as I look at it more closely. In the chapter of his book in which he discusses effect size, he seems to make a pretty basic statistical error (I lent my copy out to be vetted by a professional statistician, but as I recall, Hattie says that if one treatment is one standard deviation more effective than another, then the probability is .84 that a subject chosen at random from the first group would do better than a subject chosen at random from the other group, and this is wrong). Also, the cute little visual he uses to show the effect size for every category of program or treatment is really problematic for me, in that it implies that an effect size of 0 means that the treatment or program is actually WORSE than just letting kids develop naturally. This is just not true. Probably Hattie would say that he doesn't mean to imply it, and that only unsubtle bozos like me would think so, but the graph pretty plainly DOES imply it, and I think that's misleading.

I also have some issues with the way he handles the whole language vs. phonics debate, and I have the issues I mentioned in my first post about his book (short-term vs. long-term issues, etc.). I actually think my own teaching could benefit from being more "visible" in some of the ways Hattie outlines, but I didn't need him to tell me that. I don't go to Mr. Meta-analysis mainly for teaching advice; I go to him mainly for data.
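For what it's worth, the .84 figure can be checked with a few lines of arithmetic. If both groups are normally distributed with equal SDs and means one SD apart, the difference between one random draw from each group is normal with mean 1 and SD √2, so the probability that the treatment draw beats the control draw is Φ(1/√2) ≈ .76, not .84. Φ(1) = .84 is instead the probability that a random treatment subject beats the control group's mean. A quick sketch (my own check, using only the standard normal CDF):

```python
import math

def normal_cdf(x):
    """CDF of the standard normal distribution, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

d = 1.0  # treatment mean is one SD above the control mean

# P(random treatment subject > random control subject):
# X - Y is normal with mean d and SD sqrt(2), so P(X > Y) = Phi(d / sqrt(2))
p_superiority = normal_cdf(d / math.sqrt(2.0))

# P(random treatment subject > the control group's MEAN) = Phi(d),
# which is where the .84 figure actually comes from
p_above_control_mean = normal_cdf(d)

print(round(p_superiority, 2))        # 0.76
print(round(p_above_control_mean, 2)) # 0.84
```

So the two probabilities answer different questions, and quoting .84 for the head-to-head comparison conflates them.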
Anyway, thanks for the comment--but why anonymous?
Apologies if I was not clear. An effect size of 0 means that there was (a) no difference between an intervention group and a control (non-intervention) group, or (b) no change in outcomes over time (post minus pre). I do note that this may not be the 'best' reference point, as children develop regardless of schools. This was tougher to estimate, but (as described in Ch. 2) I estimated it to be about .15. I did provide many references to more detail about effect sizes, and am happy to send more. John Hattie
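To see how definitions (a) and (b) can come apart, consider a hypothetical case (invented numbers, not from the book): if both groups gain equally over the year, the between-groups effect size is 0 even though the pre-post effect size is large.

```python
import statistics

# Invented scores: the treatment group gains 10 points over the year,
# but so does the control group.
pre_treatment  = [50, 55, 60, 45, 65]
post_treatment = [60, 65, 70, 55, 75]
post_control   = [60, 65, 70, 55, 75]

sd = statistics.stdev(pre_treatment)  # spread used to standardize the differences

# Definition (a): treatment vs. control at post-test -> effect size 0
d_vs_control = (statistics.mean(post_treatment) - statistics.mean(post_control)) / sd

# Definition (b): post vs. pre within the treatment group -> clearly positive
d_pre_post = (statistics.mean(post_treatment) - statistics.mean(pre_treatment)) / sd

print(d_vs_control, round(d_pre_post, 2))
```

A single data set can therefore yield an effect size of 0 under one definition and a large one under the other, which is why it matters which definition a given study used.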
Thanks for replying. I have grave qualms about the way you handle CLE and this ambiguity about effect size in your book. I outline these qualms in a new post:
http://literacyinleafstrewn.blogspot.com/2012/12/can-we-trust-educational-research_20.html
Sorry to be so critical. I really did approach your book with an open mind, but as I said, I go to Mr. Meta-analysis for data, and if those aren't presented clearly and competently, it's a bit of a disappointment. Good luck with your rappelling.
Nice blog you have