


Chris P

"Visits to the classes and making observations do not substitute for factual evidence." I disagree with this, and so does almost everyone over the age of 12 months.

Here is an experiment: Look at your computer. Now close your eyes. By your statement above, your computer does not exist because you have no factual evidence of its existence.

If you went into a classroom and observed someone shooting another person, this observation is factual and would be admissible in a court of law.

Ethnographic research bases its method on observation, sometimes combining participation with observation. What is limited is the strength of the conclusions drawn from the observation; the observations themselves are still facts.


Chris: Yes, but you are reading a statistics blog, and statisticians want to see data.

Your example is a caricature of what I'm saying. If you and I enter a room and find a dead man, shot, that is a fact. However, you and I may not agree that a particular teacher has been "engaging" her students. Nor can we agree that a teacher highly rated for "engagement" is necessarily a teacher who will have high value-added performance.
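Whether two observers even agree that a teacher is "engaging" is itself measurable. A minimal sketch, with invented ratings, using Cohen's kappa to score two raters' agreement beyond chance (the `cohens_kappa` helper and the rating labels are illustrative, not from any actual study):

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(rater1)
    # Observed proportion of agreement
    p_obs = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected agreement if both raters labeled at random with their
    # own marginal frequencies
    c1, c2 = Counter(rater1), Counter(rater2)
    p_exp = sum(c1[label] * c2[label] for label in c1) / (n * n)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical labels: 'e' = engaging, 'n' = not engaging
you = ['e', 'e', 'n', 'e', 'n', 'n']
me  = ['e', 'n', 'n', 'e', 'e', 'n']
kappa = cohens_kappa(you, me)
```

A kappa near 1 would mean the "engagement" label is reliable across observers; a value near 0 would mean the raters agree no more than chance, which is the concern raised above.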

The hypothesis in question can easily be tested through a randomized experiment. Even if such an experiment is deemed too expensive, one could easily find 10 people to rate teachers on "engagement," then match the average ratings to the value-added data to test the hypothesis that higher engagement is the "surest way" to high value-added scores.
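The ad-hoc version of this test reduces to a correlation check. A minimal sketch with made-up numbers (the teacher scores below are invented for illustration; no real data): average the raters' engagement scores per teacher, then correlate those averages with the value-added estimates.

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation of two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys) * len(xs))

# Hypothetical per-teacher averages of 10 raters' "engagement" scores,
# and each teacher's value-added estimate (both invented)
engagement  = [3.2, 4.1, 2.5, 4.8, 3.9, 2.1]
value_added = [0.1, 0.4, -0.2, 0.6, 0.2, -0.3]
r = pearson(engagement, value_added)
```

A strong positive `r` would be consistent with the "surest way" claim; a weak one would undercut it. Either way, this only establishes correlation, not causation, which is the point of the paragraph above.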

That said, I do not deny that there are situations in which data cannot be gathered and statistically tested as described above, and other methods have to be used. I would caution against making super-confident statements about causality when such limited methods are used.


I agree with you in general: unfounded leaps from correlation to causation are a sign of bad reporting, and can be very misleading. But this actually strikes me as a very reasonable statement. Engagement was apparently observed by multiple reporters as the primary commonality among the highly effective teachers. Risk of confirmation bias? Yes. But if that's what the reporters discovered in their investigation, that's valid information to share. Plus, "sign" is a correlative word: they found that the best way they could subjectively identify a quality teacher was by whether they engaged the classroom, and that's correlation. Formal experiments are preferable to ad-hoc ones, but ad-hoc is still a big jump up from "story time."


Just realized this post is a month and a half old: Andrew just linked to it. Posting on old content seems... impolite somehow, so apologies. :)

Mark Palko

I know I've been beating this drum quite a bit, but this is not just a case of rolling out some data and then jumping into a narrative; this is a case of rolling out a simplistic account of highly confounded data and then jumping into a narrative.

How confounded?

"A study designed to test this question used VAM methods to assign effects to teachers after controlling for other factors, but applied the model backwards to see if credible results were obtained. Surprisingly, it found that students’ fifth grade teachers were good predictors of their fourth grade test scores. Inasmuch as a student’s later fifth grade teacher cannot possibly have influenced that student’s fourth grade performance, this curious result can only mean that VAM results are based on factors other than teachers’ actual effectiveness."
(from EPI)
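The backwards result quoted above can arise whenever classroom assignment carries information about student ability. A hypothetical simulation (invented numbers, not the EPI study's data): if students are tracked by ability into fifth-grade classrooms, the fifth-grade teacher assignment "predicts" fourth-grade scores even though no teacher effect exists at all.

```python
import random
import statistics

def spread_of_classroom_means(track, n_students=300, n_teachers=10, seed=7):
    """Spread of classroom-mean *prior* (4th-grade) scores across the
    5th-grade teachers each student is later assigned to. No teacher
    affects any score in this simulation."""
    rng = random.Random(seed)
    ability = sorted(rng.gauss(0, 1) for _ in range(n_students))
    grade4 = [a + rng.gauss(0, 0.5) for a in ability]  # 4th-grade score
    if track:
        # Tracking: 5th-grade classrooms grouped by ability
        teacher = [i * n_teachers // n_students for i in range(n_students)]
    else:
        # Random assignment to 5th-grade classrooms
        teacher = [rng.randrange(n_teachers) for _ in range(n_students)]
    means = [statistics.mean(g for g, t in zip(grade4, teacher) if t == tch)
             for tch in range(n_teachers)]
    return statistics.pstdev(means)
```

Under tracking, the fifth-grade classrooms differ sharply in their students' fourth-grade scores, so the later teacher looks like a "predictor" of the earlier result; under random assignment, the classroom means are nearly identical. That is confounding, not effectiveness.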

Joseph and I have more on this at Observational Epidemiology but be warned, it's not pretty.


Paul: Thanks for the comment. I don't have a rule against commenting on old posts. I was loath to get into details about the reporter's "observational study"/ethnographic research, as it distracts from my larger point. But the study has many problems: (1) it is not blinded: the reporters went into the classrooms knowing which teachers scored high and which scored low, with the explicit goal of explaining the difference; it would be much more credible if the researcher did not know a priori how the teachers scored; (2) all the study can show is which teachers engage students and which don't; to claim that it could prove a causal link between engagement and test scores is surely too much; (3) the usual statistical concerns of design and sample size.

For me, the theory that engagement leads to higher test scores doesn't even pass the common-sense test; it smells of an educator's ideal rather than cold, hard reality. Compare a teacher who teaches to the test, making students take "mock tests" every day of the year, with one who teaches understanding of the material and does not feel bound by the test curriculum: whose students will do better on the standardized tests? We might not like the answer, but it is still the answer.

Mark: I will put up a post eventually about the problems of VAM. Thanks for pointing that out. I encourage readers to look at Mark's posts on this topic. My silence does not constitute endorsement of their methodology.


