
Brett Keller

I agree overall with the post -- all very important points! But one small note: I think the end part of the quote from Taubes is misleading. He says that the causal effects were disproven by experimental studies, but my understanding is that the experimental studies were testing something slightly different (i.e., whether a diet -- with all the attendant compliance problems and measurement issues -- could reduce that elevated risk). The conclusion that the observational data are sketchy (and that nutritional epidemiologists should be more cautious in inferring causality from observational data) is true, but Taubes starts with that valid criticism and ends up in Atkins-it's-all-carbs-that-are-bad-for-you-schtick land.

That doesn't change the point of this post - that we should be cautious re: small effect sizes - which is spot on.


Brett: Thanks for the comment. I plan on reading Taubes's book at some point and can then confirm whether he's being fair. In general, though, I'm not surprised at all that experiments fail to validate tiny effects derived from observational studies.


Morgan

One thing people don't seem to have noticed about the "red meat" study was that sex was not a variable in the regression equation. Nor were the results reported separately for males and females.

So if males have a higher mortality rate (and they do, at almost all ages, certainly those within the study) and also eat, on average, more servings of red meat (which seems almost certain, given that they eat, on average, more of everything)...
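The confounding worry above can be made concrete with a toy simulation (my own, not from the paper): give men both higher mortality and higher red meat consumption, but let red meat itself have no effect on death, and a crude comparison that ignores sex still shows "high consumers" dying more often.

```python
import random

random.seed(0)
n = 100_000
rows = []
for _ in range(n):
    male = random.random() < 0.5
    # Assumed effects: males eat more red meat AND have higher mortality;
    # red meat itself has NO effect on death in this toy model.
    servings = random.gauss(2.0 if male else 1.0, 0.5)
    p_death = 0.15 if male else 0.10
    died = random.random() < p_death
    rows.append((male, servings, died))

# Crude comparison ignoring sex: mortality among high vs low consumers.
# The high group is mostly men, so it shows higher mortality.
high = [d for m, s, d in rows if s >= 1.5]
low = [d for m, s, d in rows if s < 1.5]
print("ignoring sex:  high=%.3f  low=%.3f" % (sum(high) / len(high), sum(low) / len(low)))

# Stratified by sex: the apparent "effect" vanishes within each stratum.
for sex, label in [(True, "men"), (False, "women")]:
    hi = [d for m, s, d in rows if m == sex and s >= 1.5]
    lo = [d for m, s, d in rows if m == sex and s < 1.5]
    print("%-6s: high=%.3f  low=%.3f" % (label, sum(hi) / len(hi), sum(lo) / len(lo)))
```

Stratifying (or including sex in the regression) removes the spurious gap, which is exactly why the omission would matter if it were real.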

Brett Keller

Morgan -- I think that's incorrect (just glancing at the original paper at http://archinte.ama-assn.org/cgi/content/full/archinternmed.2011.2287 -- let me know if I missed something in my rush). They have data from two sources (the Nurses' Health Study and the Health Professionals Follow-up Study), with one source being all women and the other all men. They report results as hazard ratios for each study (and find the results significant within each data set / gender), and then also a pooled analysis. That's Table 4 in the paper. If the effect appeared only in the pooled analysis and not in the separate ones, you'd be right to be skeptical, but in this design I don't think it's possible to separate differences between the studies from differences between genders, because the two are perfectly confounded.
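For concreteness, the pooling step Brett describes can be sketched as a fixed-effect (inverse-variance) meta-analysis of one hazard ratio per cohort. The per-cohort numbers below are hypothetical, for illustration only -- they are not the paper's actual estimates.

```python
import math

def pool_fixed_effect(estimates):
    """Fixed-effect pooling of hazard ratios.

    estimates: list of (hazard_ratio, lower_95, upper_95) tuples.
    Returns the pooled (hazard_ratio, lower_95, upper_95).
    """
    weights, log_hrs = [], []
    for hr, lo, hi in estimates:
        # Back out the standard error of log(HR) from the 95% CI width.
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        weights.append(1 / se**2)
        log_hrs.append(math.log(hr))
    pooled_log = sum(w * x for w, x in zip(weights, log_hrs)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return (math.exp(pooled_log),
            math.exp(pooled_log - 1.96 * pooled_se),
            math.exp(pooled_log + 1.96 * pooled_se))

# Hypothetical per-cohort results: (HR, 95% CI low, 95% CI high)
women = (1.13, 1.07, 1.19)  # all-female cohort
men = (1.12, 1.04, 1.21)    # all-male cohort
print(pool_fixed_effect([women, men]))
```

The pooled estimate lands between the two cohort estimates with a narrower interval -- which is all pooling buys you here; it cannot tell a study difference apart from a sex difference when each cohort is a single sex.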




A couple of other QWERTY points: (1) even if it were a large effect, is it the left-right bias of the typist, OR of the original designer of the keyboard, who may have had a variety of idiosyncratic reasons for the design? (2) The variance across letters within the left and right sides would likely be higher than the variance between left and right.

Kaiser Fung. Business analytics and data visualization expert. Author and Speaker.