
Popularity contests and charts

Nick Cox commented on Aleks's post, on Andrew's blog, about whether chartjunk is better than Tufte-style charts. Nick made the point that charts should be subjected to popularity contests (my paraphrase). He seemed to say that all of statistical science, perhaps all of science, should be fair game for studies of popular opinion.

Interestingly, Aleks's post was about a popularity contest of a different kind: he alerted us to a research paper using Amazon's Mechanical Turk to assess different types of charts. I haven't had time to review this paper yet, but it sounds more promising than the 20-sample experiment. When I read it, I will try to set aside my doubts about "crowdsourcing", "prediction markets", and the like.

Back to Nick's comment: a lot of science has already been turned into popularity contests in recent years. Evolution, climate change, causes of autism, bird flu vaccines, and so on have all become politicized, with scientists kicked to the curb. Based on those results, I wouldn't recommend it.

Nick also pointed out that the "chartjunk is better" paper won the best paper award at the conference where the researchers presented it. I think that says more about the lack of statistical expertise on the judging panel than anything else. I stated in my original review that this sort of research is useful and necessary, but it ought to be done to higher standards.

My prior posts on the chartjunk paper:

8 Red Flags ...

More Questions than Participants

A Significant Mystery

Comments


Tom West

The notion that science should be subject to popularity contests is absurd. Science is essentially the art of making falsifiable predictions based on available evidence. Public opinion doesn't change the evidence, nor whether or not the prediction is correct.

Tom West

Off-topic, but BBC News seems to like your bump charts: http://news.bbc.co.uk/2/hi/health/8696690.stm

Hadley Wickham

If you are worried about the level of statistical expertise in Infovis and SIG-CHI papers, sign up to be a reviewer. There are a lot of papers and few reviewers with statistics expertise.

Tom Hopper

Having briefly skimmed the paper, I think the results have nothing to do with chartjunk or Tufte-style charts. The authors were evaluating the viability of using crowdsourcing (Mechanical Turk) as a replacement for lab studies. To validate the use of crowdsourcing, they reproduced some previous lab studies and compared the results. There is really nothing new to be learned from this work about chart style, but there are some interesting insights on the use of Mechanical Turk for this sort of research.

I think that the authors' analyses of the individual experiments are rather off-topic. The authors spend much more page space discussing the results of those experiments than examining the correlation between their Mechanical Turk results and the past published work.

Nick Cox

I've just seen this quotation of my comment on Andrew Gelman's blog. You do admit that you were paraphrasing, but that paraphrase turned my stance around almost completely.

Don't statistically minded people ever resort to irony? What kinds of graphics people prefer is worth attention, but I don't advocate a popularity contest to decide what's best.

I agree broadly with Tom West's first comment. In fact, I've published marginally within climate science, so the irony is on me too.

I also underline Hadley Wickham's point that there seems to be a big gap between best statistical practice and much information visualization work. That's why I found it dismaying that the paper in question was apparently lauded in its own community.

Kaiser

Nick: I'm sorry to have missed the irony, and I'm glad we agree!
