Popularity contests and charts
May 25, 2010
Nick Cox commented on Aleks's post, on Andrew's blog, about whether chartjunk is better than Tufte-style charts. Nick made the point that charts should be subjected to popularity contests (my paraphrase). He seemed to say that all of statistical science, perhaps all of science, should be fair game for studies of popular opinion.
Interestingly, Aleks's post was about a popularity contest of a different kind: he alerted us to a research paper using Amazon's Mechanical Turk to assess different types of charts. I haven't had time to review this paper yet, but it sounds more promising than the 20-sample experiment. When I read it, I will try to hold in check my doubts about "crowdsourcing", "prediction markets", and the like.
Back to Nick's comment: a lot of science has already been turned into popularity contests in recent years. Evolution, climate change, causes of autism, bird flu vaccines, and so on have all become politicized, with scientists kicked to the curb. Based on these results, I wouldn't recommend it.
Nick also pointed out that the "chartjunk is better" paper won the best paper award at the conference where the researchers presented it. I think that says more about the lack of statistical expertise on the judging panel than about anything else. I stated in my original review that this sort of research is useful and necessary, but it ought to be done with higher standards.
My prior posts on the chartjunk paper:
8 Red Flags ...
The notion that science should be subject to popularity contests is absurd. Science is essentially the art of making falsifiable predictions based on available evidence. Public opinion doesn't change the evidence, nor whether or not the prediction is correct.
Posted by: Tom West | May 25, 2010 at 09:42 AM
Off-topic, but BBC News seems to like your bump charts: http://news.bbc.co.uk/2/hi/health/8696690.stm
Posted by: Tom West | May 25, 2010 at 11:10 AM
If you are worried about the level of statistical expertise in Infovis and SIG-CHI papers, sign up to be a reviewer. There are a lot of papers and few reviewers with statistics expertise.
Posted by: Hadley Wickham | May 25, 2010 at 12:15 PM
Having briefly skimmed the paper, I think the results have nothing to do with chartjunk or Tufte-style charts. The authors were evaluating the viability of using crowdsourcing (Mechanical Turk) as a replacement for lab studies. To validate the use of crowdsourcing, they reproduced some previous lab studies and compared the results. There is really nothing new to be learned from this work about chart style, but there are some interesting insights on the use of Mechanical Turk for this sort of research.
I think that the authors' analyses of the experiments are rather off-topic. The authors spend much more page space discussing the results of individual experiments than comparing the correlation between their Mechanical Turk experiments and the past published work.
Posted by: Tom Hopper | May 26, 2010 at 05:55 AM
I've just seen this quotation of my comment on Andrew Gelman's blog. You do admit that you are paraphrasing, but that paraphrase turned my stance around almost completely.
Don't statistically minded people ever resort to irony? What kinds of graphics people prefer is worth attention, but I don't advocate a popularity contest to decide what's best.
I agree broadly with Tom West's first comment. In fact, I've published marginally within climate science, so the irony is on me too.
I also underline Hadley Wickham's signal that there seems to be a big gap between best statistical practice and much information visualization work. That's why I found it dismaying that the paper in question was apparently lauded in its own community.
Posted by: Nick Cox | Jun 01, 2010 at 04:43 AM
Nick: I'm sorry to have missed the irony, and I'm glad we agree!
Posted by: Kaiser | Jun 03, 2010 at 12:42 AM