Chris P. sent me to this set of charts / infographics with the subject line "all sorts of colors and graphs." I let the email languish in my inbox, and I now regret it, for three reasons: one, the topic of how scientists can communicate better with, and thus exert stronger influence on, the public is very close to my heart (as you can tell from my blogs and book), and this article presents results from a poll on this topic of on-line readers of Scientific American and Nature magazines; two, some of the charts are frankly embarrassing to have appeared in such venerable scientific publications (sigh); three, these charts provide a convenient platform to review some of the main themes covered on Junk Charts over the years.
Since the post is so long, I have split it into two parts. In part 1, I explore one chart in detail. In part 2, I use several other charts to illustrate some concepts that have been frequently deployed on Junk Charts.
***
Exhibit A is this chart:
First, take a look at the top left corner. At first glance, I took the inset to mean: among scientists, how much do they trust scientists (i.e., their peers) on various topics? That seemed curious, as it wouldn't be a question I'd have thought to ask, certainly not as the second question in the poll.
On further inspection, that is a misreading of this chart. The "scientists" represented above are objects, not subjects, in the first question. As the caption tells us, the respondents rated scientists at 3.98 overall, which is an average rating across many topics. The bar chart below tells us how the respondents rated scientists on individual topics, thus providing us information on the spread of ratings.
Unfortunately, this chart raises more questions than it answers. For one, you're left working out how the average could be 3.98 (the white line at about 4.0) when all but three of the topic ratings fell below 3.98. Did they use a weighted average but not let on?
Oops, I misread the chart again. I think what I stumbled on here is the design of the poll itself. The overall rating is probably a separate question, not related to the individual topic ratings at all. In theory, each respondent could assign a subjective importance as well as a rating to each topic; the average of the ratings, weighted by their respective importance, would then form his or her overall rating of scientists. That would impose consistency on the two levels of ratings. In practice, it assumes that the listed topics span the space of topics each person considers when rating the scientists overall.
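To make this consistency idea concrete, here is a minimal sketch of an importance-weighted overall rating. The topics, ratings, and weights below are hypothetical, chosen only for illustration; the actual poll questions and weights are unknown.

```python
# Hypothetical respondent data: topic ratings on a 1-5 scale and
# subjective importance weights that sum to 1. All numbers invented.
topic_ratings = {"climate change": 3.6, "nuclear power": 3.7, "evolution": 4.2}
importance = {"climate change": 0.5, "nuclear power": 0.2, "evolution": 0.3}

# If the overall rating were defined as the importance-weighted average
# of the topic ratings, the two levels would agree by construction.
overall = sum(topic_ratings[t] * importance[t] for t in topic_ratings)
print(round(overall, 2))  # 3.8
```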
***
The bar chart has a major problem... it does not start at zero. Since some bars are only half as long as the longest, you might think the level of trust associated with nuclear power or climate change is around 2 (i.e., negative territory). But it's not; it's in the 3.6 range. This is a lack of self-sufficiency: the reader cannot understand the chart without fishing out the underlying data.
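A quick back-of-the-envelope calculation shows how much the truncated axis distorts things. Assuming the longest bar represents roughly 4.0 and the nuclear/climate bars roughly 3.6 (as suggested above), one can solve for where the axis must start for the latter to appear half as long:

```python
# Assumed values from the discussion above: the longest bar is ~4.0 and
# the shortest ~3.6, yet the shortest is drawn at half the length.
longest, shortest = 4.0, 3.6

# If bar length is (value - start), then
#   (shortest - start) = 0.5 * (longest - start)
# which solves to start = 2 * shortest - longest.
start = 2 * shortest - longest
print(start)  # 3.2 -- a 0.4-point gap is drawn as a 2x visual difference
```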
Now, ask this question: in a poll in which respondents rate things on a scale of 1 to 5, do you care about the average rating to two decimal places? The designer of the graphic seems to think not, as the ratings were rounded to the nearest 0.5 and presented using the iconic five-star motif. I think this is a great decision!
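For concreteness, here is one way the half-star conversion might work; this is a sketch of the standard round-to-nearest-half rule, not necessarily the exact rule the designer used.

```python
def to_half_stars(rating: float) -> float:
    """Round a 1-5 rating to the nearest half star."""
    # Note: Python's round() uses banker's rounding at exact .25/.75 boundaries.
    return round(rating * 2) / 2

print(to_half_stars(3.98))  # 4.0 -> four full stars
print(to_half_stars(2.56))  # 2.5 -> two and a half stars
```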
But then, the designer fell for loss aversion: having converted the decimals to half-stars, he should have dropped the decimals; instead, he tucked them at the bottom of each picture. This is no mere trifle. Now the reader is forced to process two different scales showing the same information. Instead of achieving simplification by adopting the star system, the reader ends up examining the cracks: is the trust given to citizens groups the same as that given to journalists (both 2.5 stars), or do "people" trust citizens groups more (higher decimal rating)?
***
The biggest issues with this chart concern identifying the key questions and collecting the data to address those questions. This is the top corner of the Trifecta Checkup.
1) The writer keeps telling us "people" trust this and that, but the poll only covered on-line readers of Scientific American and Nature magazines. One simply cannot generalize from that segment of the population to "people" at large.
2) Insufficient attention was paid to the wording of the questions. For example, in Exhibit A, while the overall trust question was phrased in terms of trusting the "accuracy" of the information provided by scientists versus other groups, the trust questions on individual topics mentioned only a generic "trust". Unless one considers "trust" a synonym of "accuracy", the differing choice of words makes these two sets of responses hard to compare. And comparing them is precisely what they chose to do.
***
In part 2, I examine several other charts, making stops at several concepts that appear frequently on Junk Charts.