My friend Alberto Cairo said it best: if you see bullshit, say "bullshit!"
He was very incensed by this egregious "infographic": (link to his post)
Emily Schuch provided a re-visualization:
The new version provides a much richer story of how Planned Parenthood has shifted priorities over the last few years.
It also exposes how the AUL (Americans United for Life) organization distorted the story.
The designer extracted only two of the lines, so readers do not see that the category of services that really replaced the loss of cancer screening is STI/STD testing and treatment. This is a bit ironic given the other story that has circulated this week - the big jump in STDs among Americans (link).
Then, the designer placed the two lines on dual axes, which is a dead giveaway that something awful lies beneath.
Further, this designer dumped the data from intervening years, and drew a straight line from the first to the last year. The straight arrow misleads by pretending that there has been a linear trend, and that it would go on forever.
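The endpoint-only trick is easy to quantify: a straight arrow between the first and last years asserts a constant per-year change, and the dropped intervening years are exactly where the assertion fails. A minimal sketch, using made-up numbers (the AUL chart's underlying data are not reproduced here):

```python
# Sketch of how an endpoint-only "trend" hides the real path.
# All values below are hypothetical, for illustration only.
years = [2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013]
values = [2000, 1850, 2100, 1600, 2200, 1500, 1400, 900]

first, last = values[0], values[-1]
n = len(years) - 1

# What the straight arrow implies: a constant per-year change.
implied = [first + (last - first) * i / n for i in range(n + 1)]

# How far the actual (hypothetical) series strays from the arrow.
max_gap = max(abs(a - b) for a, b in zip(values, implied))
print(max_gap)  # the arrow misses some years by a wide margin
```

The arrow is exact at the two endpoints and can be arbitrarily wrong everywhere in between - which is precisely the region the retained axis labels draw your eye to.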
But the masterstroke is in the treatment of the axes. Let's look at the axes, one at a time:
The horizontal axis: Let me recap. The designer dumped all but the starting and ending years, and drew a straight line between the endpoints. While the data are no longer there, the axis labels are retained. So, our attention is drawn to an area of the chart that is void of data.
The vertical axes: Let me recap. The designer has two series of data with the same units (number of people served) and decided to plot each series on a different scale with dual axes. But readers are not supposed to notice the scales, so they do not show up on the chart.
To summarize, where there are no data, we have a set of functionless labels; where labels are needed to differentiate the scales, we have no axes.
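To see why unlabeled dual axes are so dangerous, note that a point's vertical position on the chart is just (value - axis_min) / (axis_max - axis_min), and both limits are the designer's free choice per axis. A minimal sketch, with hypothetical numbers standing in for the two series (both measured in people served):

```python
# With dual axes and hidden scales, the designer can place either
# series wherever they like on the canvas. Numbers are hypothetical.
def screen_pos(value, axis_min, axis_max):
    """Fraction of the chart's height at which a value is drawn."""
    return (value - axis_min) / (axis_max - axis_min)

abortions, screenings = 300_000, 2_000_000  # same units: people served

# One choice of limits draws the smaller series visually on top...
print(screen_pos(abortions, 0, 400_000))     # 0.75
print(screen_pos(screenings, 0, 8_000_000))  # 0.25
```

With the scales hidden, readers have no way to detect that the visual comparison is an artifact of the axis limits rather than of the data.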
This is a tried-and-true tactic employed by propagandists. The egregious chart brings back some bad memories.
It's called the MLB pipeline. The text at the top helpfully tells us what the chart is about: how the playoff teams in baseball are built. That's the good part.
It then took me half a day to understand what is going on below. There are four ways for a player to land on a team: drafted or signed internationally (these two count as "homegrown"), traded, or signed as a free agent.
Each row is a type of player. You can look up which teams have exactly X players of a specific type. It gets harder if you want to know how many players team Y has of a given type. It is even harder if you don't know the logos of every team (e.g. Toronto Blue Jays).
Some fishy business is going on with the threesomes and foursomes. Here is the red threesome:
I didn't know baseball employed half a player. The green section has a different way to play threesomes:
The blue section takes inspiration from both and shows us a foursome:
I was stuck literally in the middle for quite a while:
Eventually, I realized that this is a summary of the first two sections on the page. I still don't understand why there is no gap between 11 and 14, or why the 14 and 15 arrows are twice as wide as the 9, 10 and 11 arrows even though every arrow contains exactly one team.
The biggest problem in the above chart is the hidden base: each team's roster has a total of 25 players.
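Because every roster totals 25, the raw counts are shares in disguise: each count only means something relative to that fixed base. A quick sketch with hypothetical counts for one team:

```python
# Each MLB roster has 25 players, so raw counts convert directly
# to shares of the roster. Counts below are hypothetical.
ROSTER_SIZE = 25
team_counts = {"trade": 11, "draft": 8, "free agent": 4, "international": 2}

assert sum(team_counts.values()) == ROSTER_SIZE  # the hidden base

shares = {k: v / ROSTER_SIZE for k, v in team_counts.items()}
print(shares["trade"])  # 0.44
```

Once the base is made explicit, "11 traded players" reads as "44% of the roster arrived by trade" - a far more interpretable statement.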
Here is a different view of the data:
With this chart, I want to emphasize two points: first, addressing the most interesting question of which team(s) emphasize which particular player acquisition tactic; second, providing the proper reference level to interpret the data.
Regarding the vertical reference lines: take the top left chart about players arriving through trade. If every team equally emphasized this tactic, each team would have the same number of traded players on its 25-person roster - approximately 11. This is clearly not the case. Several teams, especially the Cubs and Blue Jays, utilized trades more often than teams like the Mets and Royals.
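That reference level is just the league-wide average for the tactic: total traded players divided by the number of teams. A sketch with hypothetical counts for four teams (not the actual rosters):

```python
# The reference line in each panel is the league-wide average
# for that acquisition tactic. Counts are hypothetical stand-ins.
traded_by_team = {"Cubs": 14, "Blue Jays": 13, "Mets": 7, "Royals": 8}

reference = sum(traded_by_team.values()) / len(traded_by_team)
above = [team for team, n in traded_by_team.items() if n > reference]
print(reference, above)
```

Teams above the line lean on trades more than the league does on average; teams below it lean on the other tactics.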
Reader Aaron K. submitted an infographic advertising the upcoming New England Auto Show to be held in Boston (link).
As Aaron pointed out, there are plenty of elementary errors contained in one page. I don't think the designer did these things consciously. I believe in having someone else glance at your work before you publish it. Or take a walk around the block and look at your own work again after clearing your head.
In the following diagram, the graphical elements (stick figures) encode the data labels, rather than the data!
Helping readers figure out which one is male and which one is female seems, hmm, unnecessary.
Placing the above two charts side by side has the effect of suggesting that only male attendees were asked about their age.
Look again: is the proportion of attendees over 18 equal to 4%, 96%, or 100%?
This map irritates me.
Is it because they could have enlarged the frame just a little so as not to have to expel little Rhode Island from New England? Is it because not having the right frame size caused two numbers to sit outside New England when only one should? Is it because having two numbers outside the boundary tempted the designer to single out Rhode Island for the purpose of labeling? Is it because no other state is labeled besides Rhode Island?
Or is it because the land area is vastly disproportional to the data being displayed? Is it because the map construct is a geography lesson and nothing more (something I wrote about years ago)? Is it because the geography lesson is incomplete since only one state is labeled?
According to the text at the bottom, this part of the country is proud of "it's (sic) academia" and has hundreds of thousands of college students, who somehow "contribute $4.8 billion+ to the city's economy," which tells me they are super-productive in the classrooms.
I have yet to understand why the vertical axis of the top chart keeps changing scales over time. The white dot labelled "Peak 1982" (70 million) is barely above the other white dot for "2007" (38 million). This chart hides a clear trend: the population of sheep in New Zealand has plunged by 45% over 25 years.
To address the question of sheep versus human, one should plot the ratio of sheep-to-human directly. In this case, the designer probably faced a problem: because of the plunging population of sheep, the ratio has plunged steeply in 25 years. To make a point that "people are outnumbered more than 9 to 1", the designer didn't want to show a plunging trend. (Could this be the reason why the human population in 1982 was not printed?)
This is a case of too many details. Instead of manipulating the scale to distort the data, one can simply show the current ratio, or the average ratio in the last five years.
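Computing the ratio directly is trivial once both series are at hand. A sketch using the chart's two labeled sheep figures (70 million in 1982, 38 million in 2007) and rough, assumed human population figures for New Zealand:

```python
# Sheep-to-human ratio, computed directly as suggested.
# Sheep counts come from the chart's labels; human populations
# are rough assumed figures for New Zealand, not from the chart.
sheep = {1982: 70_000_000, 2007: 38_000_000}
humans = {1982: 3_200_000, 2007: 4_200_000}

ratios = {yr: sheep[yr] / humans[yr] for yr in sheep}
print(ratios[1982])            # roughly 22 sheep per person
print(round(ratios[2007], 1))  # roughly 9 - "outnumbered more than 9 to 1"
```

The direct ratio makes the designer's dilemma obvious: the "9 to 1" headline is true, but the ratio has fallen steeply since 1982, which the chart's shifting scale conveniently obscures.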
As the reader scans down to the bottom set of charts, a cognitive wedge appears: the curved scale of the New Zealand chart gives way to a normal uniform scale. These smaller charts are no less confusing, however.
The two lines on these two charts appear almost the same and yet the Australian chart (on the left) shows a ratio of 4 to 1 while the Icelandic chart (on the right) shows a ratio of 1.5 to 1. Makes you wonder whether each of the small multiples has its own dual axes.
Again, I'm not convinced that the time series adds anything to the message.
Reader Aaron W. came across this "Facts and Figures" infographic about Boise State University that seemingly is aimed at alumni of the school. Given that Boise State has a good reputation for analytics, Aaron found it disconcerting to see such a low-quality data graphic. (click on the image to see it in full size).
There are numerous little things to grumble about in each section of the chart. The larger issue though is the overall composition. When assembling a chart like this, it is important to provide a navigation path for readers, whether explicitly or through cues.
It's difficult to discern the organizing principles of this chart. Aaron felt this way: "the total information flow is haphazard, if not entirely incoherent. There is some valuable information here, but at best it gets lost in the shuffle."
For example, some statistics are for undergraduate students only, some are for graduate students, and some are offered in aggregate.
Confusion reigns. We learn that the school has total enrollment of 22K students but it's a little math quiz to learn how many are undergraduates. In certain sections, data about faculty members are mixed with those about students.
Not breaking out undergraduates from graduates is a particular problem when presenting demographics, such as age distributions, ethnicity, etc.
It's odd to present this distribution of age without remarking that the undergrads are shown on the left and the graduate students are shown on the right.
Then, the sections presenting counts of students, faculty, degrees, etc. overlap with sections presenting financial data.
A rethinking of this page should start with identifying the key questions readers would be interested in learning, and then organizing the data to suit those needs.
What makes this work is that the picture of the running back serves a purpose here, in organizing the data. Contrast this to the airplane from Consumer Reports (link), which did a poor job of providing structure. An alternative of using a bar chart is clearly inferior and much less engaging.
I went ahead and experimented with it:
I fixed the self-sufficiency issue, always present when using bubble charts. In this case, I don't think it matters whether readers know the exact number of injuries, so I removed all of the data labels from the chart.
Here are temptations that I resisted:
Not including the legend
Not including the text labels, which are rendered redundant by the brilliant idea of using the running guy
Alberto Cairo left a comment about "data decorations". This is the name he uses to describe something like the windshield-wiper chart I discussed the other day. The visual elements are purely ornamental and add nothing to the experience--one might argue that the experience is worse than just staring at the data table.
It just happens that I have another example of such a chart, submitted by Xan. This one is from Consumer Reports, and illustrates some findings from a recent survey on what things air travellers hate most. Good luck figuring all this out!
A few of these ideas work, such as the complaints about leg room being tied to the seated passengers inside the plane. But then, the data about people hating middle seats is placed in the upper left corner between the left wing and the tail. All of the atypically shaped charts (the cloud, the triangle, the octagon) seem to use the oft-criticized convention of coding the data onto just one dimension of these multi-dimensioned objects. I just find the organization of the text confusing and poorly structured.
Xan pulled something from a much older Consumer Reports. And they dared to use a boring bar chart:
A nice compromise would be to create some subsections under Airlines to group different types of complaints (stuff relating to seating, stuff about service, stuff about punctuality, etc.). Ask a designer to draw some icons (remember the NYT dog graphic!).