
A restaurant gem

Felix Salmon spoke highly of this Wall Street Journal chart, and I agree.


Why do I like this? Although it's a basic chart, they did many little things well.

  • They are brave enough not to print any of the actual data on the chart. In other words, no loss aversion.
  • The legend is integrated onto the chart, not banished to some corner or border, requiring readers to stray from the graph. For added effect, the A, B, C labels imitate the actual signs posted outside the restaurants.
  • Simple and effective use of colors
  • Sensible scales. It would be even better if they thinned out the horizontal scale for the C rating, say, using 10-point intervals instead of 5-point intervals. Although this is hard to accomplish with conventional software, an axis with different intervals in different regions can be surprisingly effective.
  • Using pencil-thin columns. The same chart with thicker columns would be both uglier and less effective.

(I'm not sure I like the up and right arrows on the axis titles. Is it better to remove the arrows and center the text?)


Tough love for chart infidelity

The New York Times Magazine published an article about marriage infidelities, which I didn't read, but it was popular enough that they did an online poll to obtain some instant feedback from readers. The result was shown in this cutesy graphic:


Note that they plotted the number of responses rather than the proportion of respondents, even though all the numbers fall between 0 and 100 and could easily be misread as percentages.

This chart is another good illustration of the self-sufficiency principle: if all the data are printed onto the chart, and readers must read those numbers to learn anything from it, then there was no need to create the chart in the first place. Imagine the above chart without the data labels, and you'll see why they are critical to this chart.

Below is a version in which I removed all the data labels, replacing them with an axis:


The two pink slabs were thrown in for a little chart-check. According to the designer, 6+6+6 is larger than 20. How is this so? Look at a blow-up of the "God says otherwise" bar of hearts:


The one whole heart in each bar ruins the string of half hearts. Little things can introduce infidelities into charts.
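The distortion can be reconstructed arithmetically. This is a hypothetical encoding, not taken from the graphic: suppose each half-heart stands for one response, but the designer caps every bar with one whole heart that also stands for a single response. Since a whole heart is twice as wide as a half-heart, every bar gains a phantom unit of width:

```python
# Hypothetical icon encoding (my assumption, not the designer's spec):
# each half-heart = 1 response, but each bar is capped with one whole
# heart that also counts as 1 response yet takes two half-heart widths.
def bar_width(count, whole_heart_cap=True):
    """Drawn width of a bar, measured in half-heart widths."""
    if whole_heart_cap and count >= 1:
        return (count - 1) + 2  # the cap adds a phantom half-width
    return count

three_sixes = 3 * bar_width(6)  # total drawn width of the 6+6+6 bars
one_twenty = bar_width(20)      # drawn width of the 20 bar
print(three_sixes, one_twenty)  # 21 21: a true total of 18 draws as wide as 20
```

Under this encoding, three bars totaling 18 responses occupy as much width as a single bar of 20, which is how the eye can be told that 6+6+6 exceeds 20.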

Taking pages from Gelman

Andrew Gelman has posted a few times recently on graphics-related topics. Here are the links, and my reaction:

  • He and I both think line charts are under-valued. Some people really, really hate using line charts when the horizontal axis consists of categorical data. As I've explained repeatedly (see posts on profile charts), by drawing lines to connect these categories, all I'm doing is exposing the eye movements we make while reading the bar charts that are often the default option for such data.
  • Regarding a very "ugly" chart on factors affecting military spending, Gelman wrote the following spot-on sentences:
    • Just as a lot of writing is done by people without good command of the tools of the written language, so are many graphs made by people who can only clumsily handle the tools of graphics. The problem is made worse, I believe, because I don't think the creators of the graph thought hard about what their goals were.
    • That last point is exactly why I placed at the top of the Trifecta Checkup the question of figuring out what key question the chart is supposed to address.
  • Seems to me the above chart presents in a complicated fashion a simplistic model of military spending share: military spend = military share of GDP x GDP, therefore relative military spend increases if either relative GDP increases or relative military share of GDP increases (or both). So, in each period, all we need to know is whether the US has increased/decreased its military share of GDP relative to the rest of the world, and whether the US has increased/decreased its GDP relative to the rest of the world. End of story.
  • Some work on visually displaying telephone call data. Gelman's correspondent nominated this and another chart printed in the NYT as worst of the year. Chris Volinsky disagrees and points us to a nice article. The map shown here is definitely not close to being worst of the year. The other chart, with a lot of lines, is pretty bad - and raises the question I asked the other day: what makes a "pretty" chart?
  • Regarding the AT&T analysis, I have a few questions for the researchers: How representative are the AT&T data, especially at the county level? Do we have to worry about nonrandom missing data? Also, how should one interpret the large swath of the Midwest shown in the "background color"? Is it that there weren't sufficient data, or did the data show that all of those states belong together in one super-cluster? Finally, how does a shift in the "similarity" metric change the look of the map?
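The decomposition behind the military-spending point can be sketched numerically. All figures below are invented for illustration:

```python
# The identity: military spend = (military share of GDP) x GDP.
# Relative military spend therefore moves only with the two relative
# quantities discussed above. All figures are made up.
def relative_military_spend(us_share, us_gdp, row_share, row_gdp):
    """US military spending as a multiple of rest-of-world spending."""
    return (us_share * us_gdp) / (row_share * row_gdp)

base = relative_military_spend(0.04, 15e12, 0.02, 50e12)
# Doubling US GDP relative to the world, shares held fixed,
# doubles relative military spend.
double_gdp = relative_military_spend(0.04, 30e12, 0.02, 50e12)
print(double_gdp / base)  # 2.0
```

Either relative quantity moving up moves relative spending up proportionally, which is the whole story the chart needs to tell.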

The meaning of pretty pictures and the case of 15 scales

When we call something a "pretty picture", what do we mean? 

Based on the evidence out there, it would seem like "pretty" means one or more of the following:

  • unusual: not your Grandma's bar chart or line chart
  • visually appealing: say, have irregular shapes, lots of colors, curved lines and so on
  • complex: if you don't get the point right away, the chart must be smart, and must contain a lot of information
  • data-rich: a variant of complex


I pondered that question while staring at this chart, reprinted in the NYT Magazine, which pitched a new book by Craig Robinson called "Flip Flop Fly Ball". According to the editors, the book is a "beautiful, number-crunched (sic) combination of statistical and graphic-design geekery". So here's Exhibit A:

This chart is supposed to tell us whether big payroll equals success in Major League Baseball, and success is measured variously by making the playoffs, making the championship series or winning the championship. It nicely uses a relatively long time horizon of 15 years.

The problem: how are we supposed to learn the answer to the question?

To learn it, we have to go through these steps:

  • Read the fine print under the title, which tells us the vertical scale is the rank by payroll: within each season, the top spender is at the top, and the bottom spender at the bottom. (Strictly speaking, there are 15 different scales; see the discussion below.)
  • Figure out that the black row aligns all of the championship teams at the same vertical level.
  • Realize that the more teams listed below the black line, the bigger the payroll of that season's championship team.
  • Alternatively, the more teams found above the black line, the smaller the payroll of that year's winning team.

From that, we see that for almost every season in the last 15 years, the winner comes from a relatively free-spending team. Florida in 2003 is a big outlier.
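The reading rule above boils down to a single count. A minimal sketch, where the example rank for 2003 Florida is my guess rather than a figure read off the chart:

```python
# With N teams ranked by payroll (1 = top spender), a champion at rank
# r leaves N - r teams below the black line. More teams below means a
# richer champion. The rank of 25 for 2003 Florida is a guess.
def teams_below_black_line(n_teams, champion_rank):
    return n_teams - champion_rank

print(teams_below_black_line(30, 1))   # 29: the top spender wins
print(teams_below_black_line(30, 25))  # 5: a low-payroll champion
```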


Maybe that isn't too bad. Now, try to interpret the blue boxes, which label all the playoff teams in every season. Is it that playoff teams also are bigger spenders than non-playoff teams?

To learn this, try the following steps:

  • Ignore the relative height of the columns from season to season, and focus only on the relative positions of the blue slots within each column.
  • Ask whether these blue slots are more likely to be crowded towards the top of the column than the bottom.

The answer should be obvious but why does it feel so hard?


You may be confused by the vertical scale. Is it the case that in 2003, the entire league decided to splurge on spending? Does the protruding tower in 2003 indicate especially high payrolls?

No, it doesn't. It turns out there are really 15 separate vertical scales on this one chart; each column has to be viewed separately. There is a ranking within each column, but the relative height from one column to the next means nothing. Each column is hinged to the black row, which marks the rank by payroll of the championship team in that season.

The decision to anchor the columns in this way is what dooms this chart. In the junkart version below, I reversed this decision and ended up with a much clearer picture:
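The re-anchoring amounts to ranking teams by payroll within each season on one common 1-to-N scale, rather than hinging each column to the champion. A minimal sketch, with invented payroll figures (in $ millions):

```python
# Rank teams by payroll within each season on a shared scale.
# Payroll figures below are invented for illustration.
from collections import defaultdict

payrolls = [
    ("2003", "Yankees", 153), ("2003", "Marlins", 49), ("2003", "Braves", 106),
    ("2004", "Yankees", 184), ("2004", "Red Sox", 127), ("2004", "Cardinals", 83),
]

by_season = defaultdict(list)
for season, team, pay in payrolls:
    by_season[season].append((team, pay))

ranks = {}
for season, teams in by_season.items():
    # Rank 1 = biggest spender; every season now shares the same scale.
    for rank, (team, _) in enumerate(sorted(teams, key=lambda t: -t[1]), start=1):
        ranks[(season, team)] = rank

print(ranks[("2003", "Marlins")])  # 3: the champion was the season's low spender
```

Once every column sits on the same scale, the eye can compare positions across seasons directly, which is all the original question requires.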


It's now clear that almost all the playoff teams come from the top quartile or top third of the table in terms of payroll. In more recent years, the correlation between spending and success seems less assured - perhaps partly a result of the analytics revolution, as nicely portrayed in Moneyball. It is still true that any team in the bottom third of the payroll scale has little chance of making the playoffs; however, once a smaller-payroll team makes the playoffs, it appears to do well: in three of the last four seasons, a small-payroll team has made the finals.

Note that I grayed out the four cells at the bottom left: there were only 28 teams before 1997. I also removed the names of the teams that didn't make the playoffs, which serve no purpose in a chart like this.


That's the descriptive statistics. It's really hard to draw robust conclusions from such data. You can say it's harder for small-payroll teams to perform consistently well over a regular season but easier in a short playoff series - so in a sense, we are looking at luck, not skill.

But could it be that those small-payroll teams, given that they made the playoffs, must have had some unusual success in that season, perhaps because they discovered some young talent that cost next to nothing, so that making the playoffs despite the smaller payroll is a good predictor of doing well in the playoffs?

The other important issue is that by plotting the rank of payroll, rather than the payroll itself, the scale of payroll differences has been taken out of the picture. The team at the median rank most likely spent much less than half of what the top-ranked team spent. If you grab the actual payroll amounts, there is much more you can do to display this data.
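A quick sketch of what the rank scale conceals, using invented payroll figures (in $ millions) for one season:

```python
# Ranks space teams evenly; raw payrolls do not. With these invented
# figures, the median-rank team spends well under half of the top team,
# even though on the rank scale every step looks the same size.
payroll_musd = [206, 162, 149, 121, 98, 97, 96, 84, 81, 75, 67, 66, 62, 55, 40]

top = max(payroll_musd)
median = sorted(payroll_musd)[len(payroll_musd) // 2]
print(median / top)  # about 0.41: far below half the top payroll
```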