
Little orange circles spell trouble

Economists Banerjee and Duflo are celebrated for their work on global poverty, but they won't be winning awards for the graphics on the website that supports their book "Poor Economics" (link). Thanks to a reader for the pointer.

Here is one view ("radial") of the data:

[Chart: the "radial" view of the time-use data]

Here is the so-called linear view of the same data:

[Chart: the "linear" view of the time-use data]

And here is what the data really look like:

[Chart: Junk Charts redo of the time-use data]

***

The linear view contains a host of misleading features. The length of the row of bubbles does not indicate the total hours the individual spends on the selected chores, not even directionally. Instead, the length is a proxy for the number of bubbles per person, which is a measure of the variety of chores: the more chores, the longer the chain of bubbles.

And then, you ask where the color legend is. This information is hidden behind the mouse-over effect, or in the drop-down menu reached by clicking "compare daily activities". It's a waste of energy to have to click on various bubbles just to learn which color is mapped to which chore, isn't it?

The chart also contains a brainteaser. What is the logic behind the order of the individuals?

And finally, what to make of the little orange circles dancing around the chart? (They also decorate the radial view.) Go to the page and try clicking.

***

Our version aims for "less is more". One of the tricky features of this dataset is the profusion of little categories capturing daily activities that occupy 0.5 to 1 hour of someone's time. Instead of printing every activity, I chose to bundle all activities requiring less than 1 hour into a single "Others" category.
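For readers who want to try this at home, here is a minimal sketch of the bundling step in pandas. The activity names and hours below are made up for illustration; only the under-one-hour cutoff comes from the discussion above.

```python
import pandas as pd

# Hypothetical time-use data: hours per day spent on each activity by one person.
hours = pd.Series({
    "fetching water": 0.5,
    "collecting firewood": 0.75,
    "cooking": 2.0,
    "farm work": 5.5,
    "child care": 1.5,
    "cleaning": 0.5,
})

# Bundle every activity that takes less than 1 hour into a single "Others" category.
small = hours[hours < 1]
bundled = hours[hours >= 1].copy()
bundled["Others"] = small.sum()

print(bundled.sort_values(ascending=False))
```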

The "total" category is a reference level. One can also choose to print white boxes around every single bar and eliminate that row.



Why were they laughing?

Felix Salmon linked to this whimsical chart, featuring the frequency of laughter at the Federal Reserve's FOMC meetings in the lead-up to the bubble.

[Chart: laughter counts at FOMC meetings ("FOMC Funnies")]

The Daily Stag Hunt blog originated this chart (link), and they juxtaposed it with the Case-Shiller 20-city home price index to make the case that the Fed members were laughing all the way to, er, the McMansion.

[Chart: Case-Shiller 20-city home price index]

***

There is little doubt that the underlying narrative is correct, that the Fed governors did not see the bubble, and failed to respond to it appropriately. It is both tempting and amusing to find correlations of this type that would make this point clear.

But as I have discussed elsewhere, one must be extremely careful when looking at correlations of time-series data. Consider the fact that the FOMC laughter shows a unidirectional (up) pattern throughout the period being depicted. Consider, next, that for much of this period, the U.S. (and much of the world) was riding a massive bubble. These two facts alone guarantee that we can find hundreds of data series showing very strong correlation with the FOMC laughter track. Pick any economic data series from that era, whether home sales, mortgages, retail sales, or stock market prices, and you will find a unidirectional (up) pattern.
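To see how easily trend alone manufactures correlation, here is a small simulation sketch (not the actual FOMC or Case-Shiller data): two unrelated series that both drift upward over the same window show a near-perfect correlation in their levels, even though their period-to-period changes are unrelated.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60  # sixty periods, e.g. five years of monthly data

# Two unrelated series that both drift upward over the same window.
series_a = np.cumsum(1.0 + rng.normal(0, 1, n))   # stand-in for a laughter count
series_b = np.cumsum(2.0 + rng.normal(0, 2, n))   # stand-in for a home price index

# Correlation of the levels is dominated by the shared upward trend.
print(np.corrcoef(series_a, series_b)[0, 1])      # typically close to 1

# Correlation of the period-to-period changes strips out the trend.
print(np.corrcoef(np.diff(series_a), np.diff(series_b))[0, 1])  # near 0
```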

So we must dig into the data more to understand the real connection between FOMC laughter and average large-city home prices.

***

[Charts: Junk Charts redo of the laughter analysis, shown on the right]

The top chart on the right shows the expected correlation between the Case-Shiller index and laughter. Note that the Case-Shiller data is an index set to 100 in January 2000. I used a 4-meeting moving average to smooth out the fluctuations in the laughter data.

Because meetings are one to two months apart, one should expect the participants to be reacting to the latest data, i.e. the change in the Case-Shiller index over the previous one or two months. What the top chart shows is something different: everything is relative to January 2000. (It's hard to imagine someone at the February 2005 meeting mulling over the 80% increase since January 2000, rather than the increase since the last meeting.) Thus, I produced the middle chart, a scatter plot of the one-month changes in the Case-Shiller index against the average number of laughs.

The case for strong correlation has disappeared. It now looks like the laughs were most acute when the home prices declined versus the prior month.

The bottom chart is the same as the middle chart, except that I looked at home price changes two months apart. We observe an identical pattern.
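For those who want to reproduce this kind of analysis, here is a rough sketch of the transformations behind the middle and bottom charts. The numbers are placeholders, not the actual transcript counts or Case-Shiller values; the point is simply the 4-meeting moving average of laughs set against one- and two-month index changes.

```python
import pandas as pd

# Placeholder inputs: one row per FOMC meeting, already aligned by date.
df = pd.DataFrame({
    "laughs": [12, 15, 14, 18, 22, 25, 30, 28, 35, 40, 45, 50],
    "cs_index": [100, 103, 107, 112, 118, 125, 133, 140, 148, 155, 158, 157],
})

# Smooth the noisy laughter counts with a 4-meeting moving average.
df["laughs_ma4"] = df["laughs"].rolling(window=4).mean()

# What participants plausibly react to: the change since the previous reading,
# not the cumulative change since January 2000.
df["cs_change_1"] = df["cs_index"].diff(1)
df["cs_change_2"] = df["cs_index"].diff(2)

# Correlations analogous to the middle and bottom charts.
print(df["laughs_ma4"].corr(df["cs_change_1"]))
print(df["laughs_ma4"].corr(df["cs_change_2"]))
```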

***

This pattern shouldn't surprise us because it's actually in the original charts. During the last few meetings of 2006, the index had already stopped rising while the laughter continued to grow. We didn't pay attention because we were mesmerized by the long period of steady increases in both data sets.



A counterfeit data graphic

Just as there are counterfeit handbags that look like the real thing, there are fake data graphics that look like the real thing. Reader San C. shows us an example of this (found on the All Things D blog here):

[Chart: Forrester survey bubble chart]

At first sight, this appears to be a bubble chart. Further, the legend is telling us that the colors are meaningful. So, the bubbles correspond to different types of data, grouped by color, and the size of the bubbles represents the relative level of concern expressed by respondents.

That would be true if we were looking at the "real thing". But this is a counterfeit. How do we know it's fake?

[Detail: the Social Security number and credit card number bubbles, shown on the right]

First, the bubbles are not sized to scale. Just look at the Social Security number versus the credit card number (shown on the right). A 1% difference shouldn't be visible on this sort of chart, but the credit card bubble is clearly smaller.
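To put a number on "shouldn't be visible": if bubble area encodes the value, a one-percentage-point gap produces well under a one-percent difference in diameter. A quick back-of-the-envelope check (generic values, not the actual survey figures):

```python
import math

# If area encodes the value, radius scales with the square root of the value.
def radius(value, scale=1.0):
    return scale * math.sqrt(value)

v_big, v_small = 62, 61   # placeholder values one percentage point apart
r_big, r_small = radius(v_big), radius(v_small)

# The smaller bubble's diameter is less than 1% smaller, effectively invisible.
print(1 - r_small / r_big)   # about 0.008
```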

Second, the legend gives the impression that the tint of the color carries information. However, it really doesn't. I lined up all of the green bubbles in order of decreasing data, and couldn't find any pattern to the tints.

[Chart: the green bubbles lined up in order of decreasing value]

There also isn't a clear pattern in the location of specific bubbles. Were they randomly scattered onto the chart?

In summary, this is the ultimate non-self-sufficient chart. If we remove the actual printed data from this chart, we're left with nothing.

***

This data can be put onto a grouped bar chart or a dot plot.

[Chart: Junk Charts redo of the Forrester data]
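For anyone who wants to sketch such a display, here is a bare-bones dot plot in matplotlib. The categories and percentages below are placeholders rather than the actual Forrester numbers; adding the color grouping is straightforward from here.

```python
import matplotlib.pyplot as plt

# Placeholder survey results (percent concerned), not the actual Forrester data.
items = ["Social Security number", "Credit card number", "Health condition",
         "Location", "Web browsing history"]
pct = [72, 71, 55, 48, 40]

fig, ax = plt.subplots(figsize=(6, 3))
ax.scatter(pct, range(len(items)))
ax.set_yticks(range(len(items)))
ax.set_yticklabels(items)
ax.set_xlim(0, 100)
ax.set_xlabel("Percent of respondents concerned")
ax.invert_yaxis()          # largest value on top
plt.tight_layout()
plt.show()
```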

***

Aside from the graphical aspects, we should pay attention to some statistical issues.

The article does not stress enough the potential bias of this survey. The survey is an online survey of Internet users. Their average opinion about Internet-related issues should not be used to represent the opinion of the average American without careful consideration. There is a good possibility that people who have concerns about Internet privacy are less likely to be found on the Internet.

I also wonder if survey takers clearly understand the poll question. What does it mean for a company to be "accessing my personal information"? Does it mean I give the company the information (such as my credit card number) because I need to complete a transaction? Does it mean the company purchases such information from an information exchange? And if so, with or without informing me?

In particular, I don't understand the 28% who say they are not concerned about companies accessing their social security number.



Flaming out

Long-time reader Omegatron tells us about this report card issued by the ACLU on the liberal-ness of different potential candidates for U.S. President. (link to PDF)

[Chart: ACLU report card with torch ratings]

Perhaps because the designer realizes what a mess this chart is, he decides to greet readers with the legend text, which is typically shoved to the side of a chart: "Ratings are determined on zero-to-four torch scale." This leaves me scratching my head because I see flaming torches and I see black torches without flames. Is it really an 8-point scale with half points?

Eventually, I realized that the torches without flames are part of the background of the chart; they are just placeholders telling us that up to four points can be scored on each issue. We should really be counting the orange flames!

Or perhaps the designer is cleverly differentiating zero torches, which is a negative rating, from the big question mark, which indicates that the candidate has no stated position, and thus no rating, on that issue. Keeping those two situations separate is very important. In the following reconstruction, instead of filling in all of the zero-rating boxes (of which there are many), I leave zero ratings blank and insert a gray box to indicate "no rating".

[Chart: Junk Charts redo of the ACLU report card]

I haven't reproduced the whole chart with all the issues but you get the idea of where I'm going with it.

For those supporting liberal viewpoints, this is a deeply disappointing chart. It shows that the Democratic President rates below some of the Republican (Libertarian) contenders on supporting liberal causes. It also says much that Gingrich, Perry, Romney and Santorum together accumulated a total of four points across all these categories, equal to what a single candidate can earn in just one category. (This type of information becomes clearer when the candidates are sorted in a meaningful way, as opposed to alphabetically.)



Two tales of one dataset

The following two charts plot the same data, the yearly amount of rainfall in Los Angeles over the last two decades or so. (The original chart, on the left, came from the LA Times. Link here.) Why do they give such different impressions?


[Charts: LA Times original (left) and Junk Charts redo (right) of the LA rainfall data]

The left chart appears very busy despite the simplicity of the data set, thanks to printing the entire set of 21 numbers, each to two decimal places, on the chart itself. The axis labels provide no extra information once all the data has been printed, and it is highly unlikely that any newspaper reader requires such precise measurements of rainfall.

Chances are the reader is interested in how the general trend of rainfall in recent years compares to the historical pattern. Credit the designer for pulling the relevant data, including the average, maximum and minimum rainfall on record. On the right chart, all three historical numbers are incorporated into the axis so that they can act as reference levels.

Not to mention the axes were switched to preserve the usual placement of time on the horizontal axis.

The bar chart emphasizes the absolute values of each rainfall amount while the dot plot displays the differences between each measurement and the historical average. On the right chart, it is easy to observe whether any year's rainfall is above or below the expectation. Over the last two decades, it appears there were about as many years above as below the average, and the overages and underages do not exhibit any clustering.
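Here is a rough sketch of the transformation behind the right-hand chart: subtract the historical average from each year's rainfall and plot the deviations, with the record maximum and minimum as reference levels. The rainfall values below are placeholders, not the LA Times figures.

```python
import matplotlib.pyplot as plt

# Placeholder rainfall data (inches per season), not the actual LA Times figures.
years = list(range(1991, 2012))
rain = [12.5, 21.0, 8.1, 24.4, 12.0, 10.9, 31.0, 9.1, 11.6, 17.9, 4.4,
        16.4, 9.2, 37.3, 13.2, 3.2, 13.5, 9.1, 16.4, 20.2, 8.7]

hist_avg, hist_min, hist_max = 15.0, 3.2, 38.2   # historical reference levels

# Plot each year's deviation from the historical average.
deviations = [r - hist_avg for r in rain]
fig, ax = plt.subplots(figsize=(7, 3))
ax.scatter(years, deviations)
ax.axhline(0, linewidth=1)                       # the historical average
ax.axhline(hist_max - hist_avg, linestyle="--")  # wettest season on record
ax.axhline(hist_min - hist_avg, linestyle="--")  # driest season on record
ax.set_ylabel("Inches above / below historical average")
plt.tight_layout()
plt.show()
```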

***

From a Trifecta checkup perspective, we find that the choice of data is not attuned to the purpose of the chart. The right data has been collected; a small transformation would have made all the difference. The selection of the chart type also fails to address the purpose of the chart.


The war on infographics

[Infographic: McArdle's own infographic about infographics, shown on the right]

Megan McArdle (The Atlantic) is starting a war on the infographics plague. (Here, infographics means infographics posters.) Excellent debunking, and absorbing reading.

It's a long post. Her overriding complaint is that designers of these posters do not verify their data. The "information" shown on these charts is frequently inaccurate, and the interpretation is sloppy.

In the Trifecta checkup framework, this data deficiency breaks the link between the intent of the graphic and the (inappropriate) data being displayed. (Most infographics posters also fail to find the right chart type for the data being displayed.)

While I have often raised similar complaints in the past (my current stance is to link only to good infographics posters, which explains their scarcity on this blog), one of the significant contributions of the infographics "plague" is the status-hiking of the story-telling prerogative. Unfortunately, this plague is yet another case of elevating stories above the data, which (to a lesser extent) is a complaint that Andrew Gelman and I shared about the "Freakonomics" trend. (See here, and Andrew's further comments.)

This doesn't stop McArdle from adding her own contribution to the infographics plague... the poster shown on the right.

Do yourself a favor and read her post in full. Link here.


Nothing works while visualizing a poll

Reader John G. sent an example of a spectacular failure in automated chart generation (via a LinkedIn poll result display):

[Image: LinkedIn poll result display, shown on the right]

Almost nothing works at all.

The survey question should be placed directly above and inside the box containing the bar chart.

Zero means zero, not the unspecified small values indicated by tiny bars.

Any pollster will use the poll result to make general statements about some predetermined group of people, and so the emphasis should be on the proportion of respondents, as opposed to the absolute number of respondents, selecting any given response.

Half-persons do not exist despite what the (excessive) gridlines imply.

The color scheme chosen for the bar chart conflicts with that chosen for the demographic data shown below.

The term "overall demographics" should be replaced by "all respondents". The word "demographics" can be placed above the bottom three sections, which all pertain to demographic data.

It's not clear if the gender symbols scale with the proportion of respondents, but in any case, all readers will be forced to read the small-font data labels at the bottom to make sense of the data.

Nor does one know what job title a respondent holds if he is not a "manager". It is also curious why "manager" is given the darker tint in the "overall demographics" section but the lighter tint in the "YES!" section.

The "all respondents" data should be replicated on each of the two bottom charts since they act as a reference point to interpret the demographics for each response.

Do away with the Age column since no data is available.

Finally, as John pointed out:

With only three respondents answering YES to the question, how can the distribution be 50% managers and 50% everybody else?  With four total respondents, how can the overall demographic be about 66% managers and 33% everybody else?

Perhaps one of the three left the question unanswered? (Typically, a separate category of "unknown" would be created for this purpose.) Perhaps half-persons do exist in this universe?

***
In what follows, I'll assume that 1 of the 3 people saying "YES!" is a manager, while the one responding "What is an arc flash survey for?" is also a manager, making two managers out of four respondents.

[Chart: Junk Charts redo of the LinkedIn poll, shown on the right]

The following display uses the small-multiples principle, presenting subgroup data the same way as the overall data. The emphasis is on the proportion of respondents.

The "female" category is labelled "NA" as opposed to zero because we do not know how females would respond since no woman filled out the survey; this is not the same as saying females would not select "YES!" or "What is the survey for?".

The "Overall" section is both a data display itself and a reference point for the other sections of the chart.

The horizontal orientation has the advantage of keeping the bars close together, but it has the disadvantage of awkwardly positioning the names of the responses. Conversely, if flipped vertically, the names of the responses would be neatly displayed, but it would be difficult to keep the columns close together.
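For completeness, here is a minimal sketch of the small-multiples idea applied to this poll. It uses the assumption stated above (two managers among the four respondents); everything else, including the abbreviated response labels, is mine.

```python
import matplotlib.pyplot as plt

# Respondent-level data under the assumption above: 3 "YES!", 1 "What is ...?",
# with one YES respondent and the lone "What is ...?" respondent being managers.
responses = ["YES!", "YES!", "YES!", "What is it for?"]   # abbreviated labels
is_manager = [True, False, False, True]

def proportions(rows):
    """Share of each response among the given rows (list of response strings)."""
    total = len(rows)
    return {r: rows.count(r) / total if total else float("nan")
            for r in ["YES!", "What is it for?"]}

panels = {
    "Overall": responses,
    "Managers": [r for r, m in zip(responses, is_manager) if m],
    "Non-managers": [r for r, m in zip(responses, is_manager) if not m],
}

fig, axes = plt.subplots(1, 3, figsize=(9, 2.5), sharex=True)
for ax, (title, rows) in zip(axes, panels.items()):
    props = proportions(rows)
    ax.barh(list(props.keys()), list(props.values()))
    ax.set_xlim(0, 1)
    ax.set_title(title)
plt.tight_layout()
plt.show()
```

Every panel shares the same zero-to-one scale, so the "Overall" panel doubles as the reference point described above.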