Visual story-telling: do you know or do you think?

One of the most important data questions of all time is: do you know? or do you think?

And one of the easiest traps to fall into is: I think, therefore I know.

***

Visual story-telling can be great but it can also mislead. Deception sometimes happens when readers are nudged to "fill in the blanks" with stuff they think they know, but they don't.

A Twitter reader asked me to look at the map in this Los Angeles Times (paywall) opinion column.

Latimes_lifeexpectancy_postcovid

The column promptly announces its premise:

Years of widening economic inequality, compounded by the pandemic and political storm and stress, have given Americans the impression that the country is on the wrong track. Now there’s empirical data to show just how far the country has run off the rails: Life expectancies have been falling.

The writer creates the expectation that he will reveal evidence in the form of data to show that life expectancies have been driven down by economic inequality, pandemic, and politics. Does he succeed?

***

The map portrays average life expectancy (at birth) for some mysterious, presumably very recent, year for every county in the United States. From the color legend, we learn that the bottom-to-top range is about 20 years. There is a clear spatial pattern, with the worst results in the south (excepting south Florida).

The choice of colors is telling. Red and blue on a U.S. map carry heavy baggage, as they signify the two main political parties in the country. Given that the author believes politics to be a key driver of health outcomes, the use of red and blue here is deliberate. Throughout the article, the columnist connects the lower life expectancies in southern states to their politics.

For example, he said "these geographical disparities aren't artifacts of pure geography or demographics; they're the consequences of policy decisions at the state level... Of the 20 states with the worst life expectancies, eight are among the 12 that have not implemented Medicaid expansion under the Affordable Care Act..."

Casual readers may fall into a trap here. There is nothing on the map itself that draws the connection between politics and life expectancies; the idea is evoked purely through the red-blue color scheme. So, as readers, we are filling in the blanks with our own politics.

What could have been done instead? Let's look at the life expectancy map side by side with the map of the U.S. 2020 Presidential election.

Junkcharts_lifeexpectancy_elections

Because of how close recent elections have been, we may expect the political map to show a nice balance of red and blue, but it doesn't. The Democrats' votes are heavily concentrated in densely-populated cities, so most of the Presidential election map is red. When the maps are placed next to each other, it's obvious that politics doesn't explain the variance in life expectancy well. The Midwest is deep red and yet it has above-average life expectancies. I have circled various regions that contradict the claim that Republican politics drove life expectancies down.

It's not sufficient to point to the South, in which Republican votes and life expectancy are indeed inversely correlated. A good theory has to explain most of the country.
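For readers who want to attempt this kind of side-by-side comparison themselves, here's a minimal sketch of two county-level choropleths using geopandas. The file names and column names (us_counties.shp, county_metrics.csv, life_expectancy, rep_vote_share) are hypothetical placeholders, not the Times's data.

```python
# Sketch: two county-level choropleths side by side. All file and column
# names below are hypothetical placeholders.
import geopandas as gpd
import pandas as pd
import matplotlib.pyplot as plt

counties = gpd.read_file("us_counties.shp")                         # placeholder shapefile
metrics = pd.read_csv("county_metrics.csv", dtype={"GEOID": str})   # placeholder data
merged = counties.merge(metrics, on="GEOID")

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(14, 6))

# Left: life expectancy on a neutral sequential palette (no political baggage).
merged.plot(column="life_expectancy", cmap="viridis", legend=True, ax=ax1)
ax1.set_title("Life expectancy at birth")
ax1.axis("off")

# Right: 2020 Republican vote share, for visual comparison only.
merged.plot(column="rep_vote_share", cmap="Reds", legend=True, ax=ax2)
ax2.set_title("2020 Republican vote share")
ax2.axis("off")

plt.tight_layout()
plt.show()
```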

***

The columnist also suggests that poverty is the cause of low life expectancy. That, too, cannot be gleaned from the published map. Again, readers are nudged to use their imaginations to fill in the blanks.

Data come to the rescue. Here is a side-by-side comparison of the map of life expectancies and the map of median incomes.

Junkcharts_lifeexpectancy_income

A similar conundrum. While the story feels right in the South, it fails to explain the northwest, Florida, and various other parts of the country. Take a look again at the circled areas. Lower income brackets are also sometimes associated with high life expectancies.

***

The author supplies a third cause of lower life expectancies: Covid-19 response. Because Covid-19 was the "most obvious and convenient" explanation for the loss of life expectancy during the pandemic, this theory suggests that the red areas on the life expectancy map should correspond to the regions most ravaged by Covid-19.

Let's see the data.

Junkcharts_lifeexpectancy_covidcases

The map on the right shows the number of confirmed cases through June 2021. As before, the correlation holds somewhat in the South but there are notable exceptions, e.g. the Midwest. We also have states with low Covid-19 case counts but below-average life expectancy.

***

What caused the decline of life expectancy in the U.S. - which began before the pandemic, and has continued beyond - is highly complex, beyond what a single map or a pair of maps or a few pairs of maps could convey. Showing a red-blue map presents a trap for readers to fall into, in which they start thinking, without knowing.

 


Area chart is not the solution

A reader left a link to a Wiki chart, which is ghastly:

House_Seats_by_State_1789-2020_Census

This chart concerns the trend of relative proportions of House representatives in the U.S. Congress by state, and can be found at this Wikipedia entry. The U.S. House is composed of Representatives, and the number of representatives is roughly proportional to each state's population. This scheme actually gives small states disproportionate representation, since the lowest number of representatives is 1 while the total number of representatives is fixed at 435.

We can do a quick calculation: 1/435 = 0.23%, so any state that has less than 0.23% of the population is over-represented in the House. Alaska, Vermont and Wyoming are all close to that level. The primary way in which small states get larger representation is via the Senate, which seats two senators per state no matter the size. (If you've wondered about Nate Silver's website: 435 Representatives + 100 Senators + 3 for DC = 538 electoral votes for U.S. Presidential elections.)
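To make the arithmetic concrete, here's a quick Python sketch of the over-representation threshold; the population shares are rough illustrative figures, not official census values.

```python
# Each House seat represents 1/435 of the chamber; a state whose population
# share is below that but which still holds one seat is over-represented.
seats_total = 435
threshold = 1 / seats_total            # about 0.0023, i.e. roughly 0.23%

# Rough population shares, for illustration only (not official census figures).
pop_share = {"Wyoming": 0.0017, "Vermont": 0.0019, "Alaska": 0.0022, "California": 0.118}

for state, share in pop_share.items():
    status = "over-represented" if share < threshold else "not over-represented"
    print(f"{state}: {share:.2%} of population vs {threshold:.2%} per seat -> {status}")
```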

***

So many things have gone wrong with this chart. There are 50 colors for 50 states. The legend arranges the states by the appropriate metric (good) but in ascending order (bad). This is a stacked area chart, which makes it very hard to figure out the values other than the few at the bottom of the chart.

A nice way to plot this data is a tile map with line charts. I found a nice example that my friend Xan put together in 2018:

Xang_cdcflu_tilemap_lines

A tile map is a conceptual representation of the U.S. map in which each state is represented by an equal-sized square. The coordinates of the states are distorted in order to line up the tiles. A tile map is a small-multiples setup in which each square contains a chart of the same design to facilitate inter-state comparisons.

In the above map, Xan also takes advantage of the foregrounding concept. Each chart actually contains the lines for all 50 states, shown in gray, while the line for the specific state is bolded and shown in red.
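The fore/backgrounding idea is easy to prototype. Here's a minimal matplotlib sketch using simulated data (random numbers standing in for seat shares); this is my own illustration, not Xan's code.

```python
# Sketch of the fore/backgrounding technique: all 50 series in light gray,
# one highlighted series drawn on top in red. Data here are simulated.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
years = np.arange(1790, 2021, 10)
shares = rng.random((50, len(years)))        # stand-in for seat shares
shares = shares / shares.sum(axis=0)         # normalize so shares sum to 1 each year

fig, ax = plt.subplots(figsize=(6, 4))
for i in range(50):
    ax.plot(years, shares[i], color="lightgray", linewidth=0.8)   # background
focus = 4                                     # index of the highlighted state
ax.plot(years, shares[focus], color="crimson", linewidth=2.5)     # foreground
ax.set_ylabel("Share of House seats")
plt.show()
```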

***

A chart with 50 lines looks very different from one with 50 areas stacked on each other. California, the most populous state, accounts for only about 12% of the total population, so all 50 lines are compressed into a narrow range and cross one another like spaghetti. Thus, the fore/backgrounding is important to keep the chart readable.

I suspect that the designer chose a stacked area chart because the line chart looked like spaghetti. But that's the wrong solution. While the lines no longer overlap each other, it is a real challenge to figure out the state-level trends - one has to focus on the heights of the areas, rather than the boundary lines.

[P.S. 2/27/2023] As we like to say, a picture is worth a thousand words. A Twitter reader with the handle LHZGJG made the tile map I described above. It looks like this:

Lhzgjg_redo_houseapportionment

You can pick out the states with the key changes really fast: California, Texas, and Florida on the upswing; New York and Pennsylvania going down. I like that the state names are spelled out. Little tweaks are possible but this is a great starting point. Thanks, LHZGJG!]

 


Getting simple charts right

Ian K. submitted this chart on Twitter:

Iankos_chicagocops

The chart comes from a video embedded in this report (link) about Chicago cops leaving their jobs.

Let's start with the basics. This is an example of a simple line chart illustrating a time series of five observations. The vertical axis starts at 10,000 instead of 0. With this choice, the designer wants to focus on the point-to-point changes in values, rather than their relation to the initial value.

Every graph has add-ons that assist cognition. On this chart, we have axis labels, gridlines and data labels. Every add-on increases reading time so we should be sparing.

First consider the gridlines. In the following chart, I conduct a self-sufficiency test by removing the data labels from the chart:

Redo_wgn9chicagocops_junkcharts_selfsufficiency

You can see that the last three values present no problems. The first two, especially the first value, are hard to read - because the top gridline is missing! The next chart restores the bounding gridline, so you can see the difference that one small detail can make:

Redo_wgn9chicagocops_junkcharts_addedgridline

***

Next, let's compare the following versions of the chart. The left one contains data labels without gridlines and axis labels. The right one has the gridlines and axis labels but no data labels.

Redo_wgn9chicagocops_gridlinesdatalabels

The left chart prints the entire dataset onto the chart. The reader is in essence reading the raw data. That appears to be the intention of the chart designer, as the data labels are set in a large font and placed inside shiny white boxes. The vertical positions of the boxes determine the reader's perception, as they catch more of our attention than the dots that actually represent the data.

The right chart highlights the dots and the lines between them. The gridlines are so thick and heavy that they distract rather than assist. This chart presumes that the reader isn't as interested in the precise numbers as in the trend.

***

As Ian pointed out, one of the biggest problems with this chart is the appearance of even time intervals when all except one of the dates are in January. This seemingly innocent detail destroys the chart. The line segments encode the pre-post change in the staffing numbers. For most of the line segments, the metric is year-on-year change, but the last two segments on the right show something else: a 19-month change, followed by a 5-month change.

I did the following analysis to understand how big of a staffing problem CPD faces.

Redo_wgn9chicagocops_trendanalysis
First, I restored the January 2022 time value, while shifting the Aug 2022 value to its rightful place on the time axis. Next, I added the dashed brown line, which represents a linear extension of the trend seen between January 2020 and January 2021, before the sudden dip. We don't know what the true January 2022 value is, but the projected value based on the past trend is around 12,200. By August 2022, the projected value is around 11,923, about 300 above the actual value of 11,611. By January 2023, the projected value is almost exactly the same as the actual value.
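For those who want to replicate this kind of projection, here's a small sketch of the mechanics. The two anchor staffing levels are placeholders to be replaced with the figures read off the original chart.

```python
# Sketch: extend the Jan 2020 -> Jan 2021 trend linearly to later dates.
# The two anchor staffing levels below are placeholders, not the actual data.
from datetime import date

anchor_start = (date(2020, 1, 1), 13_100)   # placeholder staffing level
anchor_end = (date(2021, 1, 1), 12_650)     # placeholder staffing level

def months_between(a, b):
    return (b.year - a.year) * 12 + (b.month - a.month)

# Change per month implied by the two anchor points.
rate = (anchor_end[1] - anchor_start[1]) / months_between(anchor_start[0], anchor_end[0])

def project(target):
    """Linear extension of the anchor trend to a later date."""
    return anchor_end[1] + rate * months_between(anchor_end[0], target)

for d in [date(2022, 1, 1), date(2022, 8, 1), date(2023, 1, 1)]:
    print(d.isoformat(), round(project(d)))
```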

This linear trending analysis is likely too simplistic but it offers a baseline to start thinking about what the story is. The long-term trend is still down but the apparent dip in 2022 may not be meaningful.

 

 


Dual axes: a favorite of tricksters

Twitter readers directed me to this abomination from the St. Louis Fed (link).

Stlouisfed_military_spend

This chart is designed to paint the picture that China is a grave threat because it has been ramping up military expenditure so much that it has exceeded U.S. spending since the 2000s.

Sadly, this is not what the data are suggesting at all! This story is constructed by manipulating the dual axes. Someone has already fixed it. Here's the same data plotted with a single axis:

Redo_military_spend

(There are two sets of axis labels but they have the same scale and both start at zero, so there is effectively only one axis.)

Certainly, China has been ramping up military spending. Nevertheless, China's current level of spending is about one-third of America's. Also, imagine the cumulative gap in spending between the two countries over the 30 years shown on the chart.

Note also that the growth line of U.S. military spending in this period is actually about as steep as China's.

***

Apparently, the St. Louis Fed is intent on misleading its readers. Even though they acknowledged people's feedback on Twitter, they decided not to alter the chart.

Stlouisfed_militaryexpenditure_tweet

If you click through to the article, you'll find the same flawed chart as before, so I'm not sure how they "listened". I went to the Wayback Machine to check the first version of this page, and I noticed no difference.

***

If one must make a dual-axes chart, it is the responsibility of the chart designer to make it clear to readers that different lines on the chart use different axes. In this case, since the only line that uses the right-hand axis is the U.S. line, which is blue, they should have colored the right-hand axis blue. Doing that does not solve the visualization problem; it merely reduces the chance that readers fail to notice the dual axes.
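In matplotlib, for instance, tinting the right-hand axis to match its line takes only a couple of calls. A minimal sketch with made-up numbers (not the Fed's data):

```python
# Sketch: a dual-axis chart where the right-hand axis is colored to match
# the only series that uses it. Numbers are made up for illustration.
import matplotlib.pyplot as plt

years = list(range(1990, 2021))
china = [i * 8 for i in range(len(years))]        # toy series, left axis
usa = [450 + i * 10 for i in range(len(years))]   # toy series, right axis

fig, ax_left = plt.subplots()
ax_left.plot(years, china, color="tab:red")
ax_left.set_ylabel("China (toy units)", color="tab:red")

ax_right = ax_left.twinx()                        # secondary y-axis
ax_right.plot(years, usa, color="tab:blue")
ax_right.set_ylabel("U.S. (toy units)", color="tab:blue")
ax_right.tick_params(axis="y", colors="tab:blue") # tint ticks to match the line
ax_right.spines["right"].set_color("tab:blue")    # tint the axis spine too

plt.show()
```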

***

I have written about dual axes a lot in the past. Here's a McKinsey chart from 2006 that offends.


A graphical compass

A Twitter user pointed me to this article from the Washington Post, ruminating about the correlation between gas prices and measures of political sentiment (such as Biden's approval rating or right-track-wrong-track). As is common in this genre, the analyst proclaims that he has found something "counter intuitive".

The declarative statement strikes me as odd. In the first two paragraphs, he said the data showed "as gas prices fell, American optimism rose. As prices rose, optimism fell... This seems counterintuitive."

I'm struggling to see what's counterintuitive. Aren't the data suggesting people like lower prices? Is that not what we think people like?

The centerpiece of the article concerns the correlation between metrics. "If two numbers move in concert, they can be depicted literally moving in concert. One goes up, the other moves either up or down consistently." That's a confused statement and he qualifies it by typing "That sort of thing."

He's reacting to the following scatter plot with lines. The Twitter user presumably found it hard to understand. Count me in.

Washingtonpost_gasprices

Why is this chart difficult to grasp?

The biggest puzzle is: what differentiates those two lines? The red and the gray lines are not labelled. One would have to consult the article to learn that the gray line represents the "raw" data at weekly intervals. The red line is aggregated data at monthly intervals. In other words, each red dot is an average of 4 or 5 weekly data points. The red line is just a smoothed version of the gray line. Smoothed lines show the time trend better.
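Producing the smoothed line from weekly readings is a one-line aggregation in pandas. A sketch with a simulated weekly series (not the Post's data):

```python
# Sketch: aggregate a weekly series into monthly means, the kind of smoothing
# the red line represents. The weekly series here is simulated.
import numpy as np
import pandas as pd

weeks = pd.date_range("2022-01-03", periods=40, freq="W-MON")
rng = np.random.default_rng(1)
weekly = pd.Series(41 + np.cumsum(rng.normal(0, 0.3, len(weeks))), index=weeks)

monthly = weekly.resample("MS").mean()   # each monthly point averages 4-5 weeks

print(monthly.round(1))
```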

The next missing piece is the direction of time, which can only be inferred by reading the month labels on the red line. But the chart without the direction of time is like a map without a compass. Take this segment for example:

Wpost_gaspricesapproval_directionoftime

If time is running up to down, then approval ratings are increasing over time while gas prices are decreasing. If time is running down to up, then approval ratings are decreasing over time while gas prices are increasing. Exactly the opposite!

The labels on the red line are not sufficient. It's possible that time runs in the opposite direction on the gray line! We only exclude that possibility if we know that the red line is a smoothed version of the gray line.

This type of chart benefits from having a compass. Here's one:

Wpost_gaspricesapproval_compass

It's useful for readers to know that the southeast direction is "good" (higher approval ratings, lower gas prices) while the northwest direction is "bad". Going back to the original chart, one can see that the metrics went in the "bad" direction at the start of the year and have reverted to a "good" direction since.
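If you'd like to add such a compass with matplotlib, a small annotated arrow in a corner of the plot does the job. A sketch with arbitrary axis ranges:

```python
# Sketch: a "compass" arrow showing which diagonal direction is "good"
# (higher approval, lower gas prices). Axis limits are arbitrary here.
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.set_xlabel("Approval rating (%)")
ax.set_ylabel("Gas price ($/gallon)")
ax.set_xlim(38, 46)
ax.set_ylim(3.0, 5.0)

# Arrow pointing toward the lower right ("southeast") of the chart.
ax.annotate("good: approval up,\ngas prices down",
            xy=(45, 3.2), xytext=(41, 4.0),
            arrowprops=dict(arrowstyle="->", color="darkgreen"),
            color="darkgreen")
plt.show()
```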

***

What does this chart really say? The author remarked that "correlation is not causation". "Just because Biden’s approval rose as prices dropped doesn’t mean prices caused the drop."

Here's an alternative: People have general sentiments. When they feel good, they respond more positively to polls, as in they rate everything more positively. The approval ratings are at least partially driven by this general sentiment. The same author apparently has another article saying that the right-track-wrong-track sentiment also moved in tandem with gas prices.

One issue with this type of scatter plot is that it always cues readers to make an incorrect assumption: that the outcome variable (approval rating) is solely - or predominantly - driven by the one factor being visualized (gas prices). This visual choice completely biases the reader's perception.

P.S. [11-11-22] The source of the submission was incorrectly attributed.


Light entertainment: words or visuals

Via Twitter, no words for this one:

image from junkcharts.typepad.com

I need a poll function for this.  What do we think happened here?

  • Intentional fake news
  • Intern didn't get paid enough
  • Call of the wild
  • Drunk or high
  • Quiet quitting
  • Doing a Dan Ariely ("I didn't know I have to check the data")
  • Others (please comment)

***

One last thing!

If you think you know what the real numbers are, think again.

 


People flooded this chart presented without comment with lots of comments

The recent election in Italy has resulted in some dubious visual analytics. A reader sent me this Excel chart:

Italy_elections_RDC-M5S

In brief, an Italian politician (trained as a PhD economist) used the graph above to make a point that support of the populist Five Star party (M5S) is highly correlated with poverty - the number of people on RDC (basic income). "Senza commento" - no comment needed.

Except a lot of people noticed the idiocy of the chart, and ridiculed it.

The chart appeals to those readers who don't spend time understanding what's being plotted. They notice two lines that show similar "trends", which they take as a signal of high correlation.

It turns out the signal in the chart isn't found in the peaks and valleys of the "trends". It is tempting to observe that when the blue line peaks (Campania, Sicilia, Lazio, Piemonte, Lombardia), the orange line also pops.

But look at the vertical axis. He's plotting the number of people, rather than the proportion of people. Population varies widely between Italian provinces. The five mentioned above all have over 4 million residents, while smaller ones such as Umbria, Molise, and Basilicata have under 1 million. Thus, so long as the number of people, not the proportion, is plotted, no matter what demographic metric is highlighted, we will see peaks in the most populous provinces.
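The fix for this particular issue is to normalize by population before plotting. A sketch with pandas, using placeholder numbers (not the actual Italian figures):

```python
# Sketch: convert raw counts into proportions of each province's population,
# so large provinces no longer dominate by sheer size. All numbers are placeholders.
import pandas as pd

df = pd.DataFrame({
    "province": ["Lombardia", "Sicilia", "Molise"],
    "population": [10_000_000, 4_800_000, 290_000],     # placeholder values
    "m5s_votes": [900_000, 800_000, 30_000],            # placeholder values
    "rdc_recipients": [400_000, 750_000, 20_000],       # placeholder values
})

df["m5s_share_of_pop"] = df["m5s_votes"] / df["population"]
df["rdc_share_of_pop"] = df["rdc_recipients"] / df["population"]

print(df[["province", "m5s_share_of_pop", "rdc_share_of_pop"]])
```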

***

The other issue with this line chart is that the "peaks" are completely contrived. That's because the items on the horizontal axis do not admit a natural order. This is NOT a time-series chart, for which there is a canonical order. The horizontal axis contains a set of provinces, which can be ordered in whatever way the designer wants.

The following shows how the appearance of the lines changes as I select different metrics by which to sort the provinces:

Redo_italianelections_m5srdc_1

This is the reason why many chart purists frown on people who use connected lines with categorical data. I don't like this hard rule, as my readers know. In this case, I have to agree the line chart is not appropriate.

***

So, where is the signal on the line chart? It's in the ratio of the heights of the two values for each province.

Redo_italianelections_m5srdc_2

Here, we find something counter-intuitive. I've highlighted two of the peaks. In Sicilia, about the same number of people voted for Five Star as there are people who receive basic income. In Lombardia, more than twice the number of people voted for Five Star as there are people who receive basic income. 

Now, Lombardy is where Milan is, essentially the richest province in Italy, while Sicily is one of the poorest. Could it be that Five Star actually outperformed their demographics in the richer provinces?

***

Let's approach the politician's question systematically. He's trying to say that the Five Star movement appeals especially to poorer people. He's chosen basic income as a proxy for poverty (this is similar to being on welfare in the U.S.). Thus, he's divided the population into two groups: those on welfare, and those not.

What he needs is the relative proportions of votes for Five Star among these two subgroups. Say, Five Star garnered 30% of the votes among people on welfare, and 15% of the votes among people not on welfare, then we have a piece of evidence that Five Star differentially appeals to people on welfare. If the vote share is the same among these two subgroups, then Five Star's appeal does not vary with welfare.

The following diagram shows the analytical framework:

Redo_italianelections_m5srdc_3

What's the problem? He doesn't have the data needed to establish his thesis. He has the total number of Five Star voters (which is the sum of the two yellow boxes) and he has the total number of people on RDC (which is the dark orange box).

Redo_italianelections_m5srdc_4

As shown above, another intervening factor is the proportion of people who voted. It is conceivable that the propensity to vote also depends on one's wealth.
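To see concretely why the published totals can't settle the question, here's a small sketch (all numbers invented, turnout held fixed at 50% in both groups for simplicity) in which two opposite stories produce exactly the same observed total of Five Star votes:

```python
# Sketch: two invented scenarios that yield the same total of M5S votes and the
# same number of RDC recipients, yet tell opposite stories about differential appeal.
population = 1_000_000
rdc = 200_000                  # people on basic income (invented)
non_rdc = population - rdc

def total_votes(share_rdc, share_non_rdc, turnout=0.5):
    """Total M5S votes implied by the (unobserved) subgroup vote shares."""
    return rdc * turnout * share_rdc + non_rdc * turnout * share_non_rdc

# Scenario A: M5S is far more popular among RDC recipients (60% vs 15%).
scenario_a = total_votes(share_rdc=0.60, share_non_rdc=0.15)
# Scenario B: M5S has identical appeal in both groups (24% each).
scenario_b = total_votes(share_rdc=0.24, share_non_rdc=0.24)

print(scenario_a, scenario_b)   # both 120000.0 -- the totals cannot tell them apart
```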

So, in this case, fixing the visual will not fix the problem. Finding better data is key.


Here's a radar chart that works, sort of

In the same Reuters article that featured the speedometer chart I discussed in this blog post (link), the author also deployed a set of small-multiple radar charts.

These radar charts are supposed to illustrate the article's theme that "European countries are racing to fill natural gas storage sites ahead of winter."

Here's the aggregate chart that shows all countries:

Reuters_gastorage_radar_details

In general, I am not a fan of radar charts. When I first looked at this chart, I also disliked it. But keep reading because I eventually decided that this usage is an exception. One just needs to figure out how to read it.

One reason why I dislike radar charts is that they always come with a lot of non-data-ink baggage. We notice that the months of the year are plotted in a circle starting at the top. The start of the war on Feb 24, 2022 is marked off in red. Then there is the dotted circle, which represents the 80% target gas storage amount.

The trick is to avoid interpreting the areas, or the shapes of the blue and gray patches. I know, they look cool and grab our attention but in the context of conveying data, they are meaningless.

Redo_reuters_eugasradarall_1

Instead of areas, focus on the boundaries of those patches. Don't follow one boundary around the circle. Pick a point in time, corresponding to a line between the center of the circle and the outermost circle, and look at the gap between the two lines. In the diagram shown right, I marked off the two relevant points on the day of the start of the war.

From this, we observe that across Europe, the gas storage was far less than the 80% target (recently set).

Redo_reuters_eugasradarall_2

By comparing two other points (the blue and gray boundaries), we see that during February, gas storage is at a seasonal low, and in 2022, it is on the low side of the 5-year average.

However, the visual does not match well with the theme of the article! While the gap between the blue and gray boundaries has narrowed since the start of the war, the blue boundary does not exceed the historical average, and does not get close to 80% until August, a month in which gas storage reaches 80% in a typical year.

This is an example of a chart in which there is a misalignment between the Q and the V corners of the Trifecta Checkup (link).

_trifectacheckup_image

The question/message is that Europeans are reacting to the war by increasing their gas storage beyond normal. The visual actually says that they are increasing the gas storage as per normal.

***

As I noted before, when read in a particular way, these radar charts serve their purpose, which is more than can be said for most radar charts.

The designer made several wise choices:

Instead of drawing one ring for each year of data, the designer averaged the past five years and turned that into a single ring (patch). You can imagine what this radar chart would look like if the prior data were not averaged: hula hoop mania!

Marawa-bgt
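Averaging the five prior years into one seasonal "ring" is a routine groupby operation. A sketch with a simulated storage series (not Reuters' data):

```python
# Sketch: average gas-storage levels from five prior years by day of year,
# producing the single "typical year" ring. The storage series is simulated.
import numpy as np
import pandas as pd

dates = pd.date_range("2017-01-01", "2021-12-31", freq="D")
rng = np.random.default_rng(2)
storage = pd.Series(
    50 + 30 * np.sin(2 * np.pi * (dates.dayofyear - 60) / 365) + rng.normal(0, 3, len(dates)),
    index=dates, name="fill_pct",
)

# One value per calendar day, averaged across the five years.
typical_year = storage.groupby(storage.index.dayofyear).mean()

print(typical_year.head())
```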

Simplifying the data in this way also makes the small multiples work. The designer uses the aggregate chart as a legend / "how to read this" guide. And in a further section below, the designer plots individual countries, without the non-data-ink baggage:

Reuters_gastorage_mosttofill

Thanks again to longtime reader Antonio R., who submitted this chart.

Happy Labor Day weekend for those in the U.S.!

 

 

 


Four numbers, not as easy as it seems

Longtime reader Aleksander B. wasn't convinced by the following chart shown at the bottom of AFP's infographic about gun control.

Afp_guns_bottom

He said:

Finally I was able to figure who got some support from NRA. But as a non-US citizen it was hard to get why 86% of republican tag points to huge red part. Then I figured out that smaller value of alpha channel codes the rest of republicans. I think this could be presented in some better way (pie charts are bad in presenting percentages of some subparts of the same pie chart - but adding a tag for 86% while skipping the tag for remaining 14% is cruel).

It's an example of how a simple chart with just four numbers can be so hard to understand.

***

Here is a different view of the same data, using a structure similar to the form I chose for this recent chart on Swedish trade balance (link).

Redo_junkcharts_afpguns