Here are the cool graphics from the election

Some very nice graphics work was published during the last few days of the U.S. presidential election. Let me tell you why I like the following four charts.

FiveThirtyEight's snake chart

Snake-1106pm

This chart definitely hits the Trifecta. It is narrowly focused on the pivotal questions of election night: Which candidate is leading? If current projections hold, which candidate would win? What is the margin of victory?

The chart is symmetric so that the two sides have equal length. One can therefore immediately tell which side is in the lead by looking at the middle. With a little more effort, one can also read from the chart which side has more electoral votes based only on the called states: this is done by comparing the white parts of each snake. (This is made difficult by the top-bottom mirroring. That is an unfortunate design decision - I'd have preferred not to have the top-bottom reversal.)

The length of each segment maps to the number of electoral votes for the particular state, and the shade of color reflects the size of the advantage.

In a great illustration of less is more, by aggregating all called states into a single white segment, and not presenting the individual results, the 538 team has delivered a phenomenal chart that is refreshing, informative, and functional.

Compare with a more typical map:

Electoral-map

New York Times's snake chart

Snakes must be the season's gourmet meat, because the New York Times also got inspired by those reptiles, delivering a set of snake charts (link). Here's one illustrating how different demographic segments picked winners in the last four elections.

Nytimes_partysupport_by_income

They also made a judicious decision by highlighting the key facts and hiding the secondary ones. Each line connects four points of data but only the beginning and end of each line are labeled, inviting readers to first and foremost compare what happened in 2004 with what happened in 2016. The middle two elections were Obama wins.

This particular chart may prove significant for decades to come. It illustrates that the two parties may be arriving at a cross-over point. The Democrats are driving the lower income classes out of their party while the upper income classes are jumping over to blue.

While the chart's main purpose is to display the changes within each income segment, it does allow readers to address a secondary question. By focusing only on the 2004 endpoints, one can see the almost linear relationship between support and income level. Then focusing on the 2016 endpoints, one can also see an almost linear relationship but this is much steeper, meaning the spread is much narrower compared to the situation in 2004. I don't think this means income matters a lot less - I just think this may be the first step in an ongoing demographic shift.

This chart is both fun and easy to read, packing quite a bit of information into a small space.

Washington Post's Nation of Peaks

The Post prints a map that shows, by county, where the votes were and how the two parties built their support. (Link to original)

Wpost_map_peaks

The height represents the number of voters and the width represents the margin of victory. Landslide victories are shown with bolded triangles. In the online version, they chose to turn the map sideways.

I particularly like the narratives about specific places.

This is an entertaining visual that draws you in to explore.

Andrew Gelman's Insight

If you want quantitative insights, it's a good idea to check out Andrew Gelman's blog.

This example is a plain statistical graphic but it says something important:

Gelman_twopercent

There is a lot of noise about how the polls were all wrong, the entire polling industry will die, etc.

This chart shows that the polls were reasonably accurate about Trump's vote share in most Democratic states. In the Republican states, these polls consistently under-estimated Trump's advantage. You see the line of red states starting to bend away from the diagonal.

If the total error is about 2%, as stated in the caption of the chart, and the polls were roughly accurate in the blue states - about half of the states - then the average error in the red states must have been about 4%.
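
The arithmetic behind that 4% figure is a weighted average. The sketch below uses my own simplifying assumptions (an even red/blue split of states and zero error in the blue states), not numbers from the chart:

```python
# Back-of-the-envelope check: if the overall average polling error is 2%
# and roughly half the states (the blue ones) had near-zero error, the
# red states must account for about double the overall average.
blue_share, red_share = 0.5, 0.5   # assumed equal split of states
blue_error = 0.0                   # polls roughly accurate in blue states
total_error = 2.0                  # overall error, per the chart's caption

# total_error = blue_share * blue_error + red_share * red_error
red_error = (total_error - blue_share * blue_error) / red_share
print(red_error)  # 4.0
```

The assumed split is crude; the point is only that a concentrated error in half the states doubles relative to the overall average.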

This basic chart advances our understanding of what happened on election night, and why the result was considered a "shock."

An example of focusing the chart on a message

Via Jimmy Atkinson on Twitter, I am alerted to this chart from the Wall Street Journal.

Wsj_fiscalconstraints

The title of the article is "Fiscal Constraints Await the Next President." The key message is that "the next president looks to inherit a particularly dismal set of fiscal circumstances." Josh Zumbrun, who tipped Jimmy about this chart on Twitter, said that it is worth spending time on.

I like the concept of the chart, which juxtaposes the economic conditions that faced each president at inauguration with how his performance measured up against expectations, as represented by CBO projections.

The top portion of the graphic did require significant time to digest:

Wsj_fiscalconstraints_top

A glance at the sidebar informs me that there are two scenarios being depicted, the CBO projections and the actual deficit-to-GDP ratios. Then I got confused on several fronts.

One can of course blame the reader (me) for mis-reading the chart but I think dataviz faces a "the reader is always right" situation -- although there can be multiple types of readers for a given graphic so maybe it should say "the readers are always right."

I kept lapsing into thinking that the bold lines (in red and blue) are actual values while the gray line/area represents the predictions. That's because in most financial charts, the actual numbers are in the foreground and the predictions act as background reference materials. But in this rendering, it's the opposite.

For a while, a battle was raging in my head. There are a few clues that the bold red/blue lines cannot represent actual values. For one thing, I don't recall Reagan as a surplus miracle worker. Also, some of the time periods overlap, and one assumes that the CBO issued one projection only at a given time. The Obama line also confused me as the headline led me to expect an ugly deficit but the blue line is rather shallow.

Then, I got even more confused by the units on the vertical axis. According to the sidebar, the metric is the deficit-to-GDP ratio. The majority of the lines live in negative territory. Does the negative of a negative imply a positive? Could the sharp upward turn of the Reagan line indicate massive deficit spending? Or maybe the axis should be relabeled the surplus-to-GDP ratio?

***

As I proceeded to re-create this graphic, I noticed that some of the tick marks are misaligned. There are various inconsistencies related to the start of each projection, the duration of the projection, the matching between the boxes and the lines, etc. So the data in my version are only roughly accurate.

To me, these data provide a primary reference for how presidents perform on the surplus/deficit compared to the expectations established by the CBO projections.

Redo_wsj_deficitratios

I decided to only plot the actual surplus/deficit ratios for the duration of each president's tenure. The start of each projection line is the year in which the projection is made (as per the original). We can see the huge gap in every case. Either the CBO analysts are very bad at projections, or the presidents didn't do what they promised during the elections.

Denver outspends everyone on this

Someone at the Wall Street Journal noticed that Denver's transit agency has outspent other top transit agencies, after accounting for number of rides -- and by a huge margin.

But the accompanying graphic conspires against the journalist.

Wsj_denverRail

For one thing, Denver is at the bottom of the page. Denver's two bars do not stand out in any way. New York's transit system dwarfs everyone else in both number of rides and total capital expenses and funding. And the division into local, state, and federal sources of funds is on the page, absorbing readers' mindspace for unknown reasons.

But Denver is an outlier, as can be seen here:

Redo_transit2

Super-informative ping-pong graphic

Via Twitter, Mike W. asked me to comment on this WSJ article about ping pong tables. According to the article, ping pong table sales track venture-capital deal flow:

Wsj_pingpongsales

This chart is super-informative. I learned a lot from this chart, including:

  • Very few VC-funded startups play ping pong, since the highlighted reference lines show 1000 deals and only 150 tables (!)
  • The one San Jose store interviewed for the article is the epicenter of ping-pong table sales, therefore they can use it as a proxy for all stores and all parts of the country
  • The San Jose store only does business with VC startups, which is why they attribute all ping-pong tables sold to these companies
  • Startups purchase ping-pong tables in the same quarter as their VC deals, which is why they focus only on within-quarter comparisons
  • Silicon Valley startups only source their office equipment from Silicon Valley retailers
  • VC deal flow has no seasonality
  • Ping-pong table sales have no seasonality either
  • It is possible to predict the past (VC deals made) by gathering data about the future (ping-pong tables sold)

Further, the chart proves that one can draw conclusions from a single observation. Here is what the same chart looks like after taking out the 2016 Q1 data point:

Redo_pingpongsales2

This revised chart is also quite informative. I learned:

  • At the same level of ping-pong-table sales (roughly 150 tables), the number of VC deals ranged from 920 to 1020, about one-third of the vertical range shown in the original chart
  • At the same level of VC deals (roughly 1000 deals), the number of ping-pong tables sold ranged from 150 to 230, about half of the horizontal range of the original chart

The many quotes in the WSJ article also tell us that people in Silicon Valley are no more data-driven than people in other parts of the country.


The surprising impact of mixing chart forms

At first glance, this Wall Street Journal chart seems unlikely to impress as it breaks a number of "rules of thumb" frequently espoused by dataviz experts. The inconsistency of mixing a line chart and a dot plot. The overplotting of dots. The ten colors...

Wsj_oilpredict_Feb16

However, I actually like this effort. The discontinuity of chart forms nicely aligns with the split between the actual price movements on the left side and the projections on the right side.

The designer also meticulously placed the axis labels: monthly for actual price movements and quarterly for projections.

Even the ten colors are surprisingly manageable. I am not sure we need to label all those banks; maybe just the ones at the extremes. If we clear out some of these labels, we can make room for a median line.

***

How good are these oil price predictions? It is striking that every bank shown is predicting that oil prices have hit a bottom, and will start recovering in the next few quarters. Contrast this with the left side of the chart, where the line is basically just tumbling down.

Step back six months, to September 2015. The same chart looked like this:

Wsj_oilpredict_sept15

Again, these analysts were calling a bottom in prices and predicting a steady rise over the next quarters.

The track record of these oil predictions is poor:

Wsj_oilpredict_sep15_evaluated

The median analyst predicted oil prices to reach $50 by Q1 of 2016. Instead, prices fell to $30.

Given this track record, it's shocking that these predictions are considered newsworthy. One wonders how these predictions are generated, and how the analysts justify ignoring the prevailing trend.


First ask the right question: the data scientist edition

A reader didn't like this graphic in the Wall Street Journal:

Wsj_datascientist_timeofday

One could turn every panel into a bar chart but unfortunately, the situation does not improve much. Some charts just can't be fixed by altering the visual design.

The chart is frustrating to read: typically, colors are used to signify objects that should be compared. Focus on the brown wedges for a moment: Basic EDA 46%, Data cleaning 31%, Machine learning 27%, etc. Those are proportions of respondents who said they spent 1 to 3 hours a day on the respective tasks. That is one weird way of describing time use. The people who spent 1 to 3 hours a day on EDA do not necessarily overlap with those who spent 1 to 3 hours a day on data cleaning. In addition, there is no summation formula that lets us know how any individual, or the average data scientist, spends his or her time during a typical day.

***

But none of this is the graphics designer's fault.

The trouble with the chart is in the D corner of the Trifecta checkup. The survey question was poorly posed. The data came from a study by O'Reilly Media. They asked questions of this form:

How much time did you spend on basic exploratory data analysis on average?

A. Less than 1 hour a week
B. 1 to 4 hours a week
C. 1 to 3 hours a day
D. 4 or more hours a day

It is not obvious that those four levels are collectively exhaustive. In fact, they aren't. One hour a day for five working days is a total of 5 hours a week. Those who spent between 4 and 5 hours a week have nowhere to go.
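
The gap can be verified mechanically. The encoding below is my own (converting each answer choice to hours per week, assuming a 5-day work week); it is not taken from the O'Reilly study:

```python
DAYS_PER_WEEK = 5  # assumed work week

# (low, high) bounds in hours per week for each answer choice
choices = {
    "A": (0, 1),                                  # less than 1 hour a week
    "B": (1, 4),                                  # 1 to 4 hours a week
    "C": (1 * DAYS_PER_WEEK, 3 * DAYS_PER_WEEK),  # 1 to 3 hours a day -> 5 to 15
    "D": (4 * DAYS_PER_WEEK, float("inf")),       # 4+ hours a day -> 20+
}

def bucket(hours_per_week):
    """Return the answer choices covering a given weekly workload."""
    return [k for k, (lo, hi) in choices.items() if lo <= hours_per_week < hi]

print(bucket(4.5))  # [] -- between 4 and 5 hours a week, nowhere to go
print(bucket(16))   # [] -- between 3 and 4 hours a day is also uncovered
```

Under this encoding, a second gap (between 3 and 4 hours a day) appears as well.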

Further, if one had access to individual responses, it's likely that many respondents' answers, summed across tasks, would add up to implausibly many or implausibly few working hours.

The panels are separate questions which bear no relationship to each other, even though the tasks are clearly related by the fact that there are only so many working hours in a day.

To fix this chart, one must first fix the data. To fix the data, one must ask the right questions.

A quick lesson in handling more than one message on one chart

Between teaching two classes and a seminar, and logging two coast-to-coast flights, I was able to find time to rethink the following chart from the Wall Street Journal: (link to article)

Uk_drinks

I like the right side of this chart, which helps readers interpret what the alcohol consumption guidelines really mean. When we go out and drink, we order beers, or wine, or drinks - we don't think in terms of grams of alcohol.

The left side is a bit clumsy. The biggest message is that the UK has tightened its guidelines. This message is delivered by having U.K. appear twice in the chart, the only country to repeat. In order to make this clear, the designer highlights the U.K. rows. But the style of highlighting used for the two rows differs, because the current U.K. row has to point to the right side, but not the previous U.K. row. This creates a bit of confusion.

In addition, since the U.K. rows are far apart, figuring out how much the guidelines have changed is more work than desired.

The placement of the bars by gender also doesn't help. A side message is that most countries allow men to drink more than women, but the U.K., in revising its guidelines, has followed the Netherlands and Guyana in setting the same level for both genders.

***

After trying a few ideas, I think the scatter plot works out pretty well. One advantage is that it does not arbitrarily order the data (men first, women second) as in the original chart. Another advantage is that it shows the male-female balance more clearly.

Redo_ukalcohol_2

An afterthought: I should have added the words "Stricter" and "Laxer" on two corners of the chart. This chart shows both the U.K. getting stricter and the U.K. joining Guyana and the Netherlands as countries that treat men and women equally when it comes to drinking.

Efficiency in space usage leads to efficiency in comprehension

Consider the following two charts that illustrate the same data. (I deliberately took out the header text to make a point. The original chart came from the Wall Street Journal.)

Redo_luxurystoresbycountry

To me, the line chart gets to the point more quickly: that Burberry stores are more numerous in those places shown on the left and fewer in those places shown on the right, relative to comparable luxury brands (Prada and Louis Vuitton).

The reason why the tiled bar chart is tougher to decipher is its inefficient use of space. Within each country group, the three places are plotted on two levels: one on the upper level, and two on the lower level. Then the two groups of countries are placed top and bottom. Readers have to first size up each group of three countries, then make a comparison between the two groups.

***

From a Trifecta checkup perspective, the bigger issue here is the data. The full story seems to be that those two country groups have different currency experiences... Japan and the continental European countries have weakening currencies, which tends to make their goods cheaper for Chinese consumers. This crucial part of the story is not anywhere on the chart.

In addition, the number of stores is not a telling statistic, because stores may have different areas, and certainly the revenues generated by these stores differ, potentially by country. A measure such as change in same-store sales in each country is more informative.

It is also not true that the distribution of stores is purely a matter of business strategy, as Burberry is a British brand, Prada is Italian and Louis Vuitton is French. They each have more stores in their home countries, which seems very logical.


More chart drama, and data aggregation

Robert Kosara posted a response to my previous post.

He raises an important issue in data visualization - the need to aggregate data, and not plot raw data. I have no objection to that point.

What was shown in my original post are two extremes. The bubble chart is high drama at the expense of data integrity. Readers cannot learn any of the following from that chart:

  • the shape of the growth and subsequent decline of the flu epidemic
  • the beginning and ending date of the epidemic
  • the peak of the epidemic*

* The peak can be inferred from the data label, although there appears to be at least one other circle of approximately equal size, which isn't labeled.

The column chart is low drama but high data integrity. To retain some dramatic element, I encoded the data redundantly in the color scale. I also emulated the original chart in labeling specific spikes.

The designer then simply has to choose a position between these two extremes. This will involve some smoothing or aggregation of the data. Robert showed a column chart with weekly aggregates, which, in his view, comes closer to the bubble chart.

Robert's version indeed strikes a balance between drama and data integrity, and I am in favor of it. Here is his chart (I am responsible for the added color).

Kosara_avianflu2
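
The weekly aggregation itself can be sketched in a few lines. The daily counts below are simulated stand-ins (my own made-up numbers), not the actual avian-flu data:

```python
import numpy as np
import pandas as pd

# Simulated daily outbreak counts (a stand-in for the real dataset)
rng = np.random.default_rng(0)
days = pd.date_range("2015-04-01", "2015-06-30", freq="D")
daily = pd.Series(rng.poisson(2.0, size=len(days)), index=days)

# Summing within calendar weeks smooths the day-to-day spikes while
# preserving the grand total and the overall shape of the outbreak.
weekly = daily.resample("W").sum()

assert weekly.sum() == daily.sum()  # aggregation preserves the total
```

The choice of window (weekly here) is exactly the dial that moves a chart along the drama-versus-integrity spectrum.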

***

Where I depart from Robert is how one reads a column chart such as the one I posted:

Redo_avianflu2

Robert thinks that readers will perceive each individual line separately, and in so doing, "details hide the story". When I look at a chart like this, I am drawn to the envelope of the columns. The lighter colors are chosen for the smaller spikes to push them into the background. What might be the problem are those data labels identifying specific spikes; they are a holdover from the original chart -- I actually don't know why those specific dates are labeled.

***

In summary, the key takeaway is, as Robert puts it:

the point of this [dataset] is really not about individual days, it’s about the grand totals and the speed with which the outbreak happened.

We both agree that the weekly version is the best among these. I don't see how the reader can figure out grand totals and speed with which the outbreak happened by staring at those dramatic but overlapping bubbles.


Is it worth the drama?

Quite the eye-catching chart, this:

Wsj_avianflu

The original accompanied this article in the Wall Street Journal about avian flu outbreaks in the U.S.

The point of the chart appears to be the peak in the flu season around May. The overlapping bubbles were probably used for drama.

A column chart, with appropriate colors, attains much of the drama but retains the ability to read the data.

Redo_avianflu2