Steel tariffs, and my new dataviz seminar

I am developing a new seminar aimed at business professionals who want to improve their ability to communicate using charts. I want the guidance to be tool-agnostic, so that attendees can implement it in Excel if that’s their main charting software. Over the 12+ years that I’ve been blogging, certain ideas keep popping up; I have collected these motifs and organized them for the seminar. This post is about a recent chart that illustrates a few of them.

This chart has been making the rounds in articles about the steel tariffs.

2018.03.08steel_1

The chart shows the Top 10 nations that sell steel to the U.S., which together account for 78% of all imports. 

The chart shows a few signs of thoughtful design. These things caught my eye:

  1. the pie chart on the left delivers the top-line message that 10 countries account for almost 80% of all U.S. steel imports
  2. the callout gives further information about which 10 countries and how much each nation sells to the U.S. This is a nice use of layering
  3. on the right side, progressive tints of blue indicate the respective volumes of imports

On the negative side of the ledger, the chart is marred by three small problems. Each of these problems concerns inconsistency, which creates confusion for readers.

  1. Inconsistent use of color: on the left side, the darker blue indicates lower volume while on the right side, the darker blue indicates higher volume
  2. Inconsistent coding of pie slices: on the right side, the percentages add up to 78% while the total area of the pie is 100%
  3. Inconsistent scales: the left chart carrying the top-line message is notably smaller than the right chart depicting the secondary message. Readers’ first impression is drawn to the right chart.

Easy fixes lead to the following chart:

Redo_steelimports_1
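The second fix, making the slices sum to the area they occupy, is simple arithmetic. A minimal sketch in Python (the country shares below are illustrative placeholders, not the actual trade figures):

```python
# Illustrative shares (percent of U.S. steel imports) -- placeholder
# numbers, not the actual figures from the chart.
top10 = {"Canada": 16, "Brazil": 13, "South Korea": 10, "Mexico": 9,
         "Russia": 9, "Turkey": 7, "Japan": 5, "Germany": 3,
         "Taiwan": 3, "India": 2}

# The top-10 shares cover only part of the whole...
top10_total = sum(top10.values())   # about 78% in the actual chart

# ...so a pie needs an explicit remainder slice to remain honest:
slices = dict(top10)
slices["All other countries"] = 100 - top10_total

assert sum(slices.values()) == 100  # slices now match the pie's area
```

The same dictionary also makes the first fix mechanical: a single sequential palette, assigned once in sorted order, can be reused on both sides of the graphic.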

***

The central idea of the new dataviz seminar is that there are many easy fixes that are often missed by the vast majority of people making Excel charts. I will present a stack of these motifs. If you're in the St. Louis area, you get to experience the seminar first. Register for a spot here.

Send this message to your friends and coworkers in the area. Also, contact me if you'd like to bring this seminar to your area.

***

I also tried the following design, which brings out some other interesting tidbits, such as that Canada and Brazil together sell the U.S. about 30% of its imported steel, the top 4 importers account for about 50% of all steel imports, etc. Color is introduced on the chart via a stylized flag coloring.

Redo_steelimports_2


A gem among the snowpack of Olympics data journalism

It's not often I come across a piece of data journalism that pleases me so much. Here it is: the "Happy 700" article by the Washington Post is amazing.

Wpost_happy700_map2


When data journalism and dataviz are done right, the designers have made good decisions. Here are some of the key elements that make this article work:

(1) Unique

The topic is timely but timeliness heightens both the demand and supply of articles, which means only the unique and relevant pieces get the readers' attention.

(2) Fun

The tone is light-hearted. It's a fun read, and a little bit informative - as when they describe the towns that few have heard of. The notion is slightly silly but the reader won't care.

(3) Data

It's always a challenge to make data come alive, and these authors succeeded. Most of the data work involves finding, collecting and processing the data. There isn't any sophisticated analysis. But the piece is a powerful demonstration that complex analysis is not always necessary.

(4) Organization

The structure of the data is three criteria (elevation, population, and terrain) by cities. A typical way of showing such data might be an annotated table, or a Bumps-type chart, grouped columns, and so on. All these formats try to stuff the entire dataset onto one chart. The designers chose to highlight one variable at a time, cumulatively, on three separate maps. This presentation fits perfectly with the flow of the writing. 
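The cumulative structure can be expressed as successive filters - each map keeps only the survivors of the previous one. A toy sketch (the towns, field names, and thresholds are invented for illustration):

```python
# Hypothetical town records; the three fields mirror the article's
# three criteria (elevation, population, terrain).
towns = [
    {"name": "A", "elev_m": 720, "pop": 1500,  "flat": True},
    {"name": "B", "elev_m": 650, "pop": 2000,  "flat": True},
    {"name": "C", "elev_m": 710, "pop": 90000, "flat": True},
    {"name": "D", "elev_m": 705, "pop": 3000,  "flat": False},
]

# Map 1: elevation near 700 meters (the band is illustrative)
step1 = [t for t in towns if 690 <= t["elev_m"] <= 730]
# Map 2: small-town population, applied only to map 1's survivors
step2 = [t for t in step1 if t["pop"] <= 10000]
# Map 3: flat terrain, applied only to map 2's survivors
step3 = [t for t in step2 if t["flat"]]

finalists = [t["name"] for t in step3]
```

Each list is a strict subset of the one before, which is exactly what lets the three maps read as one accumulating argument.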

(5) Details

The execution involves some smart choices. I am a big fan of legend/axis labels that are informative, for example, note that the legend doesn't say "Elevation in Meters":

Wpost_happy700_legend

The color scheme across all three maps shows a keen awareness of background/foreground concerns. 


Two nice examples of interactivity

Janie on Twitter pointed me to this South China Morning Post graphic showing off the mighty train line just launched between north China and London (!)

Scmp_chinalondonrail

Scrolling down the page simulates the train ride from origin to destination. Pictures of key regions are shown on the left column, as well as some statistics and other related information.

The interactivity has a clear purpose: facilitating cross-reference between two chart forms.

The graphic contains a little oversight ... The label for the key city of Xi'an, referenced on the map, is missing from the elevation chart on the left here:

Scmp_chinalondonrail_xian

***

I also like the way the New York Times handled interactivity in this chart showing the rise in global surface temperature since the 1900s. The accompanying article is here.

Nyt_surfacetemp

When the graph is loaded, the dots get printed from left to right. That's an attention grabber.

Further, when the dots settle, some years sink into the background, leaving the orange dots that show the years without the El Niño effect. The reader can use the toggle under the chart title to view all of the years.

This configuration is unusual. It's more common to show all the data, and allow readers to toggle between subsets of the data. By inverting this convention, the designer ensures that few readers need to hit that toggle: the key message of the story concerns the years without El Niño, and that is where the graphic takes its stand.

This is interactivity that succeeds by not getting in the way. 


Let's not mix these polarized voters as the medians run away from one another

Long-time follower Daniel L. sent in a gem, by the Washington Post. This is a multi-part story about the polarization of American voters, nicely laid out, with superior analyses and some interesting graphics. Click here to see the entire article.

Today's post focuses on the first graphic. This one:

Wpost_friendsparties1

The key messages are written out on the 2017 charts: namely, 95% of Republicans are more conservative than the median Democrat, and 97% of Democrats are more liberal than the median Republican.

This is a nice statistical way of laying out the polarization. There are a number of additional insights one can draw from the population distributions: for example, in the bottom row, the Democrats have been moving left consistently, and decisively in 2017. By contrast, Republicans moved decisively to the right from 2004 to 2017. I recall reading about polarization in past elections but it is really shocking to see the extreme in 2017.

A really astounding but hidden feature is that the median Democrat and the median Republican were not too far apart in 1994 and 2004 but the gap exploded in 2017.
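Both statistics - the share of one party sitting beyond the other party's median, and the gap between the two medians - are one-liners to compute. A sketch with simulated ideology scores (the distributions are invented, chosen only to mimic a polarized electorate):

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical ideology scores: negative = liberal, positive = conservative
dem = rng.normal(-2.0, 1.0, size=10_000)
rep = rng.normal(2.0, 1.0, size=10_000)

# Share of Republicans more conservative than the median Democrat
pct_rep = (rep > np.median(dem)).mean() * 100
# Share of Democrats more liberal than the median Republican
pct_dem = (dem < np.median(rep)).mean() * 100
# The gap between the two medians -- the "exploding" series
gap = np.median(rep) - np.median(dem)
```

Plotting `gap` over the three survey years is exactly the lower-right panel of my revised graphic.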

***

I'd like to solve a few minor problems on this graphic. It's a bit confusing to have each chart display information on both Republican and Democratic distributions. The reader has to understand that in the top row, the red area represents Republican voters but the blue line shows the median Democrat.

Also, I want to surface two key insights: the huge divide that developed in 2017, and the exploding gap between the two medians.

Here is the revised graphic:

Redo_wpost_friendsparties1

On the left side, each chart focuses on one party, and the trend over the three elections. The reader can compare across charts to discover that the median voter in one party is more extreme than essentially all of the voters of the other party. The same conclusion can be drawn from the exploding gap between the median voters of the two parties, which is explicitly plotted in the lower right chart. The top right chart is a pretty visualization of how polarized the country was in the 2017 election.



Some like it packed, some like it piled, and some like it wrapped

In addition to Xan's "packed bars" (which I discussed here), there are some related efforts to improve upon the treemap. To recap, the treemap is a design for showing parts against the whole, and it works by packing rectangles into a bounding box. Frequently, this leads to odd-shaped rectangles - really thin or really tall ones - and it asks readers to estimate the relative areas of differently-scaled boxes, a task at which we often make mistakes.

The packed bar chart approaches this challenge by allowing only the width of the box to vary with the data. The height of every box is identical, so readers only have to compare lengths.
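A rough sketch of the packing step, as I understand it (a greedy heuristic for illustration, not necessarily Xan's exact algorithm): the top few categories each anchor a row of fixed height, and every remaining category is appended to whichever row is currently shortest.

```python
def packed_bars(values, n_primary=3):
    """Greedy sketch of the packed-bars layout: the n_primary largest
    values each start a row; every other value joins the row with the
    smallest running total, keeping the rows roughly equal in length."""
    order = sorted(values, reverse=True)
    rows = [[v] for v in order[:n_primary]]
    for v in order[n_primary:]:
        min(rows, key=sum).append(v)
    return rows

rows = packed_bars([10, 9, 8, 5, 4, 3, 2, 1], n_primary=3)
```

Because every value lands in some row, the total length of all rows still equals the total of the data - the part-to-whole reading survives, unlike in the piled-bars design discussed below.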

Via Twitter, Adil pointed me to this article by him and his collaborators that describes a few alternatives.

One of the options is the "wrapped bar chart" introduced by Stephen Few. Like Xan, he restricts the variation to the lengths of the bars while keeping the heights fixed. But he goes further, and abandons packing completely. Instead of packing, Few wraps the bars. Start with a large bar chart with many categories filling up a tall plotting area. He then divides the bars into blocks and places them side by side. Here is an example showing the 50 states, ranked by total electoral votes:

Umd_few_wrapped_bars

You can see the white space because there is no packing. This version makes it easier to see the relative importance of the different blocks of states, but it is tough to tell how much the first block of 13 states accounts for. The wrapped bar chart is organized like small multiples, except that the scale in each panel is allowed to vary.
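Minus the drawing, Few's wrapping step is just sorting and slicing. A minimal sketch:

```python
import math

def wrapped_bars(values, n_blocks=4):
    """Sketch of the wrapped-bar-chart layout: sort descending, then
    split into side-by-side blocks (columns) of roughly equal count."""
    order = sorted(values, reverse=True)
    size = math.ceil(len(order) / n_blocks)
    return [order[i:i + size] for i in range(0, len(order), size)]

# e.g. 50 values (standing in for the 50 states) wrapped into 4 blocks
blocks = wrapped_bars(list(range(1, 51)), n_blocks=4)
```

With 50 items and four blocks, the first column holds 13 bars - which is why the example above invites the hard-to-answer question of how much those 13 states account for.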

Another option is the "piled bars." This option, presented by Yalçın, Elmqvist, and Bederson, brings packing back. But unlike the packed bars or the treemap, the outside envelope no longer represents the total amount. In the "piled bars" design, the top X categories act as the canvas, and the smaller categories are packed inside these bars rather than around them. Take a look at this example, which plots GDP growth of different countries:

Umd_piledbars

The inset on the left column is instructive. The green (smallest) and red (medium) bars are packed inside the blue (largest) bars. In this example, it doesn't make sense to add up GDP growth rates, so it doesn't matter that the outer envelope does not equal the total. It would not work as well with the electoral vote data in the previous example.

I wonder whether a piled dot plot works better than a piled bar chart. This piled bar chart shares a problem with the stacked area chart, which is that other than the first piece, all the other pieces represent the differences between the respective data and the next lower category, rather than the value of the data point. Readers are led to compare the green, red and blue pieces but the corresponding values are not truly comparable, or of primary interest.

This problem goes away if the bars are represented by dots.

***

What strikes me as the key paragraph in the Yalçın et al. article is the following:

To understand graphical perception performance, we studied three basic tasks:

1) How accurately can we estimate the difference between two data points?
2) How accurately can we estimate the rank of a data point among all the rest?
3) How accurately can we guess the distribution characteristic of the whole dataset?

As chart designers, we have to prioritize these tasks. No single chart form is likely to prevail on all three. So if the designer starts with the question that he or she wants to address, that leads to the key task the visualization should enable, which in turn leads to the chart form that best facilitates that task.


Sorting out what's meaningful and what's not

A few weeks ago, the New York Times Upshot team published a set of charts exploring the relationship between school quality, home prices and commute times in different regions of the country. The following is the chart for the New York/New Jersey region. (The article and complete data visualization is here.)

Nyt_goodschoolsaffordablehomes_nyc

This chart is primarily a scatter plot of home prices against school quality, which is represented by average test scores. The designer wants to explore the decision to live in the so-called central city versus the decision to live in the suburbs, hence the centering of the chart about New York City. Further, the colors of the dots represent the average commute times, which are divided into two broad categories (under/over 30 minutes). The dots also have different sizes, which I presume measures the populations of each district (but there is no legend for this).

This data visualization has generated some negative reviews, and so has the underlying analysis. In a related post on the sister blog, I discuss the underlying statistical issues. For this post, I focus on the data visualization.

***

One positive about this chart is the designer has a very focused question in mind - the choice between living in the central city or living in the suburbs. The line scatter has the effect of highlighting this particular question.

Boy, those lines are puzzling.

Each line connects New York City to a specific school district. The slope of the line is, nominally, the trade-off between home price and school quality. The slope is the change in home prices for each unit shift in school quality. But these lines don't really measure that tradeoff because the slopes span too wide a range.

The average person should have a relatively fixed home-price-to-school-quality trade-off. If we could estimate this average trade-off, it should be represented by a single slope (with a small cone of error around it). The wide range of slopes actually undermines this chart, as it demonstrates that there are many other variables that factor into the decision. Other factors are causing the average trade-off coefficient to vary so widely.
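Estimating that single average trade-off is a pooled regression rather than a fan of district-by-district lines. A sketch with invented district data (the numbers are placeholders, not from the NYT dataset):

```python
import numpy as np

# Hypothetical districts: school-quality difference vs. the central city
# (grade levels) and home-price difference vs. the central city ($000s)
quality = np.array([-1.0, -0.5, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
price = np.array([-150, -80, 40, 90, 160, 190, 260, 300])

# One pooled slope: the average price change per unit of school quality,
# instead of a separate line (slope) from the city to each district
slope, intercept = np.polyfit(quality, price, deg=1)
```

The residuals around that single line, rather than the line itself, are where the "many other variables" live.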

***

The line scatter is confusing for a different reason. It reminds readers of a flight route map. For example:

BA_NYC_Flight_Map

The first instinct may be to interpret the locations on the home-price-school-quality plot as geographical. Such misinterpretation is reinforced by the third factor being commute time.

Additionally, on an interactive chart, it is typical to hide the data labels behind mouseovers or clicks. I like the fact that the designer identifies some interesting locales by name without requiring a click. However, one slight oversight is the absence of data labels for NYC. There is nothing to click on to reveal the commute/population/etc. data for central cities.

***

In the sister blog post, I mentioned another difficulty - most of the neighborhoods are situated to the right and below New York City, challenging the notion of a "trade-off" between home price and school quality. It appears as if most people can spend less on housing and also send kids to better schools by moving out of NYC.

In the New York region, commute times may be the stronger factor relative to school quality. Perhaps families chose NYC because they value shorter commute times more than better school quality. Or, perhaps the improvement in school quality is not sufficient to overcome the negative of a much longer commute. The effect of commute times is hard to discern on the scatter plot as it is coded into the colors.

***

A more subtle issue can be seen when comparing San Francisco and Boston regions:

Nyt_goodschoolsaffordablehomes_sfobos

One key insight is that San Francisco homes are on average twice as expensive as Boston homes. Also, the variability of home prices is much higher in San Francisco. By using the same vertical scale on both charts, the designer makes this insight clear.

But what about the horizontal scale? There isn't any explanation of this grade-level scale. It appears that the central cities sit close to the average grade level in each chart, so it seems that each region is individually centered. Otherwise, I'd expect to see more variability in the horizontal positions of the dots across regions.

If one scale is fixed across regions, and the other scale is adapted to each region, then we shouldn't compare the slopes across regions. The fact that the lines are generally steeper in the San Francisco chart may be an artifact of the way the scales are treated.

***

Finally, I'd recommend aggregating the data rather than plotting individual school districts. The obsession with magnifying little details is a Big Data disease. On a chart like this, users are encouraged to click on individual districts and make inferences. However, as I discussed in the sister blog (link), most of the differences in school quality shown on these charts are not statistically meaningful (whereas the differences on the home-price scale are definitely notable).

***

If you haven't already, see this related post on my sister blog for a discussion of the data analysis.


Your charts need the gift of purpose

Via Twitter, I received this chart:

Wp_favorability_overall

My readers are nailing it when it comes to finding charts that deserve close study. On Twitter, the conversation revolved around the inversion of the horizontal axis. Favorability is associated with positive numbers, and unfavorability with negative numbers, and so, it seems the natural ordering should be to place Favorable on the right and Unfavorable on the left.

Ordinarily, I'd have a problem with the inversion but here, the designer used the red-orange color scheme to overcome the potential misconception. It's hard to imagine that orange would be the color of disapproval, and red, of approval!

I am more concerned about a different source of confusion. Take a look at the following excerpt:

Wp_favorability_overall inset

If you had to guess, what are the four levels of favorability? Using the same positive-negative scale discussed above, most of us will assume that, going left to right, we are looking at Strongly Favorable, Favorable, Unfavorable, Strongly Unfavorable. The people in the middle are neutrals and the people on the edges are extremists.

But we'd be mistaken. The order going left to right is Favorable, Strongly Favorable, Strongly Unfavorable, Unfavorable. The designer again used tints and shades to counter our pre-conception. This is less successful because the order defies logic. It is a double inversion.

The other part of the chart I'd draw attention to is the column of data printed on the right. Each such column is an act of giving up - the designer admits he or she couldn't find a way to incorporate that data into the chart itself. It's like a footnote in a book. The problem arises because such a column frequently contains very important information. On this chart, the data are "net favorable" ratings, the proportion of Favorables minus the proportion of Unfavorables, or visually, the length of the orange bar minus the length of the red bar.

The net rating is a succinct way to summarize the average sentiment of the population. But it's been banished to a footnote.
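For the record, the banished number is simple arithmetic on the four bar segments (the figures in the example call are made up):

```python
def net_favorable(strongly_fav, fav, unfav, strongly_unfav):
    """Net favorable rating, in percentage points: total favorables
    minus total unfavorables."""
    return (strongly_fav + fav) - (unfav + strongly_unfav)

# A hypothetical politician at 25/30/20/15 (with 10% undecided):
rating = net_favorable(25, 30, 20, 15)
```

Collapsing four segments into this one signed number is what makes the party-level comparison at the end of this post possible.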

***

Anyone who follows American politics a little in recent years recognizes the worsening polarization of opinions. A chart showing the population average is thus rather meaningless. I'd like to see the above chart broken up by party affiliation (Republican, Independent, Democrat).

This led me to the original source of the chart. It turns out that the data came from a Fox News poll but the chart was not produced by Fox News - it accompanied this Washington Post article. Further, the article contains three other charts, broken out by party affiliation, as I hoped. The headline of the article was "Bernie Sanders remains one of the most popular politicians..."

But reading three charts, printed vertically, is not the simplest matter. One way to make it easier is to gift the chart a purpose. It turns out there are no surprises among the Republican and Democratic voters - they are as polarized as one can imagine. So the real interesting question in this data is the orientation of the Independent voters - are they more likely to side with Democrats or Republicans?

Good house-keeping means when you acquire stuff, you must remove other stuff. After adding the party dimension, it makes more sense to collapse the favorability dimension - precisely by using the net favorable rating column:

Redo_wp_favorability_chart


Much more to do after selecting a chart form

Brady_ryan_heads

I sketched out this blog post right before the Super Bowl - and was really worked up as I happened to be flying into Atlanta right after they won (well, according to any of our favorite "prediction engines," the Falcons had a 95%+ chance of winning it all a minute from the end of the 4th quarter!). What I'd give to be in the Super Bowl-winning city the day after the victory!

Maybe next year. I didn't feel like publishing about Super Bowl graphics when the wound was so raw. But now is the moment.

The following chart came from the Orange County Register in the run-up to the Super Bowl. (The bobble-head quarterbacks also came from OCR.) The original article is here.

Ocr_patriots_atlanta

The choice of a set of dot plots is inspired. The dot plot is one of those under-utilized chart types - for comparing two or three objects along a series of metrics, it has to be one of the most effective charts.

To understand this type of design, readers have to collect three pieces of information: first, recognize the dot symbols - which color or shape represents which object being compared; second, understand the direction of the axis; third, recognize that the distance between the paired dots encodes the size of the difference between the two objects.

The first task is easy enough here as red stands for Atlanta and blue for New England - those being the team colors.

The second task is deceptively simple. It appears that a ranking scale is used for all metrics with the top ("1st") shown on the left side and the bottom ("32nd") shown on the right. Thus, all 32 teams in the NFL are lined up left to right (i.e. best to worst).

Now, focus your attention on the "Interceptions Caught" metric, third row from the bottom. The designer indicated "Fewest" on the left and "Most" on the right. For those who don't know American football, an "interception caught" is a good defensive play; it means your defensive player grabs a ball thrown by the opposing team (usually their quarterback), causing a turnover. Therefore, the more interceptions caught, the better your defense is playing.

Glancing back at the chart, you learn that on the "Interceptions Caught" metric, the worst team is shown on the left while the best team is shown on the right. The same reversal happened with "Fumbles Lost" (fewest is best), "Penalties" (fewest is best), and "Points Allowed per Game" (fewest is best). For four of nine metrics, right is best while for the other five, left is best.

The third task is the most complicated. A ranking scale always has the weakness that a gap of one rank does not yield information on how important the gap is. It's a complicated decision to select what type of scale to use in a chart like this, and in this post, I shall ignore this issue, and focus on a visual makeover.

***

I find the nine arrays of 32 squares, essentially the grid system, much too insistent, elevating information that belongs to the background. So one of the first fixes is to soften the grid system, and the labeling of the axes.

In addition, given the meaningless nature of the rank number (as mentioned above), I removed those numbers and used team logos instead. The locations on the axes are sufficient to convey the relative ranks of the two teams against the field of 32.

Redo_newenglandatlanta1

Most importantly, the directions of all metrics are now oriented in such a way that moving left is always getting better.
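The reorientation is a mechanical flip of the rank scale for any metric whose raw ordering runs the "wrong" way. A sketch (the ranks in the example are hypothetical):

```python
N_TEAMS = 32

def plot_position(count_rank, more_is_better):
    """Convert a rank-by-raw-count (1 = fewest ... 32 = most) into a
    left-to-right plotting position where position 1 (leftmost) is
    always the best team on that metric."""
    if more_is_better:
        # e.g. interceptions caught: the team with the most (rank 32
        # by count) should plot at the best position, 1
        return N_TEAMS + 1 - count_rank
    # e.g. penalties: fewest (rank 1 by count) is already best
    return count_rank

pos = plot_position(30, more_is_better=True)  # a near-the-top defense
```

Applying this to the four reversed metrics is all it takes to make "left" mean "better" across every row.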

***

While using logos for sports teams is natural, I ended up replacing those, as the size of the dots is such that the logos are illegible anyway.

The above makeover retains the original order of metrics. But to help readers address the key question of this chart - which team is better, the designer should arrange the metrics in a more helpful way. For example, in the following version, the metrics are subdivided into three sections: the ones for which New England is significantly better, the ones for which Atlanta is much better, and the rest for which both teams are competitive with each other.

Redo_newenglandatlanta2

In the Trifecta checkup (link), I speak of the need to align your visual choices with the question you are trying to address with the chart. This is a nice case study of strengthening that Q-V alignment.


Involuntary head-shaking is probably not an intended consequence of data visualization

This chart is in the Sept/Oct edition of Harvard Magazine:

Naep scores - Nov 29 2016 - 4-21 PM

Pretty standard fare. It is even Tufte-esque in its sparing use of axes, labels, and other non-data ink.

Does it bug you how much work you need to do to understand this chart?

Here is the junkchart version:

Redo_2016naep_v2

In the accompanying article, the journalist declared that student progress on NAEP tests came to a virtual standstill, and this version highlights the drop in performance between the two periods, as measured by these "gain scores."

The clarity is achieved through proximity as well as slopes.

The column chart form has a number of deficiencies when used to illustrate this data. It requires too many colors. It induces involuntary head-shaking.

Most unforgivingly, it leaves us with a puzzle: does the absence of a column mean no progress, or unknown?

Inset_2016naep

PS. The inclusion of 2009 on both time periods is probably an editorial oversight.


An example of focusing the chart on a message

Via Jimmy Atkinson on Twitter, I am alerted to this chart from the Wall Street Journal.

Wsj_fiscalconstraints

The title of the article is "Fiscal Constraints Await the Next President." The key message is that "the next president looks to inherit a particularly dismal set of fiscal circumstances." Josh Zumbrun, who tipped Jimmy about this chart on Twitter, said that it is worth spending time on.

I like the concept of the chart, which juxtaposes the economic condition that faced each president at inauguration, and how his performance measured against expectation, as represented by CBO predictions.

The top portion of the graphic did require significant time to digest:

Wsj_fiscalconstraints_top

A glance at the sidebar informs me that there are two scenarios being depicted, the CBO projections and the actual deficit-to-GDP ratios. Then I got confused on several fronts.

One can of course blame the reader (me) for mis-reading the chart but I think dataviz faces a "the reader is always right" situation -- although there can be multiple types of readers for a given graphic so maybe it should say "the readers are always right."

I kept lapsing into thinking that the bold lines (in red and blue) are actual values while the gray line/area represents the predictions. That's because in most financial charts, the actual numbers are in the foreground and the predictions act as background reference materials. But in this rendering, it's the opposite.

For a while, a battle was raging in my head. There are a few clues that the bold red/blue lines cannot represent actual values. For one thing, I don't recall Reagan as a surplus miracle worker. Also, some of the time periods overlap, and one assumes that the CBO issued one projection only at a given time. The Obama line also confused me as the headline led me to expect an ugly deficit but the blue line is rather shallow.

Then, I got even more confused by the units on the vertical axis. According to the sidebar, the metric is the deficit-to-GDP ratio. The majority of the lines live in negative territory. Does the negative of the negative imply positive? Could the sharp upward turn of the Reagan line indicate massive deficit spending? Or maybe the axis should be relabeled the surplus-to-GDP ratio?

***

As I proceeded to re-create this graphic, I noticed that some of the tick marks are misaligned. There are various inconsistencies related to the start of each projection, the duration of the projection, the matching between the boxes and the lines, etc. So the data in my version is just roughly accurate.

To me, these data provide a primary reference for how presidents perform on the surplus/deficit relative to expectations, as established by the CBO projections.

Redo_wsj_deficitratios

I decided to only plot the actual surplus/deficit ratios for the duration of each president's tenure. The start of each projection line is the year in which the projection is made (as per the original). We can see the huge gap in every case. Either the CBO analysts are very bad at projections, or the presidents didn't do what they promised during the elections.