NYT hits the trifecta with this market correction chart

Yesterday, on the front page of the Business section, the New York Times published a pair of charts that perfectly captures the story of the ongoing turbulence in the stock market.

Here is the first chart:

Nyt_marketcorrection_1

Most market observers are very concerned about the S&P entering "correction" territory, which the industry arbitrarily defines as a drop of 10% or more from a peak. This corresponds to the shortest line on the above chart.

The chart encourages a longer-term reflection on the recent turbulence, using two reference points: the index has fallen back to its level at the start of 2018, and remains about 16 percent above where it stood at the beginning of 2017.

This is all done tastefully in a clear, understandable graphic.

Then, in a bit of a rhetorical flourish, the bottom of the page makes another point:

Myt_marketcorrection2

Viewed over a 10-year window, this chart shows that the S&P has exploded by 300% since 2009.

A connection is made between the two charts via the color of the lines, plus the simple, effective annotation "Chart above".

The second chart adds even more context, through vertical bands indicating previous corrections (drops of at least 10%). These moments are connected to the first graphic via the beige color. The extra material conveys the message that the market has survived multiple corrections during this long bull period.

Together, the pair of charts addresses a pressing current issue, and presents a direct, insightful answer in a simple, effective visual design, so it hits the Trifecta!

***

There are a couple of interesting challenges related to connecting plots within a multiple-plot framework.

While the beige color connects the concept of "market correction" in the top and bottom charts, it can also be a source of confusion. The orientation and the visual interpretation of those bands differ: the first chart uses one horizontal band while the second chart shows multiple vertical bands. In the first chart, the horizontal band encodes the definition of a correction; in the second chart, the vertical bands mark corrections that actually occurred.

Is there a solution in which the bands have the same orientation and same meaning?

***

These graphs solve a visual problem concerning the visualization of growth over time. Growth rates are anchored to some starting time. A ten-percent reduction means nothing unless you are told ten-percent of what.

Using different starting times as reference points, one gets different values of growth rates. With highly variable series of data like stock prices, picking starting times even a day apart can lead to vastly different growth rates.

The designer here picked several obvious reference times, and superimposed multiple lines on the same plotting canvas. Instead of having four lines on one chart, we have three lines on one, and four lines on the other. This limits the number of messages per chart, which speeds up cognition.

The first chart depicts this visual challenge well. Look at the start of 2018. The second line appears as if you could simply reset the starting point to zero and drag the remaining portion of the line down. The part of the top line (to the right of Jan 2018) looks just like the second line that starts at Jan 2018.

Jc_marketcorrection1

However, a closer look reveals that while the shape is the same, the magnitude isn't. There is a subtle re-scaling in addition to the reset to zero.

The same thing happens at the starting point of the third line. You can't just drag the portion of the first or second line down; a re-scaling is also needed.
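To see the re-scaling concretely, here is a minimal Python sketch using made-up index levels (the numbers are hypothetical, not the actual S&P values):

```python
# Hypothetical index levels at four points in time
prices = [100.0, 120.0, 150.0, 135.0]

def rebase(series, start):
    """Express each value as percent growth relative to series[start]."""
    base = series[start]
    return [100.0 * (x / base - 1.0) for x in series]

growth_from_t0 = rebase(prices, 0)  # growth since the first point
growth_from_t1 = rebase(prices, 1)  # growth since the second point

# Dragging the first line down is not enough: each value is also
# divided by (1 + g), where g is the growth up to the new base date.
```

Here the later-based line equals the earlier one shifted down and then divided by 1.2, the factor by which the index grew up to the new base date. That division is the subtle re-scaling.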


Using a bardot chart for survey data

Aleks J. wasn't amused by the graphs included in Verge's report about user attitudes toward the major Web brands such as Google, Facebook, and Twitter.

Let's use this one as an example:

Verge_survey_fb

Survey respondents are asked to rate how much they like or dislike the products and services from each of six companies, on a five-point scale. There is a sixth category for "No opinion/Don't use."

In making this set of charts, the designer uses six different colors for the six categories. This treats the categories as unordered, so the progression from disliking to liking carries no meaning. For a bipolar, five-point scale, it is more common to pick two extreme colors and then use shades to indicate the degree of liking or disliking. The middle category can be shown in a neutral color to express the neutrality of opinion.

The color choice baffles me. The two most prominent colors, gray and dark blue, correspond to two minor categories (no opinion and neutral), while the most important category - "greatly like" - is painted a modest yellow that fades into the background.

Verge sees the popularity of Facebook as the key message, which explains its top position among the six brands. However, readers familiar with the stacked bar chart form are likely to look for meaning in the category order, and come away frustrated.

***

In revising this chart, I introduce a second level of grouping: the six categories fit into three color groups: red for dislike, gray for no opinion/neutral, and orange for like. The like and dislike groups are plotted at the left and right ends of the chart while the two less informative categories are lumped toward the middle.

Redo_vergesurveyfb_1

I take great pleasure in dumping the legend box.

***

Now, when a five-point scale is used, many analysts like to analyze the Top 2, or Bottom 2 boxes. The choice of colors in the above chart facilitates this analysis. Adding some subtle dots makes it even better!

Redo_vergesurveyfb_2

Because this chart is a superposition of a stacked bar chart and a dot plot, I am calling this a bardot chart.

Also notice that the brands are re-ordered by Top 2 box popularity.
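The Top 2 box calculation and the re-ordering are simple to script. A minimal sketch, with hypothetical response shares (not Verge's actual numbers):

```python
# Hypothetical shares of respondents in each category, per brand
survey = {
    "Facebook": {"greatly dislike": 0.08, "somewhat dislike": 0.12,
                 "neutral": 0.20, "somewhat like": 0.28,
                 "greatly like": 0.22, "no opinion": 0.10},
    "Twitter":  {"greatly dislike": 0.10, "somewhat dislike": 0.15,
                 "neutral": 0.25, "somewhat like": 0.20,
                 "greatly like": 0.10, "no opinion": 0.20},
}

def top2(shares):
    """Top 2 box: share who somewhat or greatly like the brand."""
    return round(shares["somewhat like"] + shares["greatly like"], 10)

# Re-order brands by Top 2 box popularity, most liked first
order = sorted(survey, key=lambda b: top2(survey[b]), reverse=True)
```

The same function with the two "dislike" categories gives the Bottom 2 box, which the red dots in the chart mark.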


Let's not mix these polarized voters as the medians run away from one another

Long-time follower Daniel L. sent in a gem, by the Washington Post. This is a multi-part story about the polarization of American voters, nicely laid out, with superior analyses and some interesting graphics. Click here to see the entire article.

Today's post focuses on the first graphic. This one:

Wpost_friendsparties1

The key messages are written out on the 2017 charts: namely, 95% of Republicans are more conservative than the median Democrat, and 97% of Democrats are more liberal than the median Republican.

This is a nice statistical way of laying out the polarization. There are a number of additional insights one can draw from the population distributions: for example, in the bottom row, the Democrats have been moving left consistently, and decisively so in 2017. By contrast, Republicans moved decisively to the right from 2004 to 2017. I recall reading about polarization in past elections, but it is really shocking to see the extremes of 2017.

A really astounding but hidden feature is that the median Democrat and the median Republican were not too far apart in 1994 and 2004 but the gap exploded in 2017.

***

I'd like to solve a few minor problems on this graphic. It's a bit confusing to have each chart display information on both Republican and Democratic distributions. The reader has to understand that in the top row, the red area represents Republican voters but the blue line shows the median Democrat.

Also, I want to surface two key insights: the huge divide that developed in 2017, and the exploding gap between the two medians.

Here is the revised graphic:

  Redo_wpost_friendsparties1

On the left side, each chart focuses on one party, and its trend over the three years shown. Reading across the charts, one discovers that the median voter in one party is more extreme than essentially all of the voters in the other party. The same conclusion can be drawn from the exploding gap between the two parties' medians, which is explicitly plotted in the lower right chart. The top right chart is a pretty visualization of how polarized the country was in 2017.



Is this chart rotten?

Some students pointed me to a FiveThirtyEight article about Rotten Tomatoes scores that contains the following chart: (link to original)

Hickey-rtcurve-3

This is a chart that makes my head spin. Too much is going on, and all the variables in the plot are tangled with each other. Even after looking at it for a while, I still don't understand how the author looked at the above and drew this conclusion:

"Movies that end up in the top tier miss a step ahead of their release, mediocre movies stumble, and the bottom tiers fall down an elevator shaft."

(Here is the article. It's a great concept but a somewhat disappointing analysis coming from Nate Silver's site. I have written features for them before, so I know they ask good questions. Maybe they should apply the same level of rigor they use in editing feature writers to editing their staff writers.)


Egregious chart brings back bad memories

My friend Alberto Cairo said it best: if you see bullshit, say "bullshit!"

He was very incensed by this egregious "infographic": (link to his post)

Aul_vs_pp

Emily Schuch provided a re-visualization:

Emilyschuch_pp

The new version provides a much richer story of how Planned Parenthood has shifted priorities over the last few years.

It also exposes how the AUL (Americans United for Life) organization distorted the story.

The designer extracted only two of the lines, so readers do not see that the category of services that really replaced the loss of cancer screening was STI/STD testing and treatment. This is a bit ironic given the other story that circulated this week: the big jump in STDs among Americans (link).

Then, the designer placed the two lines on dual axes, which is a dead giveaway that something awful lies beneath.

Further, this designer dumped the data from the intervening years, and drew a straight line from the first year to the last. The straight arrow misleads by pretending that the trend has been linear, and that it will continue forever.
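To make the trick concrete, here is a minimal sketch (with invented numbers) of what drawing a straight line between endpoints implies about the intervening years:

```python
def straight_line(y0, y1, n):
    """The n evenly spaced points on a straight line from y0 to y1 --
    what the arrow pretends happened in the intervening years."""
    return [y0 + (y1 - y0) * i / (n - 1) for i in range(n)]

# Hypothetical actual values over five years vs. the implied line
actual = [2000, 2400, 1800, 1500, 1000]
implied = straight_line(actual[0], actual[-1], len(actual))
# The implied values smooth away the rise in year two entirely.
```

Any non-linear movement in the dropped years, up or down, is erased by construction.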

But the masterstroke is in the treatment of the axes. Let's look at the axes, one at a time:

The horizontal axis: Let me recap. The designer dumped all but the starting and ending years, and drew a straight line between the endpoints. While the data are no longer there, the axis labels are retained. So, our attention is drawn to an area of the chart that is void of data.

The vertical axes: Let me recap. The designer has two series of data with the same units (number of people served) and decided to plot each series on a different scale with dual axes. But readers are not supposed to notice the scales, so they do not show up on the chart.

To summarize, where there are no data, we have a set of functionless labels; where labels are needed to differentiate the scales, we have no axes.
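Hiding the scales matters because a dual axis amounts to applying an arbitrary linear transform to one of the series. A small sketch of the idea, with made-up numbers (not the actual Planned Parenthood data):

```python
def fit_to_range(series, lo, hi):
    """Linearly map a series onto [lo, hi] -- effectively what an
    unlabeled second axis lets a designer do to any series."""
    s_min, s_max = min(series), max(series)
    return [lo + (x - s_min) * (hi - lo) / (s_max - s_min) for x in series]

# Two series in the same units (people served), hypothetical counts
cancer_screenings = [2000, 1500, 1000]
abortions = [300, 320, 330]

# On a hidden second axis, the smaller series can be stretched to
# dwarf the larger one -- the crossing point is the designer's choice.
stretched = fit_to_range(abortions, 0, 2000)
```

With both the `lo` and `hi` endpoints free, the designer can manufacture almost any visual relationship between the two lines.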

***

This is a tried-and-true tactic employed by propagandists. The egregious chart brings back some bad memories.

Here is a long-ago post on dual axes.

Here is Thomas Friedman's use of the same trick.


Reimagining the league table

The reason for the infrequent posting is my travel schedule. I spent the past week in Seattle at JSM. This is an annual meeting of statisticians. I presented some work on fantasy football data that I started while writing Numbersense.

For my talk, I wanted to present the ubiquitous league table in a more useful way. The league table is a table of results and relevant statistics, at the team level, in a given sports league, usually ordered by the current winning percentage. Here is an example of ESPN's presentation of the NFL end-of-season league table from 2014.

Espn_league_table_nfl_2014

If you want to know weekly results, you have to scroll to each team's section, and look at this format:

Espn_cowboys_2014_team

For the graph that I envisioned for the talk, I wanted to show the correlation between Points Scored and winning/losing. Needless to say, the existing format is not satisfactory. This format is especially poor if I want my readers to be able to compare across teams.

***

The graph that I ended up using is this one:

  All_teams_season_winloss_vs_points

The teams are sorted by winning percentage. One thing should be pretty clear: raw Points Scored are only weakly associated with winning percentage. Especially in the middle of the Points distribution, other factors are at play in determining whether a team wins or loses.
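The weak association can be checked numerically. A minimal Pearson correlation sketch; the data below are invented for illustration, not the 2014 NFL numbers:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical season points scored vs. wins for a handful of teams
points = [320, 360, 380, 420, 460]
wins = [8, 5, 9, 6, 12]
r = pearson(points, wins)  # positive but well short of 1
```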

The overlapping dots present a bit of a challenge. I went through a few other drafts before settling on this.

The same chart but with colored dots, and a legend:

Jc_dots_two_layers

Only one line of dots per team instead of two, and also requiring a legend:

Jc_dots_one_line

Jittering is a popular solution for separating co-located dots, but the effect isn't very pleasing to my eye:

Jc_dots_oneline_jittered
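For reference, jittering just adds a small random offset to each dot so that co-located dots no longer overlap exactly. A minimal sketch (the positions are hypothetical):

```python
import random

def jitter(values, width=0.2, seed=0):
    """Add a small uniform random offset to each value so that
    dots sharing a position separate visually."""
    rng = random.Random(seed)  # fixed seed keeps the sketch reproducible
    return [v + rng.uniform(-width, width) for v in values]

# Five dots all at x = 3 (e.g. five games with the same points scored)
spread = jitter([3, 3, 3, 3, 3])
```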

Small multiples is another frequently prescribed solution. Here I separated the Wins and Losses in side-by-side panels. The legend can be removed.

Jc_dots_two_panels


As usual, sketching is one of the most important skills in data visualization; and you'd want to have a tool that makes sketching painless and quick.


Is something rotten behind there?

Via Twitter, Andrew B. (link) asked if I could comment on the following chart, published by PC Magazine as part of their ISP study. (link)

368348-fastest-isps-2014-usa-overall


This chart is decent, although it can certainly be improved. Here is a better version:

Redo_pcmag_isp

A couple of little things are worth pointing out. The choice of red and green to indicate download and upload speeds respectively is baffling. Red and green are loaded colors which I usually avoid. A red dot unfortunately signifies STOP, but ISP users would definitely not want to stop on their broadband superhighway!

In terms of plot symbols, up and down arrows are natural for this data.

***

Using the Trifecta checkup (link), I am most concerned about the D(ata) corner.

The first sign of trouble is the arbitrary construction of an "Index". This index isn't really an index because there is no reference level. The so-called index is really a weighted average of the download and upload speeds, with 80% of the weight given to the former. In reality, download speeds carry even more weight than that because, in their original units, download speeds are multiples of upload speeds.
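The implicit over-weighting is easy to demonstrate. A sketch with hypothetical speeds in Mbps (not PCMag's actual data):

```python
def speed_index(down, up, w_down=0.8):
    """A weighted average of the two speeds, in the style of the
    PCMag 'index' (the 80/20 weights come from the article)."""
    return w_down * down + (1 - w_down) * up

# Hypothetical: download speed is a multiple of upload speed
index = speed_index(50.0, 10.0)        # 0.8*50 + 0.2*10, about 42
download_share = (0.8 * 50.0) / index  # download's actual share of the index
```

Because the download number is five times the upload number here, download ends up contributing about 95% of the index value, far above its nominal 80% weight.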

Besides, putting these ISPs side by side gives an impression that they are comparable things. But direct comparison here is an invitation to trouble. For example, Verizon is represented only by its FIOS division (fiber optics). We have Comcast and Cox which are cable providers. The geographical footprints of these providers are also different.

This is not a trivial matter. Midcontinent operates primarily in North and South Dakota. Some other provider may do better than Midcontinent on average but within those two states, the other provider may perform much worse.

***

Note that the data came from the Speedtest website (over 150,000 speed tests). In my OCCAM framework (link), this dataset is Observational, without Controls, seemingly Complete, and Adapted (from speed testing for technical support).

Here is the author's disclosure, which should cause concern:

We require at least 50 tests from unique IP addresses for any vendor to receive inclusion. That's why, despite a couple of years of operation, we still don't have information on Google Fiber (to name one such vendor). It simply doesn't have enough users who took our test in the past year.

So, the selection of providers is based on the frequency of Speedtest queries. Is that really a good way to select samples? The author presents one possible explanation for Google Fiber's absence - that it has too few users (without offering any evidence). In general, there are many possible reasons for such an absence. One might be that a provider is so good that few customers complain about speeds, and therefore they don't run speed tests. Another might be that a provider has a homegrown tool for measuring speeds. Or any number of other reasons. These reasons create biases in various directions, which muddies the analysis.

Think about your own behavior. When was the last time you did a speed test? Did you use Speedtest.com? How did you hear about them? For me, I was pointed to the site by the tech support person at my ISP. Of course, the reason why I called them was that I was experiencing speed issues with my connection.

Given the above, do you think the set of speed measurements used in this study gives us accurate estimates of the speeds delivered by ISPs?

While the research question is well worth answering, and the visual form is passable, it is hard to take the chart seriously because of how this data was collected.



Designers fuss over little details and so should you

Those who attended my dataviz talks have seen a version of the following chart, which showed up yesterday in the New York Times (link):

Arctic_sea_ice

This chart shows the fluctuation in Arctic sea ice coverage over time.

The dataset is a simple time series but contains a bit of complexity. There are several ways to display this data that help readers understand the complex structure. This particular chart should be read at two levels: there is a seasonal pattern, illustrated by the dotted curve, and then there are annual fluctuations around that average seasonal pattern. Each year's curve departs from the average in one way or another.
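That two-level reading mirrors a simple decomposition: an average seasonal curve, plus each year's departure from it. A minimal sketch with toy numbers (not the actual ice data):

```python
def seasonal_average(years):
    """Average value at each point in the season, across all years."""
    n = len(years)
    return [sum(year[m] for year in years) / n
            for m in range(len(years[0]))]

def anomaly(year, seasonal):
    """One year's departure from the average seasonal curve."""
    return [v - s for v, s in zip(year, seasonal)]

# Two toy "years", each measured at four points in the season
years = [[20.0, 10.0, 5.0, 15.0],
         [18.0, 8.0, 3.0, 13.0]]
avg = seasonal_average(years)      # the dotted curve's analogue
low_year = anomaly(years[1], avg)  # uniformly below the average
```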

The 2015 line (black) is hugging the bottom of the envelope of curves, which means ice coverage is at a historic low.

Meanwhile the lines for 2010-2014 (blue) all trace near the bottom of the historic collection of curves.

***

There are several nice touches on this graphic, such as the ample annotation describing interesting features of the data, the smart use of foreground/background to make comparisons, and the use of countries and states (note the vertical axis labels) to bring alive the measure of coverage volume.

Check out my previous post about this data set.

Also, this post talks about finding real-life anchors to help readers judge size data.

My collection of posts about New York Times graphics.


PS. As Mike S. pointed out to me on Twitter, the measure is "ice cover", not ice volume, so I edited the wording above. The language here is tricky because we don't usually talk about the "cover" of a country or state, so I am using "coverage". The term "surface area" also makes more sense for describing ice than it does for a country.


Three axes or none

Catching up on some older submissions. Reader Nicholas S. saw this mind-boggling chart about Chris Nolan movies when Interstellar came out:

Vulture_chris_nolan_by_numbers

This chart was part of an article by Vulture (link).

This may be the first time I have seen not one, not two, but three different scales on the same chart.

First, we have the Rotten Tomatoes score for each movie, expressed as a percentage:

Vulture_chrisnolan_score

The designer chopped off 49% of each column. So the heights of the columns are not proportional to the data.

Next we see the running time of movies in minutes (dark blue columns):

Vulture_chrisnolan_runtime

For this series, the designer hid 40 minutes' worth of each movie below the axis. So again, the heights of the columns do not convey the relative lengths of the movies.
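Chopping a constant off every column distorts the ratios between them. A quick sketch of how much, using hypothetical scores:

```python
def drawn_ratio(a, b, chop=0.0):
    """Ratio of two column heights after chopping `chop` off each."""
    return (a - chop) / (b - chop)

# Two hypothetical Rotten Tomatoes scores, 90 and 70
true_ratio = drawn_ratio(90, 70)        # about 1.29: a ~29% difference
chopped = drawn_ratio(90, 70, chop=49)  # about 1.95: drawn nearly twice as tall
```

The smaller the columns get after chopping, the more exaggerated the drawn differences become.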

Thirdly, we have light blue columns representing box office receipts:

  Vulture_chrisnolan_boxoffice

Or maybe not. I can't figure out what is the scale used here. The same-size chunks shown above display $45,000 in one case, and $87 million in another!

So the designer kneaded together three flawed axes. Or perhaps the designer simply banished the idea of an axis altogether. Either way, the experiment foundered.

***

Here is the data in three separate line charts:

Redo_chrisnolanfilms

***

In a Trifecta Checkup (link), the Vulture chart falls into Type DV. The question might be the relationship between running time and box office, and between Rotten Tomatoes Score and box office. These are very difficult to answer.

The box office number here refers to the lifetime gross ticket receipts from theaters. The movie industry insists on publishing these unadjusted numbers, which are completely useless. At a minimum, the numbers should be adjusted for inflation (ticket prices) and for population growth if we are to use them to measure commercial success.
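The inflation adjustment itself is one line. A sketch with a hypothetical price index (not actual CPI or ticket-price values):

```python
def real_dollars(nominal, price_index_then, price_index_now):
    """Restate a past dollar amount in today's dollars."""
    return nominal * (price_index_now / price_index_then)

# Hypothetical: a $100M gross earned when the price index stood at 200,
# restated for a year when the index stands at 250
adjusted = real_dollars(100.0, 200.0, 250.0)  # 125.0
```

A per-capita version would further divide each figure by that year's population, addressing the population-growth point.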

The box office number is also suspect because it ignores streaming, digital, syndication, and other forms of revenues. This is a problem because we are comparing movies across time.

You might have noticed that both running time and box office numbers have gone up over time. (That is to say, running time and box office numbers are highly correlated.) Do you think that is because moviegoers are motivated to see longer films, or because movies are just getting longer?


PS. [12/15/2014] I will have a related discussion on the statistics behind this data on my sister blog. Link will be active Monday afternoon.


A small step for interactivity

Alberto links to a nice Propublica chart on average annual spend per dialysis patient on ambulances by state. (link to chart and article)

Propublica_ambulance

It's a nice small-multiples setup with two tabs, one showing the states in order of descending spend and the other, alphabetical.

In the article itself, they excerpt the top of the chart containing the states that have suspiciously high per-patient spend.

Several types of comparisons are facilitated: comparison over time within each state, comparison of each state against the national average, comparison of trend across states, and comparison of state to state given the year.

The first comparison is simple as it happens inside each chart component.

The second type of comparison is enabled by the orange line being replicated on every component. (I'd have removed the columns from the first component as it is both redundant and potentially confusing, although I suspect that the designer may need it for technical reasons.)

The third type of comparison is also relatively easy. Just look at the shape of the columns from one component to the next.

The fourth type of comparison is where the challenge lies for any small-multiples construction. It is also where this chart hides a secret. If you mouse over any year in any component, every component highlights that year's data, so that one can easily make state-by-state comparisons. Like this for 2008:

Propublica_ambulance_2008

You see that every chart now shows 2008 on the horizontal axis and the data label is the amount for 2008. The respective columns are given a different color. Of course, if this is the most important comparison, then the dimensions should be switched around so that this particular set of comparisons occurs within a chart component--but obviously, this is a minor comparison so it gets minor billing.

***

I love to see this type of thoughtfulness! This is an example of using interactivity in a smart way, to enhance the user experience.

The Boston subway charts I featured before also introduce interactivity in a smart way. Make sure you read that post.

Also, I have a few comments about the data analysis on the sister blog.