
This exercise plan for your lock-down work-out is inspired by Venn

A Twitter follower did not appreciate this chart from Nature showing the combinations of flu-like symptoms that people reported to a UK tracking app.

Nature tracking app venn diagram

It's a super-complicated Venn diagram. I have written about this type of chart before (see here); it appears to be somewhat popular in the medicine/biology field.

A Venn diagram is not a data visualization because it doesn't plot the data.

Notice that the different compartments of the Venn diagram do not have data encoded in the areas. 

The chart also fails the self-sufficiency test because if you remove the data from it, you end up with a data container - like a world map showing country boundaries and no data.

If you're new here: if a graphic requires the entire dataset to be printed on it for comprehension, then the visual elements of the graphic are not doing any work. The graphic cannot stand on its own.

When the Venn diagram gets complicated, teeming with many compartments, there will be quite a few empty compartments. If I had to make this chart, I'd be nervous about leaving out a number or two by accident. An empty compartment can be truly empty or an oversight.

Another trap is that the total doesn't add up. The numbers on this graphic add to 1,764 whereas the study population in the preprint was 1,702. Interestingly, this diagram doesn't show up in the research paper. Given how they winnowed down the study population from all the app downloads, I'm sure there is an innocent explanation as to why those two numbers don't match.

***

The chart also strains the reader. Take the number 18, right in the middle. What combination of symptoms did these 18 people experience? You have to figure out the layers sitting beneath the number. You see dark blue, light blue, orange. If you blink, you might miss the gray at the bottom. Then you have to flip your eyes up to the legend to map these colors to diarrhoea, shortness of breath, anosmia, and fatigue. Oops, I missed the yellow, which is cough. To be sure, you check the remaining categories to see where they stand - I've named all of them except fever. The number 18 lies outside the fever oval, so these 18 people reported every symptom except fever.

What's even sadder is there is not much gain from having done it once. Try to interpret the number 50 now. Maybe I'm just slow but it doesn't get better the second or third time around. This graphic not only requires work but painstaking work!

Perhaps a more likely question is how many people who had a loss of smell also had fever. Now it's pretty easy to locate the part of the dark gray oval that overlaps with the orange oval. But I have to add all those numbers: 69+17+23+50+17+46 = 222. That's not enough. Next, I must find the total of all the numbers inside the orange oval, which is 222 plus whatever is inside the orange but outside the dark gray; that second piece turns out to be 829. So among those who had lost their sense of smell, the proportion who also had fever is 222/(222+829) = 21 percent.
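For readers who want to double-check the arithmetic, here is a minimal sketch in Python. The six overlap values are the ones read off the diagram above; 829 is the sum of the compartments with anosmia but no fever.

```python
# Sketch of the overlap arithmetic; values are read off the Venn diagram above
both = 69 + 17 + 23 + 50 + 17 + 46   # compartments with both anosmia and fever = 222
anosmia_no_fever = 829               # sum of compartments with anosmia but not fever

share_with_fever = both / (both + anosmia_no_fever)
print(f"{share_with_fever:.0%}")     # about 21%
```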

How many people had three or more symptoms? I'll let you figure this one out!

Graphing the extreme

The Covid-19 pandemic has brought about extremes. So many events have never happened before. I doubt The Conference Board has ever seen such a collapse in CEO confidence in the economy. Here is their graphic showing this extreme event:

Tcb_COVID-19-CEO-confidence-1170

To appreciate this effort, you have to see the complexity of the underlying data. There is a CEO Confidence Measure. The measure has three components. Each component is scored on a scale, probably from 0 to 100, with 50 as the middle. Then, the components are aggregated into an overall score. The measure is estimated repeatedly over time, and they ran two surveys during the pandemic, before and after the lockdown in the U.S. And then there's the rightmost column, which provides another reference point for one of the components of the measure.

One can easily get one's limbs tied up in knots trying to tame this beast.

Of course, the tiny square stands out. CEOs have a super pessimistic outlook for the overall economy over the next 6 months. The number 3 on this scale probably means almost every respondent has a negative view.

The grid arrangement does not appear attractive but it is terrifically functional. The grid delivers horizontal and vertical comparisons. Moving vertically, we learn that even at the start of the year, the average sentiment was negative (9 points below 50), then it lost another 10 points, and finally imploded.

Moving horizontally, we can compare related metrics since everything is conveniently expressed on the same scale. While CEOs are depressed about the overall economy, they have slightly more faith in their own industry. And then moving left, we learn that many CEOs expect a V-shaped recovery, a really fast bounceback within 6 months.

As the Conference Board surveys this group again in the near future, I wonder if the optimism still holds. 

The Conference Board has an entire set of graphics about the economic crisis of Covid-19 here. For some reason, they don't allow deep links, so I can't point you directly to this chart.
 


Reviewing the charts in the Oxford Covid-19 study

On my sister (book) blog, I published a mega-post that examines the Oxford study cited two weeks ago as a counterpoint to the "doomsday" Imperial College model. These studies bring attention to the art of statistical modeling, and the six posts together are designed to give you a primer; you don't need math to get a feel for it.

One aspect that didn't make it to the mega-post is the data visualization. Sad to say, the charts in the Oxford study (link) are uniformly terrible. Figure 3 is typical:

Oxford_covidmodel_fig3

There are numerous design decisions that frustrate readers.

a) The graphic contains two charts, one on top of the other. The left axis extends floor-to-ceiling, giving the false impression that it is relevant to both charts. In fact, the graphic uses dual axes. The bottom chart references the axis shown in the bottom right corner; the left axis is meaningless. The two charts should be drawn separately.

For those who have not read the mega-post about the Oxford models, let me give a brief description of what these charts are saying. The four colors refer to four different models - these models have the same structure but different settings. The top chart shows the proportion of the population that is still susceptible to infection by a certain date. In these models, no one can get re-infected, and so you see downward curves. The bottom chart displays the growth in deaths due to Covid-19. The first death in the UK was reported on March 5.  The black dots are the official fatalities.

b) The designer allocates two-thirds of the space to the top chart, which has a much simpler message. This causes the bottom chart to be compressed beyond cognition.

c) The top chart contains just five lines, smooth curves of the same shape but different slopes. The designer chose to use thick colored lines with black outlines. As a result, nothing precise can be read from the chart. When does the yellow line start dipping? When do the two orange lines start to separate?

d) The top chart should have included margins of error. These models are very imprecise due to the sparsity of data.

e) The bottom chart should be rejected by peer reviewers. We are supposed to judge how well each of the five models fits the cumulative death counts. But three design decisions conspire to prevent us from getting the answer: (i) the vertical axis is severely compressed by tucking this chart underneath the top chart; (ii) the vertical axis uses a log scale, which compresses large values; and (iii) the dots are larger than life.

As I demonstrated in this post, also from the sister blog, many models - especially those assuming an exponential growth rate - have poor fits after the first few days. Charting in log scale hides the degree of error.

f) There is a third chart squeezed into the same canvas. Notice the four little overlapping hills located around Feb 1. These hills are probability distributions, which are presented without an appropriate vertical axis. Each hill represents a particular model's estimate of the date on which the novel coronavirus entered the UK. But that date is unknowable. So the model expresses this uncertainty using a probability distribution. The "peak" of the distribution is the most likely date. The spread of the hill gives the range of plausible dates, and the height at a given date indicates the chance that the virus was introduced on that date. The missing axis is a probability scale, which is neither the left nor the right axis.

***

The bottom chart shows up in a slightly different form as Figure 1(A).

Oxford_covidmodels_Fig1A

Here, the thick green, gray (blocked), and red lines correspond to the yellow/orange/red diamonds in Figure 3. The thin green and red lines show the margins of error I referred to above (these lines are not explicitly explained in the chart annotation). The actual counts are shown as white rather than black diamonds.

Again, the thick lines and big diamonds conspire to swamp the gaps between model fit and actual data. Again, notice the use of a log scale. This means that the same amount of gap signifies much bigger errors as time moves to the right.

When using the log scale, we should label it using the original units. With a base 10 logarithm, the axis should have labels 1, 10, 100, 1000 instead of 0, 1, 2, 3. (This explains my previous point - why small gaps between a model line and a diamond can mean a big error as the counts go up.)
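For instance, here is a minimal matplotlib sketch of the labeling point, using made-up counts rather than the study's data: plot the raw values on a log-scale axis and the labels come out in original units.

```python
import matplotlib.pyplot as plt

# Made-up cumulative death counts, purely for illustration
days = list(range(1, 11))
deaths = [1, 2, 4, 8, 16, 35, 70, 150, 300, 600]

fig, ax = plt.subplots()
ax.plot(days, deaths, marker="o")
ax.set_yscale("log")   # axis is labeled 1, 10, 100, 1000 (original units),
                       # not the log-transformed values 0, 1, 2, 3
ax.set_xlabel("Days since first death")
ax.set_ylabel("Cumulative deaths")
plt.show()
```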

Also notice how the line of white diamonds makes it impossible to see what the models are doing prior to March 5, the date of the first reported death. The models apparently start showing fatalities prior to March 5. This is a key part of their conclusion - the Oxford team concluded that the coronavirus has been circulating in the U.K. even before the first infection was reported. The data visualization should therefore bring out the difference in timing.

I hope by the time the preprint is revised, the authors will have improved the data visualization.


The hidden bad assumption behind most dual-axis time-series charts


DC sent me the following chart over Twitter. It supposedly showcases one sector that has bucked the economic collapse, and has conversely been boosted by the stay-at-home orders around the world.

Covid19-pornhubtraffic


At first glance, I was drawn to the yellow line and the axis title on the right side. I understood the line to depict the growth rate in traffic "vs a normal day". The trend is clear as day. Since March 10 or so, the website has become more popular by the week.

For a moment, I thought the thin black line was a trendline that fits the rather ragged traffic growth data. But looking at the last few data points, I was afraid it was a glove that didn't fit. That's when I realized this is a dual-axis chart. The black line shows the worldwide total Covid-19 cases, with the axis shown on the left side.

As with any dual-axis chart, you can modify the relationship between the two scales to paint a different picture.

This next chart says that the site traffic growth lagged Covid-19 growth until around March 14.

Junkcharts_ph_dualaxis1

This one gives an ambiguous picture. One can't really say there is a strong correlation between the two time series.

Junkcharts_ph_dualaxis2
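To see how easily the two alternative pictures above can be produced, here is a minimal sketch of a dual-axis chart in matplotlib, with made-up stand-ins for both series. The perceived relationship is entirely at the mercy of the two sets of axis limits.

```python
import matplotlib.pyplot as plt
import numpy as np

# Made-up stand-ins for the two series (not the actual data)
days = np.arange(30)
cases = np.cumsum(np.random.poisson(5000, 30))                 # cumulative Covid-19 cases
traffic_growth = 5 + 0.6 * days + np.random.normal(0, 2, 30)   # traffic growth (% vs a normal day)

fig, ax1 = plt.subplots()
ax1.plot(days, cases, color="black")
ax2 = ax1.twinx()
ax2.plot(days, traffic_growth, color="goldenrod")

# Stretch or shift either scale and the two lines cross earlier, later, or not at all
ax1.set_ylim(0, cases.max() * 1.1)
ax2.set_ylim(0, 40)
plt.show()
```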

***

Now, let's look at the chart from the DATA corner of the Trifecta Checkup (link). The analyst selected definitions for the two series that are as far apart as possible, so this chart gives a good case study of the intricacy of data definitions.

First, notice the smoothness of the line of Covid-19 cases. This data series is naturally "smoothed" because it is an aggregate of country-level counts, which themselves are aggregates of regional counts.

By contrast, the line of traffic growth rates has not been smoothed. That's why we see sharp ups and downs. This series should be smoothed as well.

Junkcharts_ph_smoothedtrafficgrowth

The seven-day moving average line indicates a steady growth in traffic. The day-to-day fluctuations represent noise that distracts us from seeing the trendline.
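A seven-day moving average is a one-liner in pandas; the sketch below uses a hypothetical traffic-growth series rather than the site's actual data.

```python
import pandas as pd
import numpy as np

# Hypothetical daily traffic-growth series (% vs a normal day), for illustration only
dates = pd.date_range("2020-03-01", periods=45, freq="D")
traffic_growth = pd.Series(5 + 0.5 * np.arange(45) + np.random.normal(0, 3, 45), index=dates)

# Seven-day moving average irons out the day-to-day noise
smoothed = traffic_growth.rolling(window=7, center=True).mean()
```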

Second, the Covid-19 series is a cumulative count, which means it's constantly heading upward over time (on rare days, it may go flat but never decrease). The traffic series represents change, is not cumulative, and so it can go up or down over time. To bring the data closer together, the Covid-19 series can be converted into new cases so they are change values.

Junkcharts_ph_smoothedcovidnewcases

Third, the traffic series is a growth rate expressed as a percentage while the Covid-19 series is a count. It is possible to turn the Covid-19 counts into growth rates as well. Like this:

Junkcharts_ph_smoothedcovidcasegrowth
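Both conversions - cumulative counts to daily new cases, and counts to day-on-day growth rates - are simple transformations. A sketch with hypothetical numbers:

```python
import pandas as pd
import numpy as np

# Hypothetical cumulative case counts, for illustration only
dates = pd.date_range("2020-03-01", periods=45, freq="D")
cumulative_cases = pd.Series(np.cumsum(np.random.poisson(40000, 45)), index=dates)

new_cases = cumulative_cases.diff()            # cumulative counts -> daily new cases
case_growth = new_cases.pct_change() * 100     # daily new cases -> day-on-day growth rate (%)
```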

By standardizing the units of measurement, both time series can be plotted on the same axis. Here is the new plot:

Redo_junkcharts_ph_trafficgrowthcasegrowth

Fourth, the two growth rates have different reference levels. The Covid-19 growth rate I computed is day-on-day growth. This is appropriate since we don't presume there is a seasonal effect - a claim like "new cases on Mondays are typically larger than new cases on Tuesdays" doesn't seem plausible.

Thanks to this helpful explainer (link), I learned what the data analyst meant by a "normal day". The growth rate of traffic is not day-on-day change. It is the change in traffic relative to the average traffic in the last four weeks on the same day of week. If it's a Monday, the change in traffic is relative to the average traffic of the last four Mondays.

This type of seasonal adjustment is used if there is a strong day-of-week effect. For example, if the website reliably gets higher traffic during weekends than weekdays, then the Saturday traffic may always exceed the Friday traffic; instead of comparing Saturday to the day before, we index Saturday to the previous Saturday, Friday to the previous Friday, and then compare those two values.
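The "versus a normal day" baseline can be computed directly. Here is a minimal pandas sketch of that day-of-week adjustment, using a hypothetical traffic series:

```python
import pandas as pd
import numpy as np

# Hypothetical daily traffic counts, for illustration only
dates = pd.date_range("2020-02-01", periods=90, freq="D")
traffic = pd.Series(1_000_000 + np.random.normal(0, 50_000, 90), index=dates)

# "Normal day" baseline: average of the same weekday over the previous four weeks
baseline = sum(traffic.shift(7 * k) for k in range(1, 5)) / 4
growth_vs_normal = (traffic / baseline - 1) * 100   # % change vs a normal day
```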

***

Let's consider the last chart above, the one where I got rid of the dual axes.

A major problem with trying to establish correlation between two time series is time lag. Most charts like this make a critical and unspoken assumption - that the effect of X on Y is immediate. This chart assumes that the higher the number of Covid-19 cases, the more people stay home that day, and the more people swarm the site that day. Said that way, you might see how ridiculous the assumption is.

What is true of any correlation in the wild is that there is always some amount of time lag, and it is usually hard to know how much.
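One way to get a handle on the lag is to compute the correlation at a range of lags and see where it peaks. A sketch with hypothetical stand-in series:

```python
import pandas as pd
import numpy as np

# Hypothetical smoothed growth-rate series, stand-ins for the real ones
dates = pd.date_range("2020-03-01", periods=45, freq="D")
case_growth = pd.Series(np.random.normal(10, 3, 45), index=dates)
traffic_growth = pd.Series(np.random.normal(15, 5, 45), index=dates)

# Correlate today's traffic growth with case growth from 0 to 14 days earlier
corrs = {lag: traffic_growth.corr(case_growth.shift(lag)) for lag in range(15)}
best_lag = max(corrs, key=corrs.get)   # the lag with the strongest correlation
```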

***

Finally, the chart omits a huge factor driving the growth in traffic. At various times, depending on the country, the website rolled out a free premium service offer. This is the primary reason for the spike around mid-March. How much of the traffic growth is due to the popular marketing campaign, and how much is due to stay-at-home orders - that's the real question.


An exposed seam in the crystal ball of coronavirus recovery

One of the questions being asked by the business community is when the economy will recover and how. The Conference Board has offered their outlook in this new article. (This link takes you to the collection of Covid-19 related graphics. You have to find the right one from the carousel. I can't seem to find the direct link to that page.)

This chart summarizes their viewpoint:

TCB-COVID-19-US-level-of-GDP-1170

They considered three scenarios, with the recovery starting in May, over the summer, or in the fall. In all scenarios, the GDP of the U.S. will contract in 2020 relative to 2019. The sooner the recovery starts, the smaller the decline.

My reaction to the map icon differs from my reaction to the oil-drop icon in the previously discussed chart (link). Here, I think the icon steals too much attention. The way the lines were placed on the map initially made me think the chart was about cross-country travel.

On the other hand, I love the way he did the horizontal axis / time-line. It elegantly tells us which numbers are actual and which numbers are projected, without explicitly saying so.

Tcb_timelineaxis

Also notice how, through the use of color, font size, and bolding, he organizes the layers of detail and conveys which items are more important to read first.

***

Trifectacheckup_image

Rounding out the Trifecta Checkup, I found a seam in the Data.

On the right edge, the number for December 2020 is 100.6, which is 0.6 above the reference level. But this number corresponds to a 1.6% reduction. How so?

This seam exposes a gap between how modelers and decision-makers see the world. Evidently, the projections by the analyst are generated using Q3 2019's GDP as baseline (index=100). I'm guessing the analyst chose that quarter because at the time of analysis, the Q4 data have not reached the final round of revision (which came out at the end of March).

A straight-off-the-report conclusion of the analysis is that the GDP would be just back to Q3 2019 level by December 2020 in the most optimistic scenario. (It's clear to me that the data series has been seasonally adjusted as well so that we can compare any month to any month. Years ago, I wrote this primer to understand seasonal adjustments.)

Decision-makers might push back on that conclusion because the reference level of Q3 2019 seems arbitrary. Instead, what they'd like to know is the year-on-year change in GDP. A small calculation bridges the two numbers.
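Here is an illustrative version of that bridge. The December 2019 index value is an assumption on my part (it is not printed on the chart); I picked a number that roughly reconciles the two published figures.

```python
# Illustrative bridge between the index value and the year-on-year change.
# The December 2019 index is assumed (not shown on the chart).
index_dec_2019 = 102.2   # GDP index, with Q3 2019 = 100 (assumed)
index_dec_2020 = 100.6   # projected value read off the chart

yoy_change = index_dec_2020 / index_dec_2019 - 1
print(f"{yoy_change:.1%}")   # about -1.6%
```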

The decision-makers are satisfied after finding the numbers they care about. They are not curious about how the sausage is made, i.e., how the monthly numbers result in the year-on-year change. So the seam is left on the chart.

 


The why axis

A few weeks ago, I replied to a tweet by someone who was angered by the amount of bad graphics about coronavirus. I take a glass-half-full viewpoint: it's actually heart-warming for dataviz designers to realize that their graphics are being read! When someone critiques your work, it is proof that they cared enough to look at it. Worse is when you publish something and no one reacts to it.

That said, I just wasted half an hour trying to get into the head of the person who made the following:

Fox31_co_newcases edited

Longtime reader Chris P. forwarded this tweet to me, and I saw that Andrew Gelman got sent this one, too.

The chart looks harmless until you check out the vertical axis labels. They are... um... most unusual. The best way to interpret what the designer did is to break the chart up into three components. Like this:

Redo_junkcharts_fox31cocases

The big mystery is why the designer spent the time and energy to make this mischief.

The usual suspect is fake news. The clearest sign of malintent is the huge size of the dots. Each dot spans almost the entirety of the space between gridlines.

But there is almost no fake news here. The overall trend line is intact despite the attempted distortion. The following is a superposition of an unmanipulated line (yellow) on top of the manipulated one:

Redo_junkcharts_fox31cocases2

***

The next guess is incompetence. The evidence against this view is the amount of energy required to execute these changes. In Excel, it takes a lot of work. It's easier to do in R or any programming language that lets you design your own axis.

Even for R coders, the easy part is replicating the design; the hard part is coming up with the concept in the first place!

You can't just stumble onto a design like this. So I am not convinced the designer is an idiot.
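To make the "design your own axis" point concrete, here is a minimal matplotlib sketch of the kind of deliberate work involved: plot ordinary data, then write over a tick label by hand. The data and the written-in number are made up, not the Colorado figures.

```python
import matplotlib.pyplot as plt
import numpy as np

# Made-up daily case counts, purely for illustration
days = np.arange(15)
cases = np.random.poisson(200, 15).cumsum()

fig, ax = plt.subplots()
ax.plot(days, cases, "o-")

# Faking an axis takes deliberate effort: grab the tick labels, then overwrite one
ticks = ax.get_yticks()
labels = [f"{t:.0f}" for t in ticks]
labels[2] = "250"                 # hypothetical fake number, like the chart's written-in labels
ax.set_yticks(ticks)
ax.set_yticklabels(labels)
plt.show()
```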

***

How much work? You have to create three separate charts, with three carefully chosen vertical scales, and then clip, merge, and sew the seam. The weirdest bit is throwing away three of the twelve axis labels and writing in three fake numbers.

Here's the recipe: (if the gif doesn't load automatically, click on it)

Fox31_co_cases_B6

Help me, readers! I'm stumped. Why oh why did someone make this? What is the point?

 

P.S. [4/9/2020] A conversation with Carlos on Andrew's blog reveals another issue. I pointed out that the "Total cases" printed up top was not the sum of the 15 numbers on the chart. There was a gap of 184 cases. Carlos sent me a link showing a day on which the total cases in Colorado was 183 cases. I didn't quite get the point initially. He explained that it's 183 existing cases prior to the start of the period of this chart, plus the new cases during this period, leading to the "Total cases" as of the end of the period of this chart.

So, another mystery solved. This brings up an important point about making effective charts: one way confusion arises is when two things in the visual seem to contradict each other. In most line charts, if there is a line and then a "total", the natural expectation is that the "total" is the sum of the data that make up the line. In this case, that "total" is the total of new cases during the time period depicted. Total new cases isn't the same as total cases from case #1.

It's clearer to say "Total Cases on 3/17 = 183; on 4/1 = 3342".

 


Make your color legend better with one simple rule

The pie chart about COVID-19 worries illustrates why we should follow a basic rule of constructing color legends: order the categories in the way you expect readers to encounter them.

Here is the chart that I discussed the other day, with the data removed since they are not of concern in this post. (link)

Junkcharts_abccovidbiggestworries_sufficiency

First, look at the pie chart. Like me, you probably looked at the orange or the yellow slice first, then moved clockwise around the pie.

Notice that the legend leads with the red square ("Getting It"), which is likely the last item you'll see on the chart.

This is the same chart with the legend re-ordered:

Redo_junkcharts_abcbiggestcovidworries_legend
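In matplotlib, for example, the legend order is whatever order you hand it the handles and labels, so matching the reading order of the pie is a one-line fix. A sketch with hypothetical categories (not the actual survey labels):

```python
import matplotlib.pyplot as plt

# Hypothetical worry categories, listed in the order a reader encounters them on the pie
labels = ["Finances", "Family", "Work", "Getting It"]
shares = [35, 30, 20, 15]

fig, ax = plt.subplots()
wedges, _ = ax.pie(shares, startangle=90, counterclock=False)

# Pass handles and labels in reading order so the legend matches the pie
ax.legend(wedges, labels, loc="center left", bbox_to_anchor=(1, 0.5))
plt.show()
```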

***

Simple charts can be made better if we follow basic rules of construction. With enough practice, these rules become second nature. I cover rules for legends as well as many other rules in this Long Read article titled "The Unspoken Conventions of Data Visualization" (link).


Graphing the economic crisis of coronavirus 2

Last week, I discussed Ray's chart that compares the S&P 500 performance in this crisis against previous crises.

A reminder:

Tcb_stockmarketindices_fourcrises

Another useful feature is the halo around the right edge of the COVID-19 line. This device directs our eyes to where he wants us to look.

In the same series, he made the following for The Conference Board (link):

TCB-COVID-19-impact-oil-prices-640

Two things I learned from this chart:

The oil market takes a much longer time to recover after crises, compared to the S&P. None of these lines reached above 100 in the first 150 days (5 months).

Just like the S&P, the current crisis is most similar in severity to the 2008 Great Recession, only worse: right now, the price collapse in oil is quite a bit steeper than it was in 2008.

***
The drop of oil is going to be contentious. For a Tufte purist, it's one drop too many. It might as well symbolize a tear shed.

The presence of the icon tells me these lines depict the oil market, without my having to read any text. And I approve.