A note to science journal editors: require better visuals

In reviewing a new small-scale study of the Moderna vaccine, I found this chart:

Modernahalfdoses_fig3a

This style of chart is quite common in scientific papers. And it is horrible. It irks me to think that some authors are forced to adopt such a style.

The study's main goal is to compare two half doses to two full doses of the Moderna vaccine. (To understand the science, read the post on my book blog.) The participants were stratified by age group. The vaccine is expected to work better for younger people than for older people. The point of the study isn't to measure the difference by age group, and so the age-group dimension is secondary.

Upon recognizing that, I reduce the number of colors from 4 to 2:

Junkcharts_redo_modernahalfdoses_1

Halving the number of colors loses nothing, and the reader spends less time cross-referencing.

The existence of the Pbo (placebo) and Conv (convalescent plasma) columns on the sides is both unsightly and suboptimal. The "Conv" column serves as a reference level for the amount of antibodies the vaccine stimulates in people. A better way to display a reference level is with a reference line.

Junkcharts_redo_modernahalfdoses_2color

The biggest problem with the chart is the log scale on the vertical axis. This isn't even a log-10 scale but a log-2 scale. (Each tick is a doubling of value.)

Take the first set of columns as an example. The second column is clearly less than twice the height of the first column, and yet 25 is about 3.5 times as large as 7. The third column is also visually less than double the size of the second column, and yet 189 is about 7.5 times as large as 25. The areas (heights) of the columns do not convey the right information about the relative sizes of the underlying data.

Here's an amusing observation. If the chart were drawn on a linear scale, the brown area shaded below would occupy half of the entire plot. And yet not a single data point in the dataset exceeds 250, so the brown area is entirely empty.

Junkcharts_redo_modernahalfdoses_logscale

An effect of a log scale is to compress the larger values of a dataset. That's what you're seeing here.
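To make the compression concrete, here is a minimal sketch (not the study's code) that compares the ratios a reader should perceive with the ratios a log-2 axis actually draws, using the three values read off the first set of columns and assuming the axis baseline sits at 1:

import numpy as np

values = np.array([7, 25, 189])   # antibody levels read off the first set of columns

# On a linear scale, column heights are proportional to the values themselves.
true_ratios = values[1:] / values[:-1]       # roughly 3.6x and 7.6x

# On a log-2 scale (baseline at 1), heights are proportional to log2(value),
# so the apparent ratios shrink dramatically.
heights = np.log2(values)
drawn_ratios = heights[1:] / heights[:-1]    # roughly 1.7x and 1.6x

print(true_ratios)
print(drawn_ratios)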

I now revisualize using dotplots:

Junkcharts_redo_modernahalfdoses_dotplotlinear

The version on the left retains the log scale while the right one (pun intended) reverts to the linear scale.

The biggest effect by far is the spike of antibodies between days 29 and 43 - which is after the second shot is administered. (For Moderna, the second shot is targeted for day 28.) In fact, it is during that window that the level of antibodies went from below the "Conv" level (i.e. the level from natural infection) to far above it.

The log-scale version buries this finding because it squeezes the large numbers on the chart. In addition, it artificially pulls the small numbers toward the "Conv" level. On the right chart, the second dot for 18-54, full doses sits at only half the level of "Conv", but it looks tantalizingly close to the "Conv" level on the left chart.

The authors of the study also claim that there is negligible drop-off by 30 days after the second dose, i.e. between the third and fourth dots in each set. That may be so on the log-scale chart, but on the linear chart, we see a moderate reduction. I don't believe the size of this study supports a stronger conclusion, but the claim of no drop-off is dubious.

The left chart also obscures the age-group differences. It appears as if all four sets show roughly the same pattern. With the linear scale, we notice that the vaccine clearly works better for the younger subgroup. As I discussed on the book blog, no one actually knows what level of antibodies constitutes "protection," and so I can't say whether that age-group difference has practical significance.

***

I recommend using log scales sparingly and carefully. They are a source of much mischief and misadventure.
Handling partial data on graphics

Last week, I posted on the book blog a piece about excess deaths and accelerated deaths (link). That whole piece is about how certain types of analysis have to be executed at particular moments in time. The same analysis done at the wrong time yields the wrong conclusions.

Here is a good example of what I'm talking about. This is a graph of U.S. monthly deaths from Covid-19 during the entire pandemic. The chart is from the COVID Tracking Project, although I pulled it down from my Twitter feed.

Covidtracking_monthlydeaths

There is nothing majorly wrong with the column chart itself (I'd remove the axis labels). But there is a big problem with the impression it leaves. Are we seeing a boomerang of deaths from November to December to January?

Junkcharts_covidtrackingproject_monthlydeaths_1

Not really. This trend appears only because the chart was generated on January 12. The last column contains just 12 days of data while the prior columns contain 30 or 31 days.

Junkcharts_covidtrackingproject_monthlydeaths_2

The Trifecta Checkup picks up this problem. What the visual is showing isn't what the data are saying. I'd call this a Type D chart.

***

How to fix this?

One solution is to present partial data for all the other columns, so that the readers can compare the January column to the others.

Junkcharts_covidtrackingmonthydeaths_first12days

One critique of this is potential seasonality. The first 38% (12 out of 31 days) of a month may not be comparable across months. A further seasonal adjustment would make this better - if we decide the benefits outweigh the complexity.
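Here's a small sketch (with placeholder column names and daily counts, not the COVID Tracking Project's actual data) of how that truncation might be done in pandas: keep only the first 12 days of every month before totaling.

import pandas as pd

# Placeholder daily series; swap in the real daily death counts.
daily = pd.DataFrame({"date": pd.date_range("2020-03-01", "2021-01-12", freq="D")})
daily["deaths"] = 1000

# Keep only the first 12 days of each month so every column covers the same window.
first_12 = daily[daily["date"].dt.day <= 12]
comparable = first_12.groupby(first_12["date"].dt.to_period("M"))["deaths"].sum()
print(comparable)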

Another solution is to project the full-month tally.

Junkcharts_covidtrackingmonthydeaths_projected

The critique here is the accuracy of the projection.
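The simplest such projection just scales the partial tally by the number of days in the month; a minimal sketch (with a hypothetical partial count) follows.

import calendar

def project_full_month(partial_deaths, days_observed, year, month):
    """Linearly extrapolate a partial-month tally to a full-month estimate."""
    days_in_month = calendar.monthrange(year, month)[1]
    return partial_deaths * days_in_month / days_observed

# e.g. a hypothetical 30,000 deaths recorded in the first 12 days of January 2021
print(project_full_month(30_000, days_observed=12, year=2021, month=1))  # 77500.0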

But the point is that not making the adjustment would be worse.

These are the top posts of 2020

It's always very interesting as a writer to look back at a year's worth of posts and find out which ones were most popular with my readers.

Here are the top posts on Junk Charts from 2020:

How to read this chart about coronavirus risk

This post about a New York Times scatter plot dates from February, a time when many Americans were debating whether Covid-19 was just the flu.

Proportions and rates: we are no dupes

This post about an ArsTechnica chart on the effects of Covid-19 by age is an example of designing the visual to reflect the structure of the data.

When the pie chart is more complex than the data

This post shows a 3D pie chart which is worse than a 2D pie chart.

Twitter people upset with that Covid symptoms diagram

This post discusses some complicated graphics designed to illustrate complicated datasets on Covid-19 symptoms.

Cornell must remove the logs before it reopens in the fall

This post is another warning to think twice before you use log scales.

What is the price of objectivity?

This post turns an "objective" data visualization into a piece of visual story-telling.

The snake pit chart is the best election graphic ever

This post introduces my favorite U.S. presidential election graphic, designed by the FiveThirtyEight team.

***

Here is a list of posts that deserve more attention:

Locating the political center

An example of bringing readers as close to the insights as possible

Visualizing change over time

An example of designing data visualization to reflect the structure of multivariate data

Bloomberg made me digest these graphics slowly

An example of simple and thoughtful graphics

The hidden bad assumption behind most dual-axis time-series charts

Read this before you make a dual-axis chart

Pie chart conventions

Read this before you make a pie chart

***
Looking forward to bringing you more content in 2021!

Happy new year.


This holiday retailers hope it will snow dollars

According to the Conference Board, the pandemic will not deter U.S. consumers from emptying their wallets this holiday season. Here's a chart that shows their expectation (link):

COVID-19-Holiday-Spend-847

 

A few little things make this chart work:

The "More" category is placed on the left, as English-speaking countries tend to be read Left-to-Right, and it is also given the deepest green, drawing our attention.

Only the "More" segments have data labels. I'd have omitted the decimals. I suspect they are added because financial analysts may be multiplying these percentages to yield dollar amounts, in which case the extra precision helps.

The categories are ordered by decreasing propensity to spend more this year than last year. (The business community has an optimism bias.)

The choice of three shades of one color instead of three different colors keeps the chart clean.

***

The use of snowflakes will surely infuriate hardcore Tufte fans, although I like the festive note they add to the presentation. The large snowflake isn't randomly positioned; it is placed exactly where it interferes least with the bar chart.

Visualizing change over time: case study via ArsTechnica

ArsTechnica published the following chart in its article titled "Grim new analyses spotlight just how hard the U.S. is failing in pandemic" (link).

Artechnica-covid-mortality

There are some very good things about this chart, so let me start there.

In a Trifecta Checkup, I'd give the Q corner high marks. The question is clear: how has the U.S. performed relative to other countries? In particular, the chart gives a nuanced answer to this question. The designer realizes that there are phases in the pandemic, so the same question is asked three times: how has the U.S. performed relative to other countries since June, since May, and since the start of the pandemic?

In the D corner, this chart also deserves a high score. It selects a reasonable measure of mortality, which is deaths per population. It simplifies cognition by creating three grades of mortality rates per 100,000: Grade A is below 5 deaths, Grade B between 5 and 25, and Grade C above 25.
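That grading step is simple to reproduce; here's a sketch with made-up rates (the bin edges are the only part taken from the chart):

import pandas as pd

# Hypothetical mortality rates, in deaths per 100,000 population.
rates = pd.Series({"Country A": 3.2, "Country B": 17.5, "Country C": 62.0})

# Bin into the three grades used by the chart: below 5, 5 to 25, above 25.
grades = pd.cut(rates, bins=[0, 5, 25, float("inf")], labels=["A", "B", "C"])
print(grades)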

A small deduction for not including the source of the data (the article states it's from a JAMA article). If any reader notices problems with the underlying data or calculations, please leave a comment.

***

So far so good. And yet, you might feel like I'm over-praising a chart that feels distinctly average. Not terrible, not great.

The reason for our ambivalence is the V corner. This is what I call a Type V chart. The visual design isn't doing justice to the underlying question and data analysis.

The grouped bar chart isn't effective here because the orange bars dominate our vision. It's easy to see how each country performed over the course of the pandemic but it's hard to learn how countries compare to each other in different periods.

How are the countries ordered? It would seem like the orange bars may be the sorting variable but this interpretation fails in the third group of countries.

The designer apparently made the decision to place the U.S. at the bottom (i.e. the worst of the league table). As I will show later, this placement is warranted, but it cannot be justified by the orange bars alone. The U.S. is worse in both the blue and purple bars but not in the orange.

This suggests that the real interest is in the change in rates (or ranks) over time. In the following makeover, I used the Bumps chart as the basis, since its chief use is showing how rankings change over time.
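For readers unfamiliar with the form, here is a bare-bones matplotlib sketch of a Bumps-style chart, using made-up ranks rather than the actual mortality data; my makeover follows below.

import matplotlib.pyplot as plt

periods = ["Full period", "Since May", "Since June"]
ranks = {"Country A": [1, 3, 4], "Country B": [2, 2, 2], "Country C": [3, 1, 1]}  # made up

# One line per country; the y-axis is the rank in each period.
for country, r in ranks.items():
    plt.plot(periods, r, marker="o", label=country)

plt.gca().invert_yaxis()   # rank 1 sits at the top of the chart
plt.yticks([1, 2, 3])
plt.legend()
plt.show()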

Redo_junkcharts_at_coviddeathstable_1

 

Better clarity can often be gained by subtraction:

Redo_junkcharts_at_coviddeathstable_2


Avoid concentric circles

A Twitter follower sent me this chart by way of Munich:

Msc_staggereddonut

The logo of the Munich Security Conference (MSC) is quite cute. It looks like an ear. Perhaps that inspired this, em, staggered donut chart.

I like to straighten curves out so the donut chart becomes a bar chart:

Redo_junkcharts_msc_germanallies_distortion

The blue and gray bars mimic the lengths of the arcs in the donut chart. The yellow bars show the relative size of the underlying data. You can see that three of the four arcs under-represent the size of the data.

Why is that so? It's due to the staggering. Inner circles have smaller circumferences than outer circles. The designer keeps the angles the same, so the inner arc lengths are artificially shortened.
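A quick back-of-the-envelope calculation (with made-up radii and a made-up share) shows the size of the distortion: with the angle held fixed, arc length scales with the radius, so an inner arc under-represents the same data in proportion to how much smaller its radius is.

import math

share = 0.30                       # a segment representing 30% of the total
angle = share * 2 * math.pi        # the angle is what encodes the data

outer_radius, inner_radius = 1.0, 0.6
outer_arc = outer_radius * angle   # arc length = radius * angle
inner_arc = inner_radius * angle

print(outer_arc / inner_arc)       # ~1.67: same data, 67% longer arc on the outer ring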

Junkcharts_redo_munichgermanallies_donuts

***

The donut chart is just a pie chart with a hole punched in the middle. For both pie charts and donut charts, the data are encoded in the angles at the center of the circle. Under normal circumstances, pie charts can also be read by comparing sector areas, and donut charts using arc lengths, as those are proportional to the angles.

The area and arc interpretation fails when the designer alters the radii of the sections. Look at the following pair of pie charts, produced by filling the hole in the above donuts:

Junkcharts_redo_munichgermanallies_pies

The staggered pie chart distorts the data if the reader compares areas but not so if the reader compares angles at the center. The pie chart can be read both ways so long as the designer does not alter the radii.

 


Bloomberg made me digest these graphics slowly

Ask the experts to name the success metric of good data visualization, and you will receive a dozen answers. The field doesn't have an all-encompassing metric. A useful reference is Andrew Gelman and Antony Unwin (2012), in which they discussed the tradeoff between beautiful and informative, which derives from the familiar tension between art and science.

For a while now, I've been intrigued by metrics that measure "effort". Some years ago, I described the concept of a "return on effort" in this post. Such a metric can be constructed like the dominant financial metric of return on investment. The investment here is an investment of time and attention. I strongly believe that if the consumer judges a data visualization to be compelling, engaging or well constructed, s/he will expend energy to devour it.

Imagine grub you discard after the first bite, compared to the delicious food experienced slowly, savoring every last bit.

Bloomberg_ambridge_sm

I'm writing this post while enjoying the September issue of Bloomberg Businessweek, which focuses on the upcoming U.S. Presidential election. There are various graphics infused into the pages of the magazine. Many of these graphics operate at a level of complexity above what typically shows up in magazines, and yet I spent energy learning to understand them. This response, I believe, is what visual designers should aim for.

***

Today, I discuss one example of these graphics, shown on the right. You might be shocked by the throwback style of these graphics. They look like they arrived from decades ago!

Grayscale, simple forms, typewriter font, all caps. Have I gone crazy?

The article argues that a town like Ambridge in Beaver County, Pennsylvania may be pivotal in the November election. The set of graphics provides relevant data to understand this argument.

It's evidence that data visualization does not need whiz-bang modern wizardry to excel.

Let me focus on the boxy charts from the top of the column. These:

Bloomberg_ambridge_topboxes

These charts solve a headache with voting margin data in the U.S. We have two dominant political parties, so in any given election, the vote-share data split into three buckets: Democratic, Republican, and a catch-all category that includes third parties, write-ins, and none of the above. The third category rarely exceeds 5 percent. A generic pie chart representation looks like this:

Redo_junkcharts_bloombergambridgebox_pies

Stacked bars have this look:

Redo_junkcharts_bloombergambridgebox_bars

Using my Trifecta framework (link), the top point is articulating the question. The primary issue here is the voting margin between the winner and the runner-up, the loser in what is typically a two-horse race. There are two sub-questions: the vote-share difference between the top two finishers, and the share of votes effectively removed from the pot by the remaining candidates.

Now, take another look at the unusual chart form used by Bloomberg:

Bloomberg_ambridge_topboxes1

The catch-all vote share sits at the bottom while the two major parties split up the top section. This design demonstrates a keen understanding of the context. Consider the typical outcome, in which the top two finishers are from the two major parties. When answering the first sub-question, we can choose the raw vote shares, or the normalized vote shares. Normalizing shifts the base from all candidates to the top two candidates.

The Bloomberg chart addresses both scales. The normalized vote shares can be read directly by focusing only on the top section. In an even two-horse race, the top section is split in half - and this holds true regardless of the size of the bottom section.
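A tiny numeric sketch (with hypothetical vote shares) makes the two scales explicit:

# Hypothetical vote shares for the two major parties and the catch-all bucket.
dem, rep, other = 0.47, 0.49, 0.04

raw = {"dem": dem, "rep": rep, "other": other}             # shares of all votes
top_two = dem + rep
normalized = {"dem": dem / top_two, "rep": rep / top_two}  # shares of the top two only

print(raw)          # {'dem': 0.47, 'rep': 0.49, 'other': 0.04}
print(normalized)   # {'dem': 0.489..., 'rep': 0.510...}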

This is a simple chart that packs a punch.

 


Making better pie charts if you must

I saw this chart on an NYU marketing Twitter account:

LATAMstartupCEO_covidimpact

The graphical design is not easy on the eyes. It's hard to read for several reasons.

The headline sounds like a subject line from an email.

The subheaders are long, and differ only by a single word.

Even if one prefers pie charts, they can be improved by following a few guidelines.

First, start the first sector at the 12 o'clock position. Like this:

Redo_junkcharts_latamceo_orientation

The survey uses a 5-point scale from "Very Good" to "Very Bad". Instead of using five different colors, it's better to use two extreme colors and shading. Like this:

Redo_junkcharts_latamceo_color

I also try hard to keep all text horizontal.

Redo_junkcharts_latamceo_labels
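For anyone who wants to apply these tweaks in code, here's a minimal matplotlib sketch (with made-up survey percentages and color choices): first sector at 12 o'clock, sectors drawn clockwise, and two extreme hues with shading instead of five unrelated colors.

import matplotlib.pyplot as plt

labels = ["Very good", "Good", "Neutral", "Bad", "Very bad"]
shares = [10, 25, 30, 25, 10]        # placeholder percentages
colors = ["#1a6faf", "#8ab8d8", "#d9d9d9", "#f4a582", "#ca0020"]  # blue-to-red shading

plt.pie(shares, labels=labels, colors=colors,
        startangle=90,        # first sector starts at the 12 o'clock position
        counterclock=False)   # sectors proceed clockwise
plt.show()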

For those who prefer not to use pie charts, a side-by-side bar chart works well.

Redo_junkcharts_latamceo_bars

In my article for DataJournalism.com, I outlined "unspoken rules" for making various charts, including pie charts.


Election visuals: three views of FiveThirtyEight's probabilistic forecasts

As anyone who is familiar with Nate Silver's forecasting of U.S. presidential elections knows, he runs a simulation that explores the space of possible scenarios. The polls that provide a baseline forecast make certain assumptions, such as who's a likely voter. Nate's model unshackles these assumptions from the polling data, exploring how the outcomes vary as these assumptions shift.

In the most recent simulation, his computer explores 40,000 scenarios, each of which predicts a split of the electoral vote, from which the winner of the election can be determined. The model's outcome is usually summarized by a winning probability, which is just the proportion of scenarios under which one candidate wins.
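To make that summary concrete, here is a toy sketch (definitely not 538's model) that generates a batch of hypothetical electoral-vote totals and reads off the winning probability as the share of scenarios clearing 270 votes:

import numpy as np

rng = np.random.default_rng(0)
# Hypothetical simulation output: 40,000 electoral-vote totals for one candidate.
ev = rng.normal(loc=330, scale=60, size=40_000).round().clip(0, 538)

win_probability = (ev >= 270).mean()   # 270 of 538 electoral votes wins
print(win_probability)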

This type of forecasting was responsible for the infamous meltdown in 2016, when most of these models - Nate's being an exception - issued extremely confident predictions that Hillary Clinton would win with 95% or higher probability. Essentially, the probability distribution collapsed to a point. This is analogous to an extremely narrow confidence band, indicating almost zero uncertainty about the outcome. It was as if almost all of the 40,000 scenarios predicted Clinton to be the winner.

The 538 data team has come up with various ways of visualizing the outputs of the model (link). The entire post is worth reading. Here, I'll highlight the most scientific and direct visual representation, which is the third display.

538_pdf_pair

We start by looking at the bottom of the two charts, showing the predicted electoral votes won by Democratic challenger Joe Biden in each of the 40,000 scenarios. Our attention is directed to the thick line that gives the relative chance of each possible electoral-vote tally for Biden. This line is a smoothed summary of the columns in the background, which show the number of times the simulation produces each electoral-vote count.

The highlighted, right side of the chart recounts scenarios in which Biden becomes President, that is to say, he wins at least 270 electoral votes (out of 538, doh). The faded, left side represents scenarios in which Biden is defeated and Trump wins a second term.

The reason I focused on the bottom chart is that the top chart is merely a mirror image of it. Just reflect the bottom chart around the vertical line at 270 electoral votes, change the color scheme to red, and swap the annotations related to Trump and Biden, and you get the other chart. This is because the narrative has excluded third-party and write-in candidates, leaving us with a zero-sum situation.

Alternatively, one can jam both charts into one, while supplying extra labels, like this:

Redo_junkcharts_538forecastpdf_1

I prefer the denser single chart because my mind wanders away searching for extra meaning when chart elements are mirrored.

One advantage of the mirrored presentation is that the probability profiles of the potential Trump or Biden wins can be directly compared. We learn that Trump's winning margins are smaller, rarely above 150, and never above 250.

This comparison is made easier by flipping the left side of the chart onto the right side:

Redo_junkcharts_538forecastpdf_2

Those are three different visualizations using the same chart form. I'd have to run a poll to figure out which is the best. What's your opinion?


Putting vaccine trials in boxes

Bloomberg Businessweek has a special edition about vaccines, and I found this chart in the print edition:

Bloombergbw_vaccinetrials_sm

The chart's got a lot of white space. Its structure is a series of simple "treemaps," one for each type of vaccine. Though simple, such a chart burns a few brain cells.

Here, I've extracted the largest block, which corresponds to vaccines that work with the virus's RNA/DNA. I applied a self-sufficiency test, removing the data from the boxes. 

Redo_junkcharts_bloombergbw_vaccinetrials_0

What proportion of these projects have moved from pre-clinical to Phase I? To answer this question, we have to understand the relative areas of the boxes, since that's how the data are encoded. How many yellow boxes can fit into the gray box?

It's not intuitive. We'd need a ruler to do this task properly.

Then, we learn that the gray box is exactly 8 times the size of the yellow box (72 projects are pre-clinical while 9 are in Phase I). We can cram eight yellows into the gray box. Imagine doing that, and it's pretty clear the visual elements fail to convey the meaning of the data.

Self-sufficiency is the idea that a data graphic should not rely on printed data to convey its meaning; the visual elements of a data graphic should bear much of the burden. Otherwise, use a data table. To test for self-sufficiency, cover up the printed data and see if the chart still works.

***

A key decision for the designer is the relative importance of (a) the number of projects reaching Phase III, versus (b) the number of projects utilizing specific vaccine strategies.

This next chart emphasizes the clinical phases:

Redo_junkcharts_bloombergbw_vaccinetrials_2

 

Contrast this with the version shown in the online edition of Bloomberg (link), which emphasizes the vaccine strategies.

Bloombergbwonline_vaccinetrials

If any reader can figure out the logic of the ordering of the vaccine strategies, please leave a comment below.