Designs of two variables: map, dot plot, line chart, table

The New York Times found evidence that the richest segments of New Yorkers, presumably those with second or multiple homes, have exited the Big Apple during the early months of the pandemic. The article (link) is amply assisted by a variety of data graphics.

The first few charts represent different attempts to express the headline message. Their appearance in the same article allows us to assess the relative merits of different chart forms.

First up is the always-popular map.

Nytimes_newyorkersleft_overallmap

The advantage of a map is its ease of comprehension. We can immediately see which neighborhoods experienced the greatest exodus. Clearly, Manhattan has cleared out a lot more than the outer boroughs.

The limitation of the map is also in view. With the color gradient dedicated to the proportions of residents gone on May 1st, there isn't room to express which neighborhoods are richer. We have to rely on outside knowledge to make the correlation ourselves.

The second attempt is a dot plot.

Nytimes_newyorksleft_percentathome

We may have to take a moment to digest the horizontal axis. It's not time moving left to right but income percentiles. The poorest neighborhoods are to the left and the richest to the right. I'm assuming that these percentiles describe the distribution of median incomes in neighborhoods. Typically, when we see income percentiles, they are based on households, regardless of neighborhoods. (The former are equal-sized segments, unlike the latter.)

This data graphic has the reverse features of the map. It does a great job correlating the drop in proportion of residents at home with the income distribution but it does not convey any spatial information. The message is clear: The residents in the top 10% of New York neighborhoods are much more likely to have left town.

In the following chart, I attempted a different labeling of both axes. It removes the need for readers to mentally flip "being home" into "not being home", and "90th percentile" into "top 10%".

Redo_nyt_newyorkerslefttown
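For what it's worth, the relabeling is just a pair of simple transformations of the underlying data. Here is a minimal sketch, with hypothetical values and column names (not the Times's actual data):

```python
import pandas as pd

# Hypothetical input: one row per neighborhood income percentile, with the
# percent of residents still at home on May 1 (placeholder values).
df = pd.DataFrame({"income_percentile": [20, 80, 90, 95, 99],
                   "pct_at_home": [95, 94, 90, 85, 74]})

# Flip both scales so readers need not do the reversal in their heads:
# "percent at home" becomes "percent who left town", and the
# "90th percentile" becomes the "top 10% of neighborhoods".
df["pct_left_town"] = 100 - df["pct_at_home"]
df["top_x_pct_richest"] = 100 - df["income_percentile"]
print(df)
```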

The third attempt to convey the income--exit relationship is the most successful in my mind. This is a line chart, with time on the horizontal axis.

Nyt_newyorkersleft_percenthomebyincome

The addition of lines relegates the dots to the background. The lines show the trend more clearly. If directly translated from the dot plot, this line chart should have 100 lines, one for each percentile. However, the closeness of the top two lines suggests that no meaningful difference in behavior exists between the 20th and 80th percentiles. This can be conveyed to readers through a short note. Instead of displaying all 100 percentiles, the line chart selectively includes only the 99th, 95th, 90th, 80th and 20th percentiles. This is a design choice that adds by subtraction.
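As a sketch of that selective display, assuming a table with one smoothed percent-at-home series per income percentile (placeholder data, not the Times's), only a handful of percentile lines are drawn:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical data: percent of residents at home, one smoothed series per
# neighborhood income percentile (placeholder values, not the actual data).
dates = pd.date_range("2020-02-01", "2020-05-01", freq="D")
t = np.arange(len(dates))
exit_ramp = np.clip((t - 45) / 21, 0, 1)        # exit concentrated in late March
series = {p: 95 - 20 * (p / 99) ** 8 * exit_ramp for p in range(1, 100)}
df = pd.DataFrame(series, index=dates)

# Design choice: plot only a few percentiles instead of all 100.
selected = [20, 80, 90, 95, 99]
fig, ax = plt.subplots()
for p in selected:
    ax.plot(df.index, df[p], label=f"{p}th percentile")
ax.set_ylabel("Percent of residents at home")
ax.legend()
plt.show()
```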

Along the time axis, the line chart provides more granularity than either the map or the dot plot. The exit occurred roughly over the last two weeks of March and the first week of April. The start coincided with New York's stay-at-home advisory.

This third chart is a statistical graphic. It does not bring out the raw data but features aggregated and smoothed data designed to reveal a key message.

I encourage you to also study the annotated table later in the article. It shows the power of a well-designed table.

[P.S. 6/4/2020. On the book blog, I have just published a post about the underlying surveillance data for this type of analysis.]


How the pandemic affected employment of men and women

In the last post, I looked at the overall employment situation in the U.S. Here is the trend of the "official" unemployment rate since 1990.

Junkcharts_kfung_unemployment_apr20

I was talking about the missing 100 million. These are people who are neither employed nor unemployed in the eyes of the Bureau of Labor Statistics (BLS). They are simply unrepresented in the numbers shown in the chart above.

This group is visualized in my scatter plot as "not in labor force", as a percent of the employment-age population. The horizontal axis of this scatter plot shows the proportion of employed people who hold part-time jobs. Anyone who worked at least one hour during the month is counted as employed, even if only part-time.

***

Today, I visualize the differences between men and women.

The first scatter plot shows the situation for men:

Junkcharts_unemployment_scatter_men

This plot reveals a long-term structural problem for the U.S. economy. Regardless of the overall economic health, more and more men have been declared not in the labor force each year. Between 2007, the start of the Great Recession, and 2019, the proportion went up from 27% to 31%; the pandemic has pushed it to almost 34%. As mentioned in the last post, this sharp rise in April raises concern that the criteria for "not in labor force" capture a lot of people who actually want a job, and therefore should be counted as part of the labor force but unemployed.

Also, as seen in the last post, the severe drop in part-time workers is unprecedented for a period of economic hardship. As dots turn from blue to red, they typically move right, meaning more part-time workers. Since the pandemic, among those still employed, the proportion holding full-time jobs has paradoxically exploded.

***

The second scatter plot shows the situation with women:

Junkcharts_unemployment_scatter_women

Women have always faced a tougher job market. If they are employed, they are more likely to be holding part-time jobs relative to employed men; and a significantly larger proportion of women are not in the labor force. Between 1990 and 2001, more women entered the labor force. Just as with men, the Great Recession resulted in a marked jump in the proportion out of the labor force. Since 2014, a positive trend had emerged, now interrupted by the pandemic, which has pushed both metrics to levels never seen before.

The same story persists: the sharp rise in women "not in labor force" exposes a problem with this statistic - it apparently includes people who do want to work, contrary to its intent. In addition, unlike the pattern of the last 30 years, the severe economic crisis is coupled with a shift toward full-time employment, indicating that part-time jobs were disappearing much faster than full-time jobs.


The missing 100 million: how the pandemic reveals the fallacy of not in labor force

Last Friday, the U.S. published the long-feared employment situation report. It should come as no surprise to anyone: U.S. businesses were quick to lay off employees once much of the economy was shut down to abate the spread of the coronavirus.

Numbersense_cover

I've been following employment statistics for a while. Chapter 6 of Numbersense (link) addresses the statistical aspects of how the unemployment rate is computed. The title of the chapter is "Are they new jobs when no one can apply?" What you learn is that the final number being published starts off as survey tallies, which then undergo a variety of statistical adjustments.

One such adjustment - which ought to be controversial - results in the disappearance of 100 million Americans. I mean that they are invisible to the Bureau of Labor Statistics (BLS), considered neither employed nor unemployed. You don't hear about them because the media report the "headline" unemployment rate, which excludes these people. They are officially designated "not in the labor force". I'll come back to this topic later in the post.

***

Last year, I used a pair of charts to visualize the unemployment statistics. I have updated the charts to include all of 2019 and 2020 up to April, the just-released numbers.

The first chart shows the trend in the official unemployment rate ("U3") from 1990 to present. It's color-coded so that the periods of high unemployment are red, and the periods of low unemployment are blue. This color code will come in handy for the next chart.

Junkcharts_kfung_unemployment_apr20

The time series is smoothed. However, I had to exclude the April 2020 outlier from the smoother.
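A minimal sketch of how that exclusion might be handled, assuming a monthly unemployment-rate series and a lowess smoother (the data below are placeholders, and my actual smoothing choices may differ):

```python
import numpy as np
import pandas as pd
from statsmodels.nonparametric.smoothers_lowess import lowess

# Placeholder monthly unemployment-rate series ending in April 2020.
dates = pd.date_range("1990-01-01", "2020-04-01", freq="MS")
rate = pd.Series(5 + np.random.default_rng(1).normal(0, 0.3, len(dates)), index=dates)
rate.iloc[-1] = 14.7   # the April 2020 spike

# Fit the smoother on every point except the April 2020 outlier...
keep = rate.index != "2020-04-01"
x = np.arange(len(rate))
smoothed = lowess(rate[keep].to_numpy(), x[keep], frac=0.1, return_sorted=False)

# ...then plot the raw series (including April 2020) alongside the smoothed curve.
```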

The next plot, a scatter plot, highlights two of the more debatable definitions used by the BLS. On the horizontal axis, I plot the proportion of employed people who have part-time jobs. People only need to have worked one hour in a month to be counted as employed. On the vertical axis, I plot the proportion of the population who are labeled "not in labor force". These are people who are not employed and not counted in the unemployment rate.

Junkcharts_kfung_unemployment_apr20_2
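For readers who want to reproduce this kind of design, here is a rough sketch of the scatter plot, with placeholder numbers standing in for the actual BLS series (the column names are mine, not BLS's):

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Placeholder monthly data standing in for the derived BLS series.
rng = np.random.default_rng(2)
n = 360
df = pd.DataFrame({
    "pct_part_time": 17 + rng.normal(0, 1, n),            # part-timers as % of the employed
    "pct_not_in_labor_force": 33 + rng.normal(0, 2, n),    # % of population 16+ not in labor force
    "unemployment_rate": 5 + rng.normal(0, 1.5, n),        # drives the color coding only
})

fig, ax = plt.subplots()
sc = ax.scatter(df["pct_part_time"], df["pct_not_in_labor_force"],
                c=df["unemployment_rate"], cmap="coolwarm")   # blue = low, red = high
ax.set_xlabel("Part-time workers as % of employed")
ax.set_ylabel("Not in labor force as % of population 16+")
fig.colorbar(sc, label="Unemployment rate (%)")
plt.show()
```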

The value of data visualization is its ability to reveal insights about the data. I'm happy to report that this design succeeded.

Previously, we learned that (a) part-timers as a proportion of employment tend to increase during periods of worsening unemployment (red dots moving right) while decreasing during periods of improving employment (blue dots moving left); and (b) despite the overall unemployment rate being about the same in 2007 and 2017, the employment situation was vastly different in the sense that the labor force shrank significantly during the recession and never returned to normal. These two insights are still found at the bottom right corner of the chart. The 2019 situation did not differ much from 2018.

What is the effect of the current Covid-19 pandemic?

On both dimensions, we have broken records since 1990. The proportion of people designated not in labor force was already the worst in three decades before the pandemic, and now it has almost reached 40 percent of the population!

Remember these people are invisible to the media, neither employed nor unemployed. Back in February 2020, with the unemployment rate at around 4 percent, it's absolutely not the case that 96 percent of the employment-age population was employed. The number of employed Americans was just under 160 million. The population 16 years and older at the time was 260 million.

Who are these 100 million people? BLS says all but 2 million of these are people who "do not want a job". Some of them are retired. There are about 50 million Americans above 65 years old, although 25 percent of them are still in the labor force, so only about 38 million of them are "not in labor force," according to this Census report.

It would seem that the majority of these people don't want to work, are not paid enough to work, etc. Since part-time workers are counted as employed, with as little as one working hour per month, these are not the gig workers, not Uber/Lyft drivers, and not college students who have work-study or part-time jobs.

This category has long been suspect, and what happened in April isn't going to help build its case. There is no reason why the "not in labor force" group should spike immediately as a result of the pandemic. It's not plausible to argue that people who lost their jobs in the last few weeks suddenly turned into people who "do not want a job". I think this spike is solid evidence that the unemployed have been hiding inside the not in labor force number.

The unemployment rate has under-reported unemployment because many of the unemployed have been taken out of the labor force based on BLS criteria. The recovery of jobs since the Great Recession is partially nullified since the jump in "not in labor force" never returned to the prior level.

***

The other dimension, part-time employment, also showed a striking divergence from the past behavior. Typically, when the unemployment rate deteriorates, the proportion of employed people who have part-time jobs increases. However, in the current situation, not only is that not happening, but the proportion of part-timers plunged to a level not seen in the last 30 years.

This suggests that employers are getting rid of their part-time work force first.


Reviewing the charts in the Oxford Covid-19 study

On my sister (book) blog, I published a mega-post that examines the Oxford study that was cited two weeks ago as a counterpoint to the "doomsday" Imperial College model. These studies bring attention to the art of statistical modeling, and the six posts in that series together form a primer; you don't need math to get a feel for it.

One aspect that didn't make it to the mega-post is the data visualization. Sad to say, the charts in the Oxford study (link) are uniformly terrible. Figure 3 is typical:

Oxford_covidmodel_fig3

There are numerous design decisions that frustrate readers.

a) The graphic contains two charts, one on top of the other. The left axis extends floor-to-ceiling, giving the false impression that it is relevant to both charts. In fact, the graphic uses dual axes. The bottom chart references the axis shown in the bottom right corner; the left axis is meaningless. The two charts should be drawn separately.

For those who have not read the mega-post about the Oxford models, let me give a brief description of what these charts are saying. The four colors refer to four different models - these models have the same structure but different settings. The top chart shows the proportion of the population that is still susceptible to infection by a certain date. In these models, no one can get re-infected, and so you see downward curves. The bottom chart displays the growth in deaths due to Covid-19. The first death in the UK was reported on March 5.  The black dots are the official fatalities.

b) The designer allocates two-thirds of the space to the top chart, which has a much simpler message. This causes the bottom chart to be compressed beyond cognition.

c) The top chart contains just five lines, smooth curves of the same shape but different slopes. The designer chose to use thick colored lines with black outlines. As a result, nothing precise can be read from the chart. When does the yellow line start dipping? When do the two orange lines start to separate?

d) The top chart should have included margins of error. These models are very imprecise due to the sparsity of data.

e) The bottom chart should be rejected by peer reviewers. We are supposed to judge how well each of the five models fits the cumulative death counts. But three design decisions conspire to prevent us from getting the answer: (i) the vertical axis is severely compressed by tucking this chart underneath the top chart (ii) the vertical axis uses a log scale which compresses large values and (iii) the larger-than-life dots.

As I demonstrated in this post, also from the sister blog, many models - especially those assuming an exponential growth rate - have poor fits after the first few days. Charting in log scale hides the degree of error.

f) There is a third chart squeezed into the same canvas. Notice the four little overlapping hills located around Feb 1. These hills are probability distributions, which are presented without an appropriate vertical axis. Each hill represents a particular model's estimate of the date on which the novel coronavirus entered the UK. But that date is unknowable. So the model expresses this uncertainty using a probability distribution. The "peak" of the distribution is the most likely date. The spread of the hill gives the range of plausible dates, and the height at a given date indicates the chance that that is the date of introduction. The missing axis is a probability scale, which is neither the left nor the right axis.

***

The bottom chart shows up in a slightly different form as Figure 1(A).

Oxford_covidmodels_Fig1A

Here, the green, gray (blocked) and red thick lines correspond to the yellow/orange/red diamonds in Figure 3. The thin green and red lines show the margins of error I referred to above (these lines are not explicitly explained in the chart annotation). The actual counts are shown as white rather than black diamonds.

Again, the thick lines and big diamonds conspire to swamp the gaps between model fit and actual data. Again, notice the use of a log scale. This means that the same amount of gap signifies much bigger errors as time moves to the right.

When using the log scale, we should label it using the original units. With a base 10 logarithm, the axis should have labels 1, 10, 100, 1000 instead of 0, 1, 2, 3. (This explains my previous point - why small gaps between a model line and a diamond can mean a big error as the counts go up.)
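A minimal sketch of this labeling, assuming matplotlib and placeholder counts (not the study's data):

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder cumulative death counts growing roughly exponentially.
days = np.arange(30)
deaths = np.maximum(1, np.round(np.exp(0.25 * days)))

fig, ax = plt.subplots()
ax.plot(days, deaths)
ax.set_yscale("log")
# Label the axis in original units (1, 10, 100, 1,000), not in log units (0, 1, 2, 3).
ax.set_yticks([1, 10, 100, 1000])
ax.set_yticklabels(["1", "10", "100", "1,000"])
ax.set_ylabel("Cumulative deaths (log scale)")
# On this scale, equal vertical gaps are equal ratios: a small visual gap between a
# model line and a data point near 1,000 is a far larger absolute error than the
# same gap near 100.
plt.show()
```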

Also notice how the line of white diamonds makes it impossible to see what the models are doing prior to March 5, the date of the first reported death. The models apparently start showing fatalities prior to March 5. This is a key part of their conclusion - the Oxford team concluded that the coronavirus had been circulating in the U.K. even before the first infection was reported. The data visualization should therefore bring out the difference in timing.

I hope by the time the preprint is revised, the authors will have improved the data visualization.


Comparing chance of death of coronavirus and flu

The COVID-19 charts are proving one thing. When the topic of a dataviz is timely and impactful, readers will study the graphics and ask questions. I've been sent some of these charts lately, and will be featuring them here.

A former student saw this chart from Business Insider (link) and didn't like it.

Businesinsider_coronavirus_flu_compare

My initial reaction was generally positive. It's clear the chart addresses a comparison between the death rates of flu and COVID-19, an important question of the moment. The side-by-side panel is effective at allowing such a comparison. The column charts look decent, and there aren't excessive gridlines.

Sure, one sees a few simple design fixes, like removing the vertical axis altogether (since the entire dataset has already been printed). I'd also un-slant the age labels.

***

I'd like to discuss some subtler improvements.

A primary challenge is dealing with the different definitions of age groups across the two datasets. While the side-by-side column charts prompt readers to go left-right, right-left in comparing death rates, it's not easy to identify which column to compare to which. This is not fixable in the datasets because the organizations that compile them define their own age groups.

Also, I prefer to superimpose the death rates on the same chart, using something like a dot plot rather than a column chart. This makes the comparison even easier.

Here is a revised visualization:

Redo_businessinsider_covid19fatalitybyage
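As a rough sketch of this kind of superimposed dot plot - with illustrative placeholder rates, not the actual data - one way to handle the mismatched age groups is to place each group at its midpoint on a shared age axis:

```python
import matplotlib.pyplot as plt

# Illustrative placeholder rates only -- not the actual flu or COVID-19 data.
# Each source defines its own age groups; plotting every group at its midpoint
# on a common age axis lets the two series share one chart.
covid_mid = [5, 15, 25, 35, 45, 55, 65, 75, 85]            # midpoints of 0-9, ..., 80+
covid_rate = [0.1, 0.1, 0.2, 0.2, 0.4, 1.0, 3.5, 8.0, 15.0]
flu_mid = [2, 11, 33, 57, 75]                              # midpoints of 0-4, 5-17, 18-49, 50-64, 65+
flu_rate = [0.01, 0.01, 0.02, 0.06, 0.8]

fig, ax = plt.subplots()
ax.scatter(covid_mid, covid_rate, label="COVID-19")
ax.scatter(flu_mid, flu_rate, label="Seasonal flu")
ax.set_xlabel("Age (midpoint of each source's age group)")
ax.set_ylabel("Death rate (%)")
ax.legend()
plt.show()
```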

The contents of this chart raise several challenges to public health officials. Clearly, hospital resources should be preferentially offered to older patients. But young people could be spreading the virus among the community.

Caution is advised as the data for COVID-19 suffer from many types of inaccuracies, as outlined here.


All these charts lament the high prices charged by U.S. hospitals

Nyt_medicalprocedureprices

A former student asked me about this chart from the New York Times that highlights much higher prices of hospital procedures in the U.S. relative to a comparison group of seven countries.

The dot plot is clearly thought through. It is not a default chart that pops out of software.

Based on its design, we surmise that the designer has the following intentions:

  1. The names of the medical procedures are printed to be read, thus the long text is placed horizontally.

  2. The actual price is not as important as the relative price, expressed as an index with the U.S. price at 100%. These reference values are printed in glaring red, unignorable.

  3. Notwithstanding the above point, the actual prices are not discarded; being of secondary importance, they are provided as a supplement to the row labels. Getting to the actual prices in the comparison countries requires further effort, and a calculator.

  4. The primary comparison is between the U.S. and the rest of the world (or the group of seven countries included). It is less important to distinguish specific countries in the comparison group, and thus the non-U.S. dots are given pastels that take some effort to differentiate.

  5. Probably due to reader feedback, the font size is subject to a minimum so that some labels are split into two lines to prevent the text from dominating the plotting region.

***

In the Trifecta Checkup view of the world, there is no single best design. The best design depends on the intended message and what’s in the available data.

To illustrate this, I will present a few variants of the above design, and discuss how these alternative designs reflect the designer's intentions.

Note that in all my charts, I expressed the relative price in terms of discounts, which is the mirror image of premiums. Instead of saying Country A's price is 80% of the U.S. price, I prefer to say Country A's price is a 20% saving (or discount) off the U.S. price.
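To make the mirror image concrete, here is a tiny numerical illustration with hypothetical index values (not figures from the chart); it also previews why comparing two non-U.S. countries requires a ratio rather than a difference:

```python
# Hypothetical price indices, expressed relative to the U.S. price (U.S. = 1.00).
relative_price = {"US": 1.00, "Country A": 0.80, "Country B": 0.50}

# Mirror image: a price at 80% of the U.S. level is a 20% saving (discount).
discount = {k: round(1 - v, 2) for k, v in relative_price.items()}
print(discount)   # {'US': 0.0, 'Country A': 0.2, 'Country B': 0.5}

# Comparing Country A with Country B requires a ratio of their prices,
# not a difference of their discounts: here A's prices are 60% above B's.
print(relative_price["Country A"] / relative_price["Country B"])   # 1.6
```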

First up is the following chart that emphasizes countries instead of hospital procedures:

Redo_medicalprice_hor_dot

This chart encourages readers to draw conclusions such as "Hospital prices are 60-80 percent cheaper in Holland relative to the U.S." But it is more taxing to compare the cost of a specific procedure across countries.

The indexing strategy already creates a barrier to understanding the relative costs of a specific procedure. For example, the value for angioplasty in Australia is about 55% and in Switzerland, about 75%. The difference 75%-55% is meaningless because both numbers are relative savings from the U.S. baseline. Comparing Australia and Switzerland requires a ratio (0.75/0.55 = 1.36): Australia's prices are 36% above Swiss prices, or alternatively, Swiss prices are a 26% discount off Australia's prices.

The following design takes it even further, excluding details of individual procedures:

Redo_medicalprice_hor_bar

For some readers, less is more. It’s even easier to get a rough estimate of how much cheaper prices are in the comparison countries; now, except for two “outliers”, the chart does not display individual values.

The widths of these bars reveal that in some countries, the amount of savings depends on the specific procedures.

The bar design releases the designer from a horizontal orientation. The country labels are shorter and can be placed at the bottom in a vertical design:

Redo_medicalprice_vert_bar

It's not that one design is obviously superior to the others. Each version does some things better. A good designer recognizes the strengths and weaknesses of each design, and selects one to fulfil his/her intentions.

 

P.S. [1/3/20] Corrected a computation, explained in Ken's comment.


Revisiting global car sales

We looked at the following chart in the previous post. The data concern the growth rates of car sales in different regions of the world over time.

Cnbc zh global car sales

Here is a different visualization of the same data.

Redo_cnbc_globalcarsales

Well, it's not quite the same data. I divided the global average growth rate by four to yield an approximation of the true global average. (The reason for this is explained in the other day's post.)

The chart emphasizes how each region was helping or hurting the global growth. It also features the trend in growth within each region.

 


How to read this cost-benefit chart, and why it is so confusing

Long-time reader Antonio R. found today's chart hard to follow, and he isn't alone. It took the two of us multiple emails and some Web searching before we thought we "got it".

Ar_submit_Fig-3-2-The-policy-cost-curve-525

 

Antonio first encountered the chart in a book review (link) of Hal Harvey et al.'s Designing Climate Solutions. It addresses the general topic of costs and benefits of various programs to abate CO2 emissions. The reviewer praised the "wealth of graphics [in the book] which present complex information in visually effective formats." He presented the above chart as evidence, and described its function as:

policy-makers can focus on the areas which make the most difference in emissions, while also being mindful of the cost issues that can be so important in getting political buy-in.

(This description is much more informative than the original chart title, which states "The policy cost curve shows the cost-effectiveness and emission reduction potential of different policies.")

Spend a little time with the chart now before you read the discussion below.

Warning: this is a long read but well worth it.

 

***

 

If your experience is anything like ours, scraps of information flew at you from different parts of the chart, and you had a hard time piecing together a story.

What are the reasons why this data graphic is so confusing?

Everyone recognizes that this is a column chart. For a column chart, we interpret the heights of the columns so we look first at the vertical axis. The axis title informs us that the height represents "cost effectiveness" measured in dollars per million metric tons of CO2. In a cost-benefit sense, that appears to mean the cost to society of obtaining the benefit of reducing CO2 by a given amount.

That's how far I went before hitting the first roadblock.

For environmental policies, opponents frequently object to the high price of implementation. For example, we can't have higher fuel efficiency in cars because it would raise the price of gasoline too much. Asking about cost-effectiveness makes sense: a cost-benefit trade-off analysis encapsulates the something-for-something principle. What doesn't follow is that the vertical scale sinks far into the negative. The chart depicts the majority of the emissions abatement programs as having negative cost effectiveness.

What does it mean to be negatively cost-effective? Does it mean society saves money (makes a profit) while also reducing CO2 emissions? Wouldn't those policies - more than half of the programs shown - be slam dunks? Who can object to programs that improve the environment at no cost?

I tabled that thought, and proceeded to the horizontal axis.

I noticed that this isn't a standard column chart, in which the width of the columns is fixed and uneventful. Here, the widths of the columns are varying.

***

In the meantime, my eyes are distracted by the constellation of text labels. At least half of the viewing area of this column chart is occupied by text. These labels tell me that each column represents a program to reduce CO2 emissions.

The dominance of text labels is a feature of this design. For a conventional column chart, the labels are situated below each column. Since the width does not usually carry any data, we tend to keep the columns narrow - Tufte, ever the minimalist, has even advocated reducing columns to vertical lines. That leaves insufficient room for long labels. Have you noticed that government programs have long titles? It's tough to capture even the outline of a program with fewer than three big words, e.g. "Renewable Portfolio Standard" (what?).

The design solution here is to let the column labels run horizontally. So the graphical element for each program is a vertical column coupled with a horizontal label that invades the territories of the next few programs. Like this:

Redo_fueleconomystandardscars

The horror of this design constraint is fully realized in the following chart, a similar design produced for the state of Oregon (lifted from the Plan Washington webpage listed as a resource below):

Figure 2 oregon greenhouse

In a re-design, horizontal labeling should be a priority.

 

***

Realizing that I've been distracted by the text labels, back to the horizontal axis I went.

This is where I encountered the next roadblock.

The axis title says "Average Annual Emissions Abatement" measured in millions metric tons. The unit matches the second part of the vertical scale, which is comforting. But how does one reconcile the widths of columns with a continuous scale? I was expecting each program to have a projected annual abatement benefit, and those would fall as dots on a line, like this:

Redo_abatement_benefit_dotplot

Instead, we have line segments sitting on a line, like this:

Redo_abatement_benefit_bars_end2end_annuallabel

Think of these bars as the bottom edges of the columns. These line segments can be better compared to each other if structured as a bar chart:

Redo_abatement_benefit_bars

Instead, the design arranges these lines end-to-end.

To unravel this mystery, we go back to the objective of the chart, as announced by the book reviewer. Here it is again:

policy-makers can focus on the areas which make the most difference in emissions, while also being mindful of the cost issues that can be so important in getting political buy-in.

The primary goal of the chart is a decision-making tool for policy-makers who are evaluating programs. Each program has a cost and also a benefit. The cost is shown on the vertical axis and the benefit is shown on the horizontal. The decision-maker will select some subset of these programs based on the cost-benefit analysis. That subset of programs will have a projected total expected benefit (CO2 abatement) and a projected total cost.

By stacking the line segments end to end on top of the horizontal axis, the chart designer elevates the task of computing the total benefits of a subset of programs, relative to the task of learning the benefits of any individual program. Thus, the horizontal axis is better labeled "Cumulative annual emissions abatement".

 

Look at that axis again. Imagine you are required to learn the specific benefit of the program titled "Fuel Economy Standards: Cars & SUVs".

Redo_abatement_benefit_bars_end2end_cumlabel

This is impossible to do without pulling out a ruler and a calculator. What the axis labels do tell us is that if all the programs to the left of Fuel Economy Standards: Cars & SUVs were adopted, the cumulative benefits would be 285 million metric tons of CO2 per year. And if Fuel Economy Standards: Cars & SUVs were also implemented, the cumulative benefits would rise to 375 million metric tons.

***

At long last, we have arrived at a reasonable interpretation of the cost-benefit chart.

Policy-makers are considering throwing their support behind specific programs aimed at abating CO2 emissions. Different organizations have come up with different ways to achieve this goal. This goal may even have specific benchmarks; the government may have committed to an international agreement, for example, to reduce emissions by some set amount by 2030. Each candidate abatement program is evaluated on both cost and benefit dimensions. Benefit is given by the amount of CO2 abated. Cost is measured as a "marginal cost," the amount of dollars required to achieve each million metric ton of abatement.

This "marginal abatement cost curve" aids the decision-making. It lines up the programs from the most cost-effective to the least cost-effective. The decision-maker is presumed to prefer a more cost-effective program to a less cost-effective one. The chart answers the following question: for any given subset of programs (so long as we select them contiguously from the left), what is the cumulative amount of CO2 abated?

***

There are still more limitations of the chart design.

  • We can't directly read off the cumulative cost of the selected subset of programs because the vertical axis is not cumulative. The cumulative cost turns out to be the total area of all the columns that correspond to the selected programs. (Area is height x width, which is cost per unit of benefit multiplied by benefit, which leaves us with the cost.) Unfortunately, it takes rulers and calculators to compute this total area. A short sketch after this list illustrates the computation.

  • We have presumed that policy-makers will make the Go-No-go decision based on cost effectiveness alone. This point of view has already been contradicted. Remember the mystery around negatively cost-effective programs - their existence shows that some programs are stalled even when they reduce emissions in addition to making money!

  • Since many, if not most, programs have negative cost-effectiveness (by the way they measured it), I'd flip the metric over and call it profitability (or return on investment). Doing so removes another barrier to our understanding. With the current cost-effectiveness metric, policy-makers are selecting the "negative" programs before the "positive" programs. It makes more sense to select the "positive" programs before the "negative" ones!
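Here is the promised sketch of the two computations for a hypothetical set of programs (the names, costs and abatement figures are made up, not taken from the chart): the cumulative benefit that the horizontal axis lets you read off, and the cumulative cost that requires computing the column areas.

```python
# Hypothetical programs on a marginal abatement cost curve (made-up values).
# "cost" is the cost-effectiveness (column height; negative = net savings);
# "abatement" is annual abatement in million metric tons of CO2 (column width).
programs = [
    {"name": "Program A", "cost": -40, "abatement": 100},
    {"name": "Program B", "cost": -10, "abatement": 150},
    {"name": "Program C", "cost": 15, "abatement": 80},
    {"name": "Program D", "cost": 60, "abatement": 45},
]

# Line up programs from most to least cost-effective, as the chart does.
programs.sort(key=lambda p: p["cost"])

# Select a contiguous subset from the left, say the first three programs.
selected = programs[:3]

# Cumulative benefit: directly readable off the horizontal axis.
total_abatement = sum(p["abatement"] for p in selected)   # 330 million metric tons

# Cumulative cost: the total area of the selected columns (height x width),
# which the chart does NOT let you read off without a ruler and a calculator.
total_cost = sum(p["cost"] * p["abatement"] for p in selected)   # -4300 (net savings)

print(total_abatement, total_cost)
```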

***

In a Trifecta Checkup (guide), I rate this chart Type V. The chart has a great purpose, and the design reveals a keen sense of the decision-making process. It's not a data dump for sure. In addition, an impressive amount of data gathering and analysis - and synthesis - went into preparing the two data series required to construct the chart. (Sure, for something so subjective and speculative, the analysis methodology will inevitably be challenged by wonks.) Those two data series are reasonable measures for the stated purpose of the chart.

The chart form, though, has various shortcomings, as shown here.  

***

In our email exchange, Antonio and I found the Plan Washington website useful. This is where we learned that this chart is called the marginal abatement cost curve.

Also, the consulting firm McKinsey is responsible for popularizing this chart form. They have published this long report that explains even more of the analysis behind constructing this chart, for those who want further details.


Pulling the multi-national story out, step by step

Reader Aleksander B. found this Economist chart difficult to understand.

Redo_multinat_1

Given the chart title, the reader is looking for a story about multinationals producing lower return on equity than local firms. The first item displayed indicates that multinationals out-performed local firms in the technology sector.

The pie charts on the right column provide additional information about the share of each sector by the type of firms. Is there a correlation between the share of multinationals, and their performance differential relative to local firms?

***

We can clean up the presentation. The first changes include using dots in place of pipes, removing the vertical gridlines, and pushing the zero line to the background:

Redo_multinat_2

The horizontal gridlines attached to the zero line can also be removed:

Redo_multinat_3

Now, we re-order the rows. Start with the aggregate "All sectors". Then, order sectors from the largest under-performance by multinationals to the smallest.

Redo_multinat_4
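A minimal sketch of this re-ordering step, assuming the data sit in a table with one row per sector (hypothetical sector names and values):

```python
import pandas as pd

# Hypothetical data: return-on-equity differential of multinationals vs local
# firms, in percentage points (negative = multinationals under-perform).
df = pd.DataFrame({
    "sector": ["All sectors", "Technology", "Energy", "Consumer", "Industrials"],
    "differential": [0.0, 2.0, -5.0, -3.0, -1.0],
})

# Keep the aggregate row on top, then sort the remaining sectors from the
# largest under-performance by multinationals to the smallest.
aggregate = df[df["sector"] == "All sectors"]
sectors = df[df["sector"] != "All sectors"].sort_values("differential")
ordered = pd.concat([aggregate, sectors], ignore_index=True)
print(ordered)
```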

The pie charts focus only on the share of multinationals. Taking away the remainders speeds up our perception:

Redo_multinat_5

Help the reader understand the data by dividing the sectors into groups, organized by the performance differential:

Redo_multinat_6

For what it's worth, re-sort the sectors from largest to smallest share of multinationals:

Redo_multinat_7

Having created groups of sectors by share of multinationals, I simplify further by showing the average pie chart within each group:

Redo_multinat_8

***

To recap all the edits, here is an animated gif: (if it doesn't play automatically, click on it)

Redo_junkcharts_econmultinat

***

Judging from the last graphic, I am not sure there is much correlation between share of multinationals and the performance differentials. It's interesting that in aggregate, local firms and multinationals performed the same. The average hides the variability by sector: in some sectors, local firms out-performed multinationals, as the original chart title asserted.


This Wimbledon beauty will be ageless

Ft_wimbledonage


This Financial Times chart paints the picture of an emerging trend in Wimbledon men’s tennis: the average age of players has been rising, and it hit 30 years old for the first time in 2019.

The chart works brilliantly. Let's look at the design decisions that contributed to its success.

The chart contains a good amount of data and the presentation is carefully layered, with the layers nicely tied to some visual cues.

Readers are drawn immediately to the average line, which conveys the key statistical finding. The blue dot  reinforces the key message, aided by the dotted line drawn at 30 years old. The single data label that shows a number also highlights the message.

Next, readers may notice the large font that is applied to selected players. This device draws attention to the human stories behind the dry data. Knowledgeable fans may recall fondly when Borg, Becker and Chang burst onto the scene as teenagers.

 

Then, readers may pick up on the ticker-tape data that display the spread of ages of Wimbledon players in any given year. There is some shading involved, not clearly explained, but we surmise that it illustrates the range of ages of most of the contestants. In a sense, the range of probable ages and the average age tell the same story. The current trend of rising ages began around 2005.

 

Finally, a key data processing decision is disclosed in the chart header and sub-header. The chart only plots the players who reached the fourth round (the round of 16). Like most decisions involved in data analysis, this choice has both desirable and undesirable effects. I like it because it thins out the data; otherwise, the chart would have appeared much more cluttered.

The removal of players eliminated in the early rounds limits the conclusion that one can draw from the chart. We are tempted to generalize the finding, saying that the average men’s player has increased in age – that was what I said in the first paragraph. Thinking about that for a second, I am not so sure the general statement is valid.

The overall field might have gone younger or not grown older, even as the older players assert their presence in the tournament. (This article provides side evidence that the conjecture might be true: the author looked at the average age of players in the top 100 ATP ranking versus top 1000, and learned that the average age of the top 1000 has barely shifted while the top 100 players have definitely grown older.)

So kudos to these reporters for writing a careful headline that stays true to the analysis.

I also found this video at FT that discussed the chart.

***

This chart about Wimbledon players hits the Trifecta. It has an interesting – to some, surprising – message (Q). It demonstrates thoughtful processing and analysis of the data (D). And the visual design fits well with its intended message (V). (For a comprehensive guide to the Trifecta Checkup, see here.)