Hope and reality in one Georgia chart

Over the weekend, Georgia's State Health Department agitated a lot of people when it published the following chart:

Georgia_top5counties_covid19

(This might have appeared a week ago as the last date on the chart is May 9 and the title refers to "past 15 days".)

They could have avoided the embarrassment if they had read my article at DataJournalism.com (link). In that article, I lay out a set of "unspoken conventions," things that visual designers are, or should be, doing more or less in their sleep. Under the section titled "Order", I explain the following two "rules":

  • Place values in the natural order when it is available
  • Retain the same order across all plots in a panel of charts

In the chart above, the natural order for the horizontal (time) axis is time running left to right. The order chosen by the designer is roughly, but not precisely, decreasing height of the tallest column in each daily group. Many observers suggested that the columns were arranged to give the appearance of cases dropping over time.

Within each day, the counties are ordered in decreasing number of new cases. The title of the chart reads "number of cases over time," which sounds like cumulative cases, but it's not. The "lead" changed hands so many times over the 15 days, meaning the data sequence was extremely noisy, which would be unlikely for cumulative cases. There are thousands of cases in each of these counties by May. Switching the order of the columns within each daily group defeats the purpose of placing these groups side-by-side.

Responding to the bad press, the department changed the chart design for this week's version:

Georgia_top5counties_covid19_revised

This chart now conforms to the two unspoken rules described above. The time axis runs left to right, and within each group of columns, the order of the counties is maintained.

The chart is still very noisy, with no apparent message.

***

Next, I'd like to draw your attention to a Data issue. Notice that the 15-day window has shifted. This revised chart runs from May 2 to May 16, which is this past Saturday. The previous chart ran from Apr 26 to May 9. 

Here's the data for May 8 and 9 placed side by side.

Junkcharts_georgia_covid19_cases

There is a clear reporting lag for cases in the State of Georgia. This chart should always exclude the last few days. The case counts keep going up until they stabilize. The same mistake occurs in the revised chart - the last two days appear as if new cases have dwindled toward zero when, in fact, the drop reflects the lag in reporting.
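For those preparing such charts programmatically, the exclusion step is simple. Here is a minimal pandas sketch with made-up counts and an assumed three-day lag window; the real cutoff should be chosen by watching how long past days keep getting revised upward.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical daily new-case counts indexed by report date; the real
# Georgia numbers are not reproduced here.
cases = pd.Series(
    [110, 95, 120, 105, 98, 40, 12],
    index=pd.date_range("2020-05-10", periods=7),
)

# Drop the most recent days, whose counts are still immature because
# of the reporting lag. The three-day window is an assumption.
REPORTING_LAG_DAYS = 3
mature = cases.iloc[:-REPORTING_LAG_DAYS]

mature.plot(kind="bar")  # chart only the stabilized counts
plt.show()
```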

The disconnect between the Question being posed and the quality of the Data available dooms this visualization. It is not possible to provide a reliable assessment of the "past 15 days" when during perhaps half of that period, the cases are under-counted.

***

Nyt_tryingtobefashionable

This graphical distortion due to "immature" data has become very commonplace in Covid-19 graphics. It's similar to placing partial-year data next to full-year results, without calling out the partial data.

The following post from the ancient past (2005!) about a New York Times graphic shows that calling out this data problem does not actually solve it. It's a less-bad kind of thing.

The coronavirus data present more headaches for graphic designers than financial statistics do. Because of accounting regulations, we know that only the current quarter's data are immature. For Covid-19 reporting, the numbers are being adjusted for days and weeks.

Practically all immature counts are under-estimates. Over time, more cases are reported. Thus, any plots over time - if unadjusted - paint a misleading picture of declining counts. The effect of the reporting lag is predictable, having a larger impact as we run from left to right in time. Thus, even if the most recent data show a downward trend, it can eventually mean anything: down, flat or up. This is not random noise though - we know for certain of the downward bias; we just don't know the magnitude of the distortion for a while.
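A toy calculation makes the bias concrete. Assume the true counts are flat and only a fraction of each day's cases has been reported so far; the fractions below are made up for illustration.

```python
# Toy illustration of reporting lag: true daily counts are flat, but
# the reported fraction shrinks for the most recent days, so the
# plotted series shows a spurious "decline".
true_counts = [100] * 7
reported_fraction = [1.0, 1.0, 1.0, 0.95, 0.8, 0.5, 0.2]  # assumed
reported = [round(t * f) for t, f in zip(true_counts, reported_fraction)]
print(reported)  # [100, 100, 100, 95, 80, 50, 20] - a fake decline
```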

Another issue that concerns coronavirus reporting but not financial reporting is inconsistent standards across counties. Within a business, if one were to break out statistics by county, the analysts would naturally apply the same counting rules. For Covid-19 data, each county follows its own set of rules, not just how to count things but also how to conduct testing, and so on.

Finally, with the politics of re-opening, I find it hard to trust the data. Reported cases are human-driven data - by changing the number of tests, by testing different mixes of people, by delaying reporting, by timing the revision of older data, by explicit manipulation, ..., the numbers can be tortured into any shape. That's why it is extremely important that the bean-counters are civil servants, and that politicians are kept away. In the current political environment, that separation between politics and statistics has been breached.

***

Why do we have low-quality data? Human decisions, frequently political decisions, adulterate the data. Epidemiologists are then forced to use the bad data, because that's what they have. Bad data lead to bad predictions and bad decisions, or if the scientists account for the low quality, predictions with high levels of uncertainty. Then, the politicians complain that predictions are wrong, or too wide-ranging to be useful. If they really cared about those predictions, they could start by being more transparent about reporting and more proactive at discovering and removing bad accounting practices. The fact that they aren't focused on improving the data gives the game away. Here's a recent post on the politics of data.

 


Twitter people UpSet with that Covid symptoms diagram

Been busy with an exciting project, which I might talk about one day. But I promised some people I'll follow up on Covid symptoms data visualization, so here it is.

After I posted about the Venn diagram used to depict self-reported Covid-19 symptoms by users of the Covid Symptom Tracker app (reported by Nature), Xan and a few others alerted me to Twitter discussion about alternative visualizations that people have made after they suffered the indignity of trying to parse the Venn diagram.

To avoid triggering post-trauma, for those who want to view the Venn diagram, please click here.

[In the Twitter links below, you almost always have to scroll one message down - saving tweets, linking to tweets, etc. are all stuff I haven't fully figured out.]

Start with the Questions

Xan’s final comment is especially appropriate: "There's an over-riding Type-Q issue: count charts answer the wrong question".

As dataviz designers, we frequently get locked into the mindset of “what is the best way to present this dataset?” This line of thinking leads to overloaded graphics that attempt to answer every possible question that may arise from the data in one panoptic chart, akin to juggling 10 balls at once.

For complex datasets, it is often helpful to narrow down the list of questions, and provide a series of charts, each addressing one or two questions. I’ll come back to this point. I want to first show some of the nicer visuals that others have produced, which brings out the structure and complexity of this dataset.

 

The UpSet chart

The primary contender is the "UpSet" chart form, as best exemplified by Bart's effort:

Upset_bartjutte

The centerpiece of this chart is the matrix of dots. The horizontal rows of dots represent the presence of specific symptoms such as cough and anosmia (loss of smell and taste). The vertical columns are intuitive, once you get it. They represent combinations of symptoms, and the fill/no-fill of the dots indicates which symptoms are being combined. For example, the first column counts people reporting fatigue plus anosmia (but nothing else).

The UpSet chart clearly communicates the structure of the data. In many survey questions (including this one conducted by the Symptom Tracker app), respondents are allowed to check/tick more than one answer choice. This creates a situation where the number of answers (here, symptoms) per respondent can range from zero up to the total number of answer choices.

So far, we have built a structure, like drawing country outlines on a map. There is no data yet. The data are primarily found in the sidebar histograms (column/bar charts). Reading horizontally to the right side, one learns that the most frequently reported symptom was fatigue, covering 88 percent of the users.* Reading vertically, one learns that the top combination of symptoms was fatigue plus anosmia, covering 16 percent of users.
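To see this data structure concretely, here is a minimal pandas sketch with made-up multi-select responses (not the actual app data); the two sidebar summaries of the UpSet chart fall out of two short computations.

```python
import pandas as pd

# Made-up responses: one row per respondent, one column per symptom
# (True = reported). Hypothetical, for illustration only.
df = pd.DataFrame(
    {"fatigue": [1, 1, 1, 0], "anosmia": [1, 1, 0, 1], "cough": [0, 1, 1, 0]}
).astype(bool)

# Right-side bars: marginal share of each symptom. These overlap,
# so the percentages can sum to well over 100%.
print(df.mean())

# Top histogram: share of each exact combination of symptoms. The
# combinations are disjoint, so these shares sum to 100%.
combos = df.apply(lambda row: tuple(df.columns[row.values]), axis=1)
print(combos.value_counts(normalize=True))
```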

***

Now come the divisive acts.

Act 1: Bart orders the columns in a particular way that meets his subjective view of how he wants readers to see the data. The columns are sorted from the most frequent combinations to the least. The histogram has a "long tail", with most of the combinations receiving a small proportion of the total. The top five combinations are where the bulk of the data is – I'd have liked to see all five columns labeled, without decimal places.

This is a choice on the part of the designer. Nils, for example, made two versions of his UpSet charts. The second version arranges the combinations from singles to quintuples.

Nils Gehlenborg_upsetplot_sortedbynumberofsymptoms

 

Digression: The Visual in Data Visualization

The two renderings of "UpSet" charts, by Nils and Bart, are a perfect illustration of the Trifecta Checkup framework. Each corner of the Trifecta is an independent dimension, and yet all three must sync. With the same data and the same question types, what differentiates the two versions is the visual design.

See how many differences you can find, and make your own design choices!

 

I place the digression here because Act 1 above has to do with the Q corner, and both visual designs can accommodate the sorting decisions. But Act 2 below pertains to the V corner.

Act 2: Bart applies a blue gradient to the matrix of dots that reinforces his subjective view about identifying frequent combinations of symptoms. Nils, by contrast, uses the matrix to show present/absent only.

I’m not sure about Act 2. I think the addition of the color gradient overloads the matrix in the chart. It has the nice effect of focusing the reader’s attention on the top 5 combinations, but it also requires the reader to have understood the meaning of the columns first. Perhaps applying the gradient to the histogram up top, rather than the dots in the matrix, could achieve the same goal with less confusion.

 

Getting Obtuse

Some readers (e.g. Robin) expressed confusion.

Robin is alleging something the chart doesn’t do. He pointed out (correctly) that while 16 percent experienced fatigue and anosmia only (without other symptoms), more than 50 percent reported fatigue and anosmia, plus other symptoms. That nugget of information is deeply buried inside Bart’s chart – it’s the sum over all columns for which the first two dots are filled in. For example, the second column represents fatigue+anosmia+cough. So Robin wants those columns aggregated.
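Here is a minimal sketch of the two computations, using the same kind of made-up data as before; the pair of symptoms is hard-coded for clarity.

```python
import pandas as pd

# Hypothetical multi-select data (True = symptom reported).
df = pd.DataFrame(
    {"fatigue": [1, 1, 1, 0], "anosmia": [1, 1, 0, 1], "cough": [0, 1, 1, 0]}
).astype(bool)

# Bart's encoding: fatigue AND anosmia and nothing else (one column).
others = df.drop(columns=["fatigue", "anosmia"])
exact_pair = (df["fatigue"] & df["anosmia"] & ~others.any(axis=1)).mean()

# Robin's reading: fatigue AND anosmia, other symptoms allowed. This
# is the sum over all columns whose first two dots are filled in.
co_occur = (df["fatigue"] & df["anosmia"]).mean()

print(exact_pair, co_occur)  # co_occur is always >= exact_pair
```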

Robin’s critique arises from the Q(uestion) corner. If the designer wants to highlight specific combinations that occur most frequently in the data, then Bart’s encoding makes perfect sense. On the other hand, if the purpose is to highlight pairs of symptoms that occur most frequently together (disregarding symptoms outside each pair), then the data must be further aggregated. The switch in the Question requires more Data manipulation, which then affects the Visualization. That's the essence of the Trifecta Checkup framework.

Rest assured, the version that addresses Robin’s point will not give an easy answer to Bart’s question. In fact, Xan whipped up a bar chart in response:

Xan_symptomscombo_barchart

This is actually hard to comprehend because Robin’s question is itself hard to state. The first bar shows 87 percent of users reported fatigue as a symptom, the same number that appeared on the right side of Bart’s version. Then, the darkened section of the bar indicates the proportion of users who reported only fatigue and nothing else, which appears to be about 10 percent. So 1 out of 9 reported just fatigue while 8 out of 9 who reported fatigue also experienced other symptoms.

 

Xan’s bar chart can be flipped 90 degrees to replace Bart’s histogram on top of the matrix. But you see, we end up with the same problem I mentioned up top. By jamming more insights from more questions onto the same chart, we risk dropping the other balls that were already in the air.

So, my advice is always to first winnow down the list of questions you want to address. And don’t be afraid of making a series of charts instead of one panoptic chart.

***

Act 3: Bart decides to leave out labels for the columns.

This is a curious choice given the key storyline we’ve been working with so far (the Top 5 combinations of symptoms). But notice how annoying this problem is. Combinations require long text, which must be written vertically or slanted in this design. Transposing the chart could help, but not by much. It’s just a limitation of this chart form. For me, reading the filled dots underneath the columns as column labels isn’t a show-stopper.

 

Histograms vs Bar Charts

It’s worth pointing out that the sidebar “histograms” are not both histograms. I tend to think of histograms as a specific type of bar (column) chart, in which the sum of the bars (columns) can be interpreted as a whole. So all histograms are bar charts but only some bar charts are histograms.

The column chart up top is a histogram. The combinations of symptoms are disjoint, and the total of the combinations should be the total number of respondents. The bar chart on the right side, however, is not a histogram. Each percentage is a proportion of the whole, and adding those percentages yields a total way above 100%.

I like the annotations on Bart’s chart a lot. They are succinct, and they give just the right information to explain how to read the chart.

 

Limitations

I already mentioned the vertical labeling issue for UpSet charts. Here are two other considerations for you.

The majority of the plotting area is dedicated to the matrix of dots. The matrix contains merely labels for the data; the dots are like country boundaries on a map. While the matrix lays out the structure of the data very clearly, the designer should ask whether it is essential for the readers to see the entire landscape.

In real-world data, the “long tail” phenomenon we saw earlier is very common. With six featured symptoms, there are 2^6 = 64 possible combinations of symptoms (minus 1 if they filtered out those not reporting symptoms*), almost all of which will be empty. Should the low-frequency columns be removed? This is not as controversial as you might think, because implicitly both Bart and Nils already dropped all empty combinations!
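A quick enumeration shows the scale of the problem; the last three symptom names below are stand-ins, since I'm not reproducing the study's full list.

```python
from itertools import combinations

# Three symptoms named in this post plus three hypothetical stand-ins.
symptoms = ["fatigue", "anosmia", "cough", "symptom4", "symptom5", "symptom6"]

# All non-empty subsets: 2**6 - 1 = 63 candidate columns.
subsets = [c for r in range(1, len(symptoms) + 1)
           for c in combinations(symptoms, r)]
print(len(subsets))  # 63

# In practice, keep only the subsets actually observed in the data,
# e.g. with counts being a dict {subset: frequency}:
# observed = [s for s in subsets if counts.get(s, 0) > 0]
```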

 

Data and Code

Kieran Healy left a comment on the last post, and you can find both the data (thank you!) and some R code for UpSet charts at his blog.

Also, Nils has a Shiny app on Github.

 

(*) One must be very careful about what “users” are being represented. They form a tiny subset of users of the Symptom Tracker app, just those who have previously taken a diagnostic test and have self-reported at least one symptom. I have separately commented on the analyses of this dataset by the team behind the app. The first post discusses their analytical methods, the second post examines how they pre-processed the data, and a future post will describe the data collection practices. For the purpose of this blog post, I’ll ignore any data issues.

(#) Bart’s chart is conceptual because some of the columns of dots are repeated, and there is one column without fills, which should have been removed by a pre-processing step applied by the research team.


Make your color legend better with one simple rule

The pie chart about COVID-19 worries illustrates why we should follow a basic rule of constructing color legends: order the categories in the way you expect readers to encounter them.

Here is the chart that I discussed the other day, with the data removed since they are not of concern in this post. (link)

Junkcharts_abccovidbiggestworries_sufficiency

First look at the pie chart. Like me, you probably looked at the orange or the yellow slice first, then moved clockwise around the pie.

Notice that the legend leads with the red square ("Getting It"), which is likely the last item you'll see on the chart.

This is the same chart with the legend re-ordered:

Redo_junkcharts_abcbiggestcovidworries_legend

***

Simple charts can be made better if we follow basic rules of construction. With frequent use, these rules become second nature, applied silently. I cover rules for legends, as well as many other rules, in this Long Read article titled "The Unspoken Conventions of Data Visualization" (link).


Pie chart conventions

I came across this pie chart from a presentation at an industry meeting some weeks ago:

Mediaconversations_orig

This example breaks a number of the unspoken conventions on making pie charts and so it is harder to read than usual.

Notice that the biggest slice starts around 8 o'clock, and the slices are ordered alphabetically by the label, rather than numerically by size of the slice.

The following is the same chart, ordered in a more conventional way. The largest slice is placed along the top vertical, and the other slices are arranged clockwise from larger to smaller.

Redo_junkcharts_mediaconversations1

This version is easier to read because the reader does not need to think about the order of the slices. The expectation of decreasing size is met.
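For those who build charts in code, this ordering convention takes only a couple of lines. Here is a minimal matplotlib sketch with made-up slice values; the startangle and counterclock settings are the key.

```python
import matplotlib.pyplot as plt

# Hypothetical slice values; not the presentation's actual numbers.
labels = ["Slice A", "Slice B", "Slice C", "Slice D"]
values = [45, 30, 15, 10]

# Sort slices from largest to smallest, start the first slice at the
# top vertical, and run clockwise. Matplotlib draws counter-clockwise
# from 3 o'clock by default, so override both settings.
pairs = sorted(zip(values, labels), reverse=True)
plt.pie([v for v, _ in pairs],
        labels=[l for _, l in pairs],
        startangle=90, counterclock=False)
plt.show()
```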

The pie chart above, though, breaks another convention. The colors on this chart signify nothing! The general rule is that color differences should encode data differences. Here, the colors should go from deepest to lightest. (One can even argue that the different tinges are redundant.)

Redo_junkcharts_mediaconversations2

You see how this version is even better. In the previous version, the colors are distracting. You're wondering what they mean, and then you realize they signify nothing.

***

As designers of graphics, we follow a bunch of conventions silently. When a design deviates from them, it's harder to understand.

Recently, I wrote a long article for DataJournalism.com, setting out many of these unspoken conventions. Read it here.

 


Gazing at petals

Reader Murphy pointed me to the following infographic developed by Altmetric to explain their analytics of citations of journal papers. These metrics are "alternative" in that they arise from non-academic media sources, such as news outlets, blogs, Twitter, and Reddit.

The key graphic is the petal diagram with a number in the middle.

Altmetric_tetanus

I have a hard time thinking of this object as “data visualization”. Data visualization should visualize the data. Here, the connection between the data and the visual design is tenuous.

There are eight petals arranged around the circle. The legend below the diagram maps the color of each petal to a source of data. Red, for example, represents mentions in news outlets, and green represents mentions in videos.

Each petal is the same size, even though the counts given below differ. So, the petals are like a duplicative legend.

The order of the colors around the circle does not align with their order in the table below, for a mysterious reason.

Then comes another puzzle. The bluish-gray petal appears three times in the diagram. This color is mapped to tweets. Does the number of petals represent the much higher counts of tweets compared to other mentions?

To confirm, I pulled up the graphic for a different paper.

Altmetric_worldwidedeclineofentomofauna

Here, each petal has a different color. Eight petals, eight colors. The count of tweets is still much larger than the frequencies of the other sources. So, the rule of construction appears to be: one petal for each relevant data source, and if the total number of data sources falls below eight, let Twitter claim all the unclaimed petals.

A third sample paper confirms this rule:

Altmetric_dnananodevices

None of the places we were hoping to find data – size of petals, color of petals, number of petals – actually contain any data. Anything the reader wants to learn must be read directly off the text. The “score” that reflects the aggregate “importance” of the corresponding paper is found at the center of the circle. The legend provides the raw data.

***

Some years ago, one of my NYU students worked on a project relating to paper citations. He eventually presented the work at a conference. I featured it previously.

Michaelbales_citationimpact

Notice how the visual design provides context for interpretation – by placing each paper/researcher among its peers, and by using a relative scale (percentiles).

***

I’m ignoring the D corner of the Trifecta Checkup in this post. For any visualization to be meaningful, the data must be meaningful. The type of counting used by Altmetric treats every tweet, every mention, etc. as a tally, making everything worth the same. A mention on CNN counts as much as a mention by a pseudonymous redditor. A pan is the same as a rave. Let’s not forget the fake data menace (link), which affects all performance metrics.


Bubble charts, ratios and proportionality

A recent article in the Wall Street Journal about a challenger to the dominant weedkiller, Roundup, contains a nice selection of graphics. (Dicamba is the up-and-comer.)

Wsj_roundup_img1


The change in usage of three brands of weedkillers is rendered as a small-multiples of choropleth maps. This graphic displays geographical and time changes simultaneously.

The staircase chart shows weeds have become resistant to Roundup over time. This is considered a weakness in the Roundup business.

***

In this post, my focus is on the chart at the bottom, which shows complaints about Dicamba by state in 2019. This is a bubble chart, with the bubbles sorted along the horizontal axis by the acreage of farmland by state.

Wsj_roundup_img2

Below left is a more standard version of such a chart, in which the bubbles are allowed to overlap. (I only included the bubbles that were labeled in the original chart).

Redo_roundupwsj0

The WSJ’s twist is to use the vertical spacing to avoid overlapping bubbles. The vertical axis serves a design prerogative and does not encode data.

I’m going to stick with the more traditional overlapping bubbles here – I’m getting to a different matter.

***

The question being addressed by this chart is: which states have the most serious Dicamba problem, as revealed by the frequency of complaints? The designer recognizes that the amount of farmland matters. One should expect the more acres, the more complaints.

Let's consider computing directly the number of complaints per million acres.
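The normalization is a one-liner. Here is a sketch with made-up figures, since I'm not reproducing the WSJ's underlying numbers.

```python
import pandas as pd

# Hypothetical figures for illustration only.
df = pd.DataFrame({
    "state": ["Illinois", "Arkansas", "Iowa"],
    "complaints": [700, 200, 150],
    "farm_acres_m": [27, 14, 30],  # millions of acres of farmland
})

# Normalizing by farmland puts states of different sizes on the same
# footing: complaints per million acres.
df["complaints_per_m_acres"] = df["complaints"] / df["farm_acres_m"]
print(df.sort_values("complaints_per_m_acres", ascending=False))
```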

The resulting chart (shown below right) – while retaining the design – gives a wholly different feeling. Arkansas now owns the largest bubble even though it has the least acreage among the included states. The huge Illinois bubble is still large but is no longer a loner.

Redo_dicambacomplaints1

Now return to the original design for a moment (the chart on the left). In theory, this should work in the following manner: if complaints grow purely as a function of acreage, then the bubbles should grow proportionally from left to right. The trouble is that proportional areas are not as easily detected as proportional lengths.

The pair of charts below depict made-up data in which all states have 30 complaints for each million acres of farmland. It’s not intuitive that the bubbles on the left chart are growing proportionally.

Redo_dicambacomplaints2

Now if you look at the right chart, which shows the relative metric of complaints per million acres, it’s impossible not to notice that all bubbles are the same size.


Taking small steps to bring out the message

Happy new year! Good luck and best wishes!

***

We'll start 2020 with something lighter. On a recent flight, I saw a chart in The Economist that shows the proportion of operating income derived from overseas markets by major grocery chains - the headline said that some of these chains are withdrawing from international markets.

Econ_internationalgroceries_sm

The designer used one color for each grocery chain, and two shades within each color. The legend describes the shades as "total" and "of which: overseas". As with all stacked bar charts, it's a bit confusing where to find the data. The "total" is actually the entire bar, not just the darker shaded part. The darker shaded part is better labeled "home market" as shown below:

Redo_econgroceriesintl_1

The designer's instinct to bring out the importance of international markets to each company's income is well placed. A second small edit helps: plot the international income amounts first, so they line up with the vertical zero axis. Like this:

Redo_econgroceriesintl_2

This is essentially the same chart. The order of international and home market is reversed. I also reversed the shading, so that the international share of income is displayed darker. This shading draws the readers' attention to the key message of the chart.
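For readers who want to reproduce this layout, here is a minimal matplotlib sketch with made-up income figures (not The Economist's data); the trick is to draw the overseas segment first and stack the home-market segment onto it.

```python
import matplotlib.pyplot as plt

# Hypothetical income figures ($bn), for illustration only.
chains = ["Carrefour", "Tesco", "Costco", "Walmart"]
overseas = [35, 20, 40, 120]
home = [40, 45, 110, 380]

fig, ax = plt.subplots()
# Plot the overseas segment first so it lines up with the zero axis,
# then stack the home-market segment to its right. The overseas share
# gets the darker shade to draw the reader's attention to it.
ax.barh(chains, overseas, color="#1f4e79", label="overseas")
ax.barh(chains, home, left=overseas, color="#9dc3e6", label="home market")
ax.legend()
plt.show()
```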

A stacked bar chart of the absolute dollar amounts is not ideal for showing proportions, because each bar is a different length. Sometimes, plotting relative values summing to 100% for each company may work better.

As it stands, the chart above calls attention to a different message: that Walmart dwarfs the other three global chains. Just the international income of Walmart is larger than the total income of Costco.

***

Please comment below or write me directly if you have ideas for this blog as we enter a new decade. What do you want to see more of? Less of?


How to read this cost-benefit chart, and why it is so confusing

Long-time reader Antonio R. found today's chart hard to follow, and he isn't alone. It took two of us multiple emails and some Web searching before we think we "got it".

Ar_submit_Fig-3-2-The-policy-cost-curve-525

 

Antonio first encountered the chart in a book review (link) of Hal Harvey et al., Designing Climate Solutions. It addresses the general topic of costs and benefits of various programs to abate CO2 emissions. The reviewer praised the "wealth of graphics [in the book] which present complex information in visually effective formats." He presented the above chart as evidence, and described its function as:

policy-makers can focus on the areas which make the most difference in emissions, while also being mindful of the cost issues that can be so important in getting political buy-in.

(This description is much more informative than the original chart title, which states "The policy cost curve shows the cost-effectiveness and emission reduction potential of different policies.")

Spend a little time with the chart now before you read the discussion below.

Warning: this is a long read but well worth it.

 

***

 

If your experience is anything like ours, scraps of information flew at you from different parts of the chart, and you had a hard time piecing together a story.

What are the reasons why this data graphic is so confusing?

Everyone recognizes that this is a column chart. For a column chart, we interpret the heights of the columns so we look first at the vertical axis. The axis title informs us that the height represents "cost effectiveness" measured in dollars per million metric tons of CO2. In a cost-benefit sense, that appears to mean the cost to society of obtaining the benefit of reducing CO2 by a given amount.

That's how far I went before hitting the first roadblock.

For environmental policies, opponents frequently object to the high price of implementation. For example, we can't have higher fuel efficiency in cars because it would raise the price of gasoline too much. Asking about cost-effectiveness makes sense: a cost-benefit trade-off analysis encapsulates the something-for-something principle. What doesn't follow is that the vertical scale sinks far into the negative. The chart depicts the majority of the emissions abatement programs as having negative cost effectiveness.

What does it mean to be negatively cost-effective? Does it mean society saves money (makes a profit) while also reducing CO2 emissions? Wouldn't those policies - more than half of the programs shown - be slam dunks? Who can object to programs that improve the environment at no cost?

I tabled that thought, and proceeded to the horizontal axis.

I noticed that this isn't a standard column chart, in which the width of the columns is fixed and uneventful. Here, the widths of the columns vary.

***

In the meantime, my eyes are distracted by the constellation of text labels. The viewing area of this column chart is occupied - at least 50% - by text. These labels tell me that each column represents a program to reduce CO2 emissions.

The dominance of text labels is a feature of this design. For a conventional column chart, the labels are situated below each column. Since the width does not usually carry any data, we tend to keep the columns narrow - Tufte, ever the minimalist, has even advocated reducing columns to vertical lines. That leaves insufficient room for long labels. Have you noticed that government programs carry long titles? It's tough to capture even the outline of a program with fewer than three big words, e.g. "Renewable Portfolio Standard" (what?).

The design solution here is to let the column labels run horizontally. So the graphical element for each program is a vertical column coupled with a horizontal label that invades the territories of the next few programs. Like this:

Redo_fueleconomystandardscars

The horror of this design constraint is fully realized in the following chart, a similar design produced for the state of Oregon (lifted from the Plan Washington webpage listed as a resource below):

Figure 2 oregon greenhouse

In a re-design, horizontal labeling should be a priority.

 

***

Realizing that I've been distracted by the text labels, back to the horizontal axis I went.

This is where I encountered the next roadblock.

The axis title says "Average Annual Emissions Abatement" measured in millions metric tons. The unit matches the second part of the vertical scale, which is comforting. But how does one reconcile the widths of columns with a continuous scale? I was expecting each program to have a projected annual abatement benefit, and those would fall as dots on a line, like this:

Redo_abatement_benefit_dotplot

Instead, we have line segments sitting on a line, like this:

Redo_abatement_benefit_bars_end2end_annuallabel

Think of these bars as the bottom edges of the columns. These line segments can be better compared to each other if structured as a bar chart:

Redo_abatement_benefit_bars

Instead, the design arranges these lines end-to-end.

To unravel this mystery, we go back to the objective of the chart, as announced by the book reviewer. Here it is again:

policy-makers can focus on the areas which make the most difference in emissions, while also being mindful of the cost issues that can be so important in getting political buy-in.

The primary goal of the chart is to serve as a decision-making tool for policy-makers who are evaluating programs. Each program has a cost and also a benefit. The cost is shown on the vertical axis and the benefit is shown on the horizontal. The decision-maker will select some subset of these programs based on the cost-benefit analysis. That subset of programs will have a projected total expected benefit (CO2 abatement) and a projected total cost.

By stacking the line segments end to end on top of the horizontal axis, the chart designer elevates the task of computing the total benefits of a subset of programs, relative to the task of learning the benefits of any individual program. Thus, the horizontal axis is better labeled "Cumulative annual emissions abatement".

 

Look at that axis again. Imagine you are required to learn the specific benefit of the program titled "Fuel Economy Standards: Cars & SUVs".

Redo_abatement_benefit_bars_end2end_cumlabel

This is impossible to do without pulling out a ruler and a calculator. What the axis labels do tell us is that if all the programs to the left of Fuel Economy Standards: Cars & SUVs were adopted, the cumulative benefits would be 285 million metric tons of CO2 per year. And if Fuel Economy Standards: Cars & SUVs were also implemented, the cumulative benefits would rise to 375 million metric tons.
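The arithmetic is simple once the cumulative reading is understood; here it is spelled out, using the two labels just quoted.

```python
# Cumulative axis labels (million metric tons of CO2 per year),
# as read off the chart.
cum_before = 285  # all programs to the left, adopted together
cum_with = 375    # including Fuel Economy Standards: Cars & SUVs

# The program's own benefit is the difference of adjacent cumulative
# labels - in other words, the width of its column.
benefit = cum_with - cum_before
print(benefit)  # 90
```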

***

At long last, we have arrived at a reasonable interpretation of the cost-benefit chart.

Policy-makers are considering throwing their support behind specific programs aimed at abating CO2 emissions. Different organizations have come up with different ways to achieve this goal. This goal may even have specific benchmarks; the government may have committed to an international agreement, for example, to reduce emissions by some set amount by 2030. Each candidate abatement program is evaluated on both cost and benefit dimensions. Benefit is given by the amount of CO2 abated. Cost is measured as a "marginal cost," the amount of dollars required to achieve each million metric ton of abatement.

This "marginal abatement cost curve" aids the decision-making. It lines up the programs from the most cost-effective to the least cost-effective. The decision-maker is presumed to prefer a more cost-effective program than a less cost-effective program. The chart answers the following question: for any given subset of programs (so long as we select them left to right contiguously), we can read off the cumulative amount of CO2 abated.

***

There are still more limitations of the chart design.

  • We can't directly read off the cumulative cost of the selected subset of programs because the vertical axis is not cumulative. The cumulative cost turns out to be the total area of all the columns that correspond to the selected programs. (Area is height x width, which is cost per benefit multiplied by benefit, which leaves us with the cost.) Unfortunately, it takes rulers and calculators to compute this total area; see the sketch after this list.

  • We have presumed that policy-makers will make the Go-No-go decision based on cost effectiveness alone. This point of view has already been contradicted. Remember the mystery around negatively cost-effective programs - their existence shows that some programs are stalled even when they reduce emissions in addition to making money!

  • Since many, if not most, programs have negative cost-effectiveness (by the way they measured it), I'd flip the metric over and call it profitability (or return on investment). Doing so removes another barrier to our understanding. With the current cost-effectiveness metric, policy-makers are selecting the "negative" programs before the "positive" programs. It makes more sense to select the "positive" programs before the "negative" ones!
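Here is a sketch of that area computation, with hypothetical cost and benefit values; a negative cost-effectiveness term represents a money-saving program.

```python
# Each selected program: (cost-effectiveness in $/ton, benefit in
# tons abated per year). Hypothetical values for illustration.
programs = [(-20.0, 90e6), (15.0, 50e6), (40.0, 30e6)]

# Total cost = sum of column areas = sum(height * width), i.e. cost
# per ton multiplied by tons abated; negative terms save money.
total_cost = sum(height * width for height, width in programs)
total_benefit = sum(width for _, width in programs)
print(total_cost, total_benefit)  # 150000000.0 170000000.0
```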

***

In a Trifecta Checkup (guide), I rate this chart Type V. The chart has a great purpose, and the design reveals a keen sense of the decision-making process. It's not a data dump for sure. In addition, an impressive amount of data gathering and analysis - and synthesis - went into preparing the two data series required to construct the chart. (Sure, for something so subjective and speculative, the analysis methodology will inevitably be challenged by wonks.) Those two data series are reasonable measures for the stated purpose of the chart.

The chart form, though, has various shortcomings, as shown here.  

***

In our email exchange, Antonio and I found the Plan Washington website useful. This is where we learned that this chart is called the marginal abatement cost curve.

Also, the consulting firm McKinsey is responsible for popularizing this chart form. They have published this long report that explains even more of the analysis behind constructing this chart, for those who want further details.


Light entertainment: people of color

What colors does the "average" person like the most and the least? The following chart, found here (Scott Design), tells you favorite and least favorite colors by age group:

Color-preferences-by-age

(This is one of a series of charts. A total of 10 colors is covered by the survey. The same color can appear in both favorites and least favorites since these are aggregate proportions. Almost 40% of the respondents are under 18 and only one percent are over 70.)

Here's one item that has stumped me thus far: how are the colors ordered within each figurine?


Pulling the multi-national story out, step by step

Reader Aleksander B. found this Economist chart difficult to understand.

Redo_multinat_1

Given the chart title, the reader is looking for a story about multinationals producing lower return on equity than local firms. The first item displayed indicates that multinationals out-performed local firms in the technology sector.

The pie charts in the right column provide additional information about the share of each sector by the type of firm. Is there a correlation between the share of multinationals and their performance differential relative to local firms?

***

We can clean up the presentation. The first changes include using dots in place of pipes, removing the vertical gridlines, and pushing the zero line to the background:

Redo_multinat_2

The horizontal gridlines attached to the zero line can also be removed:

Redo_multinat_3

Now, we re-order the rows. Start with the aggregate "All sectors". Then, order sectors from the largest under-performance by multinationals to the smallest.

Redo_multinat_4
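This re-ordering step is easy to script. Here is a minimal pandas sketch with made-up differentials; diff stands for multinational ROE minus local ROE, a name I'm inventing for illustration.

```python
import pandas as pd

# Hypothetical sector data; diff = multinational ROE minus local ROE,
# in percentage points.
df = pd.DataFrame({
    "sector": ["Technology", "Energy", "Consumer", "All sectors"],
    "diff": [2.5, -4.0, -1.5, 0.0],
})

# Keep the aggregate row on top, then sort the remaining sectors from
# the largest under-performance by multinationals to the smallest.
agg = df[df["sector"] == "All sectors"]
rest = df[df["sector"] != "All sectors"].sort_values("diff")
ordered = pd.concat([agg, rest], ignore_index=True)
print(ordered)
```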

The pie charts focus only on the share of multinationals. Taking away the remainders speeds up our perception:

Redo_multinat_5

Help the reader understand the data by dividing the sectors into groups, organized by the performance differential:

Redo_multinat_6

For what it's worth, re-sort the sectors from largest to smallest share of multinationals:

Redo_multinat_7

Having created groups of sectors by share of multinationals, I simplify further by showing the average pie chart within each group:

Redo_multinat_8

***

To recap all the edits, here is an animated gif: (if it doesn't play automatically, click on it)

Redo_junkcharts_econmultinat

***

Judging from the last graphic, I am not sure there is much correlation between share of multinationals and the performance differentials. It's interesting that in aggregate, local firms and multinationals performed the same. The average hides the variability by sector: in some sectors, local firms out-performed multinationals, as the original chart title asserted.