Dual axes: a favorite of tricksters

Twitter readers directed me to this abomination from the St. Louis Fed (link).


This chart is designed to paint the picture that China is a grave threat because it has been ramping up military expenditure so much that it appears to have exceeded U.S. spending since the 2000s.

Sadly, this is not what the data are suggesting at all! This story is constructed by manipulating the dual axes. Someone has already fixed it. Here's the same data plotted with a single axis:


(There are two sets of axis labels but they have the same scale and both start at zero, so there is effectively only one axis.)

Certainly, China has been ramping up military spending. Nevertheless, China's current level of spending is about one-third of America's. Also, imagine America's cumulative spending excess over the 30 years shown on the chart.

Note also that the growth line of U.S. military spending in this period is actually about as steep as China's.


Apparently, the St. Louis Fed is intent on misleading its readers. Even though they acknowledged people's feedback on Twitter, they decided not to alter the chart.


If you click through to the article, you'll find the same flawed chart as before, so I'm not sure how they "listened". I went to the Wayback Machine to check the first version of this page, and I noticed no difference.


If one must make a dual-axes chart, it is the designer's responsibility to make clear to readers that different lines on the chart use different axes. In this case, since the only line that uses the right-hand axis is the U.S. line, which is blue, they should have colored the right-hand axis blue. Doing that does not solve the visualization problem; it merely reduces the chance of readers not noticing the dual axes.
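In matplotlib, for instance, the coloring fix takes only a couple of lines. Here is a minimal sketch with made-up spending figures, not the Fed's actual data:

```python
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

# Illustrative spending series (made-up numbers, not the Fed's data)
years = [1990, 2000, 2010, 2020]
china = [20, 45, 140, 250]  # $bn
us = [300, 320, 700, 780]   # $bn

fig, ax_left = plt.subplots()
ax_left.plot(years, china, color="red")
ax_left.set_ylabel("China spending ($bn)", color="red")
ax_left.tick_params(axis="y", colors="red")

ax_right = ax_left.twinx()  # secondary y-axis sharing the same x-axis
ax_right.plot(years, us, color="blue")
# Color the right-hand axis to match the only line that uses it
ax_right.set_ylabel("U.S. spending ($bn)", color="blue")
ax_right.tick_params(axis="y", colors="blue")
```

Matching the axis color to the line color at least signals which scale belongs to which series, even if it cannot undo the distortion inherent in two scales.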


I have written about dual axes a lot in the past. Here's a McKinsey chart from 2006 that offends.

Painting the corner

Found an old one sitting in my folder. This came from the Wall Street Journal in 2018.

At first glance, the chart looks like a pretty decent effort.

The scatter plot shows Ebitda against market value, both measured in billions of dollars. The placement of the vertical axis title on the far side is a little unusual.

Ebitda is a measure of business profit (something for a different post on the sister blog: the "b" in Ebitda means "before", and allows management to paint a picture of profits without accounting for the entire cost of running the business). In the financial markets, the market value is claimed to represent a "fair" assessment of the value of the business. The ratio of the market value to Ebitda is known as the "Ebitda multiple", which describes the number of dollars the "market" places on each dollar of Ebitda profit earned by the company.
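The arithmetic of the Ebitda multiple can be sketched in a couple of lines; the figures below are illustrative, not taken from the WSJ chart:

```python
# Sketch of the Ebitda multiple; figures are hypothetical
def ebitda_multiple(market_value: float, ebitda: float) -> float:
    """Dollars the market places on each dollar of Ebitda profit."""
    return market_value / ebitda

# Two hypothetical companies with identical Ebitda but different valuations:
print(ebitda_multiple(800, 50))  # -> 16.0 (high growth expectations)
print(ebitda_multiple(200, 50))  # -> 4.0 (lower expectations)
```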

Almost all scatter plots suffer from xyopia: the chart form encourages readers to take an overly simplistic view in which the market cares about one and only one business metric (Ebitda). The reality is that the market value contains information about Ebitda plus lots of other factors, such as competitors, growth potential, etc.

Consider Alphabet vs AT&T. On this chart, both companies have about $50 billion in Ebitda profits. However, the market value of Alphabet (Google's parent company) is about four times that of AT&T. This excess valuation has nothing to do with profitability but is partly explained by the market's view that Google has greater growth potential.


Unusually, the designer chose not to use a log scale. The right side of the following display shows the same chart with a log horizontal axis.

The big market values are artificially pulled toward the middle while the small values are pried apart. As one reads from left to right, the same amount of distance represents more and more dollars. While all data visualization books love log scales, I am not a big fan of them. That's because the human brain doesn't process spatial information this way. We don't tend to think in terms of a continuously evolving scale. Thus, presenting the log view causes readers to underestimate large values and overestimate small differences.
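A small computation makes the compression concrete. The sketch below assumes a hypothetical axis running from $1bn to $1,000bn and shows that equal physical distances on a log axis stand for equal ratios, not equal dollar amounts:

```python
import math

# Fractional position of a value along a log-scaled axis (assumed range 1-1000)
def log_position(x: float, lo: float = 1, hi: float = 1000) -> float:
    return (math.log10(x) - math.log10(lo)) / (math.log10(hi) - math.log10(lo))

# 10 -> 100 spans $90bn; 100 -> 1000 spans $900bn.
# Yet both jumps occupy the same physical distance on the axis:
print(log_position(100) - log_position(10))    # one third of the axis
print(log_position(1000) - log_position(100))  # also one third of the axis
```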

Now let's get to the main interest of this chart. Notice the bar chart shown on the top right, which by itself is very strange. The colors of the bar chart are coordinated with those on the scatter plot, as the colors divide the companies into two groups: "media" companies (old, red) and tech companies (new, orange).

Scratch that. Netflix is found on the scatter plot but colored red, while AT&T and Verizon appear on the scatter plot as orange dots. So it appears that the colors mean different things on different plots. As far as I can tell, on the scatter plot the orange dots are companies with over $30 billion in Ebitda profits.

At this point, you may have noticed the stray orange dot. Look carefully at the top right corner, above the bar chart, and you'll find the orange dot representing Apple. It is by far the most important datum, the company that has the greatest market value and the largest Ebitda.

I'm not sure whether burying Apple in the corner was a feature or a bug. It really makes little sense to insert the bar chart where it is, creating a gulf between Apple and the rest of the companies. This placement draws attention away from the very datum that demands the most attention.




Finding the right context to interpret household energy data

Bloomberg's recent article on surging UK household energy costs, projected over this winter, contains data about which I have long been intrigued: how much energy do different household items consume?

A Twitter follower alerted me to this chart, and she found it informative.

If the goal is to pick out the appliances and estimate the cost of running them, the chart serves its purpose. Because the entire set of data is printed, a data table would have done equally well.

I learned that the mobile phone costs almost nothing to charge: 1p for six hours of charging, which is deemed a "single use" - and which seems like double what a full charge requires. The games console costs 14p for a "single use" of two hours. That might be an underestimate of how much time gamers spend gaming each day.


Understanding the design of the chart needs a bit more effort. Each appliance is measured by two metrics: the number of hours considered to be "single use", and a currency value.

It took me a while to figure out how to interpret these currency values. Each cost is associated with a single use, and the duration of a single use increases as we move down the list of appliances. Since the designer assumes a fixed cost of electricity (shown in the footnote as 34p per kWh), at first it seems like the costs should simply increase from top to bottom. That's not the case, though.

Something else is driving these numbers behind the scenes, namely the intensity of energy use by appliance. The wifi router listed at the bottom is turned on 24 hours a day, and the daily cost of running it is just 6p. Meanwhile, running the fridge and freezer the whole day costs 41p. Thus, the fridge & freezer consumes electricity at a rate almost 7 times that of the router.
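One can back out the implied energy use from the printed costs, using the 34p-per-kWh tariff stated in the chart's footnote. A sketch:

```python
TARIFF = 34  # pence per kWh, from the chart's footnote

def implied_kwh(daily_cost_pence: float) -> float:
    """Energy consumed per day, implied by the daily running cost."""
    return daily_cost_pence / TARIFF

def average_watts(daily_cost_pence: float, hours: float = 24) -> float:
    """Average power draw over the usage window."""
    return implied_kwh(daily_cost_pence) / hours * 1000

router = average_watts(6)   # roughly 7 W
fridge = average_watts(41)  # roughly 50 W
print(round(fridge / router, 1))  # fridge&freezer at ~6.8x the router's rate
```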

The chart uses a split axis, which artificially reduces the gap between 8 hours and 24 hours. Here is another look at the bottom of the chart:



Let's examine the choice of "single use" as a common basis for comparing appliances. Consider this:

  • Continuous appliances (wifi router, refrigerator, etc.) are denoted as 24 hours, so a daily time window is also implied
  • Repeated-use appliances (e.g. coffee maker, kettle) may be run multiple times a day
  • Infrequent use appliances may be used less than once a day

I prefer standardizing to a "per day" metric. If I use the microwave three times a day, the daily cost is 3 x 3p = 9p, which is more than I'd spend on the wifi router running 24 hours. On the other hand, I use the washing machine once a week, so the frequency is 1/7, and the effective daily cost is 1/7 x 36p ≈ 5p, notably lower than the microwave's.
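The standardization above takes only a few lines; the usage frequencies below are my own assumptions, not from the chart:

```python
# Per-day cost = cost per single use x uses per day (frequencies assumed)
appliances = {
    # name: (cost per single use in pence, assumed uses per day)
    "microwave":       (3, 3),
    "washing machine": (36, 1 / 7),  # once a week
    "wifi router":     (6, 1),       # one "use" already spans 24 hours
}

daily_cost = {name: cost * freq for name, (cost, freq) in appliances.items()}
for name, cost in sorted(daily_cost.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {cost:.1f}p per day")
```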

The choice of metric has key implications for the appearance of the chart. The bubble size encodes the relative energy costs. The biggest bubbles are in the heating category, which is no surprise. The next largest bubbles are the tumble dryer, dishwasher, and electric oven. These are generally not used every day, so a "per day" calculation would push them down the ranking.


Another noteworthy feature of the Bloomberg chart is the split legend. The colors divide appliances into five groups based on usage category (e.g. cleaning, food, utility). Instead of the usual color legend printed on a corner or side of the chart, the designer spreads the category labels around the chart. Each label is shown the first time a specific usage category appears on the chart. There is a presumption that the reader scans from top to bottom, which is probably true on average.

I like this arrangement as it delivers information to the reader when it's needed.




Modern design meets dataviz

This chart was submitted via Twitter (thanks John G.).


Perhaps the designer is inspired by this:


That's the Royal Ontario Museum, one of the beautiful landmarks in Toronto.


The chart addresses an interesting question - how much do home buyers over or under-estimate home value?  That said, gathering data to answer this question is challenging. I won't delve into this issue in this post.

Let's ask where readers are looking for data on the chart. It appears that we should use the right edge of each triangle. While the left edge of the red triangle might be useful, the left edges of the other triangles definitely would not contain data.

Note that, like modern architecture, the designer is playing with edges. None of the four right edges is properly vertical - none of the lines cuts the horizontal axis at a right angle. So the data actually reside in the imaginary vertical lines from the apexes to the horizontal baseline.

Where is the horizontal baseline? It's not where it is drawn either. The last number in the series is a negative number and so the real baseline is in the middle of the plot area, where the 0% value is.

The following chart shows (left side) the misleading signals sent to readers and (right side) the proper way to consume the data.


The degree of distortion is quite extreme. Only the fourth value is somewhat accurate, albeit by accident.

The design does not merely perturb the chart; it causes a severe adverse reaction.


P.S. [9/19/2022] Added submitter name.




Another reminder that aggregate trends hide information

The last time I looked at the U.S. employment situation, it was during the pandemic. The data revealed the deep flaws of the so-called "not in labor force" classification. This classification is used to dehumanize unemployed people who are declared "not in labor force," in which case they are neither employed nor unemployed -- just not counted at all in the official unemployment (or employment) statistics.

The reason given for such a designation was that some people just have no interest in working, or even in looking for a job. These are not merely "discouraged" workers - there is a separate category for those. In theory, these people haven't been looking for a job for so long that they are no longer visible to the bean counters at the Bureau of Labor Statistics.

What happened when the pandemic precipitated a shutdown in many major cities across America? The number of "not in labor force" shot up instantly, literally within a few weeks. That makes a mockery of the reason for such a designation. See this post for more.


The data we saw last time was up to April, 2020. That's more than two years old.

So I have updated the charts to show what has happened in the last couple of years.

Here is the overall picture.


In this new version, I centered the chart at the 1990 data. The chart features two key drivers of the headline unemployment rate - the proportion of people designated "invisible", and the proportion of those who are considered "employed" who are "part-time" workers.

The last two recessions have caused structural changes in the labor market. From 1990 to the late 2000s, a period that included the dot-com bust, these two metrics circulated within a small area of the chart. The Great Recession of the late 2000s led to a huge jump in the proportion called "invisible". It also pushed the proportion of part-timers to all-time highs. The proportion of part-timers has since fallen, although this is hard to interpret from the chart alone - because if the newly invisible were previously part-time employed, then the same cause could be responsible for either trend.

Readers of Numbersense (link) might be reminded of a trick used by school deans to pump up their US News rankings. Some schools accept lots of transfer students. This subpopulation is invisible to the US News statisticians since they do not factor into the rankings. The recent scandal at Columbia University also involves reclassifying students (see this post).

Zooming in on the last two years, it appears that the pandemic-related unemployment situation has reversed.


Let's split the data by gender.

American men have been stuck in a negative spiral since the 1990s. With each recession, a higher proportion of men are designated BLS invisibles.


In the grid system set up in this scatter plot, the top right corner is the worst of all worlds - the work force has shrunk, and there are more part-timers among those counted as employed. U.S. men are not exiting this quadrant any time soon.

What about the women?


If we compare 1990 with 2022, the story is not bad. The female work force has gradually returned to the same scale as in 1990, while the proportion of part-time workers has declined.

However, celebrating the above ignores the tremendous gains American women made in the 1990s and 2000s. In 1990, only 58% of women were considered part of the work force - the other 42% were not working but were not counted as unemployed. By 2000, the female work force had expanded to about 60%, with a similar proportion counted as part-time employed as in 1990. That was great news.

The Great Recession of the late 2000s changed that picture. Just like men, many women became invisible to the BLS. The invisible proportion reached 44% in 2015 and has not returned to anywhere near the 2000 level. Fewer women are counted as part-time employed; as I said above, it's hard to tell whether this is because the women exiting the work force previously worked part-time.


The color of the dots in all charts is determined by the headline unemployment number. Blue represents low unemployment. During the 1990-2022 period, there were three moments at which unemployment was reported as 4 percent or lower. These charts are intended to show that an aggregate statistic hides a lot of information. The three times the unemployment rate reached historic lows represent three very different situations, once one considers the size of the work force and the number of part-time workers.


P.S. [8-15-2022] Some more background about the visualization can be found in prior posts on the blog: here is the introduction, and here's one that breaks it down by race. Chapter 6 of Numbersense (link) gets into the details of how unemployment rate is computed, and the implications of the choices BLS made.

P.S. [8-16-2022] Corrected the axis title on the charts (see comment below). Also, added source of data label.

Metaphors give and take

Another submission came in from Euro Twitter. The following chart is probably from Germany:


As JB noted, this chart explains a financial pyramid scheme. I believe the numbers on the left are participants while the numbers on the right are the potential ill-gotten gains per person. The longer the pyramid scheme lasts, the more people participate, the more money flows to the top.

The pyramid is a natural metaphor for visualizing pyramid schemes. The levels of the pyramid correspond to levels of a pyramid scheme - the newly recruited participants expand the base while passing revenues up the pyramid.


The chart fails because it's not really a dataviz. There are exactly three bars that are scaled according to data. Everything else is presented as data labels.

Let's look at the two data series separately:


Each series is exponentially growing (in opposite directions). [Some of the data labels for participants may be incorrect.]

Unfortunately, the triangle is not a good medium for displaying exponential growth. In fact, the triangular structure imposes a linear growth constraint: the width at any level is directly proportional to its distance from the apex. As one traverses down level by level, the width grows linearly - not exponentially.

To illustrate exponential growth, the edge of the triangle cannot be a straight line - it has to be a steep curve!
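A quick computation contrasts what the triangle can show with what the data do. The branching factor of 3 below is an illustrative assumption, not a number from the chart:

```python
# Triangle widths grow linearly with level; pyramid-scheme participants
# grow exponentially (branching factor r is an assumed illustration)
r = 3  # each participant recruits 3 others - hypothetical
levels = range(6)

triangle_width = [1 + k for k in levels]  # linear: what the shape can encode
participants = [r ** k for k in levels]   # exponential: what the data do

for k in levels:
    print(f"level {k}: width {triangle_width[k]}, people {participants[k]}")
# By level 5 the triangle is only 6 units wide but must hold 243 people:
# a straight edge cannot represent the exponential series faithfully.
```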


While natural, the pyramid metaphor is also severely restricting. The choice of chart form has unexpected consequences.


To explain or to eliminate, that is the question

Today, I take a look at another project from Ray Vella's class at NYU.

Rich Get Richer Assignment 2 top

(The above image is a honeypot for "smart" algorithms that don't know how to handle image dimensions which don't fit their shadow "requirement". Human beings should proceed to the full image below.)

As explained in this post, the students visualized data about regional average incomes in a selection of countries. It turns out that remarkable differences in regional income disparity persist between countries, almost all of which are advanced economies.

Rich Get Richer Assignment 2 Danielle Curran_1

The graphic is by Danielle Curran.

I noticed two smart decisions.

First, she came up with a different main metric for gauging regional disparity, landing on a metric that is simple to grasp.

Based on hints given on the chart, I surmised that Danielle computed the change in per-capita income between 2000 and 2015 in the richest and poorest regions, separately for each country. These regional income growth values are expressed in currency, not as indices. She then computed, for each country, the ratio of the two growth values. The end result is a simple metric for each country that describes how much faster income has been growing in the richest region relative to the poorest region.
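My reading of the metric can be sketched as follows; the income figures are hypothetical, not from the student's data:

```python
# Sketch of the metric I believe was computed; numbers are made up
def disparity_ratio(rich_2000: float, rich_2015: float,
                    poor_2000: float, poor_2015: float) -> float:
    """Ratio of per-capita income growth (in currency) in the richest
    region to that in the poorest region, 2000-2015."""
    rich_growth = rich_2015 - rich_2000
    poor_growth = poor_2015 - poor_2000
    return rich_growth / poor_growth

# Hypothetical country: richest region gained 12k, poorest gained 3k
print(disparity_ratio(40_000, 52_000, 15_000, 18_000))  # -> 4.0
```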

One of the challenges of this dataset is the complex indexing scheme (discussed here). Carlos' solution keeps the indices but uses design to facilitate comparisons. Danielle avoids the indices altogether.

The reader is relieved of the need to make comparisons, and so can focus on differences in magnitude. We see clearly that regional disparity is by far the highest in the U.K.


The second smart decision Danielle made is organizing the countries into clusters. She took advantage of the horizontal axis which does not encode any data. The branching structure places different clusters of countries along the axis, making it simple to navigate. The locations of these clusters are cleverly aligned to the map below.


Danielle's effort is stronger on communications while Carlos' effort provides more information. The key is to understand who your readers are. What proportion of your readers would want to know the values for each country, each region and each year?


A couple of suggestions

a) The reference line should be set at 1, not 0, for a ratio scale. The value of 1 happens when the richest region and the poorest region have identical per-capita incomes.

b) The vertical scale should be fixed.

Distorting perception versus distorting the data

This chart appears in the latest ("last print issue") of Schwab's On Investing magazine:


I know I don't like triangular charts, and in this post, I attempt to verbalize why.

It's not the usual complaint of distorting the data. When the base of the triangle is fixed and only the height is varied, the area is proportional to the height, and thus nothing is distorted.

Nevertheless, my ability to compare those triangles pales in comparison to the following columns.


This phenomenon is not limited to triangles. One can take columns and start varying the width, and achieve a similar effect:


It's really the aspect ratio - the relationship between the height and the width - that's the issue.


Interestingly, with an appropriately narrow base, even the triangular shape can be saved.


In a sense, we can think of the width of these shapes as noise, a distraction - because the width is constant, and not encoding any data.

It's like varying colors for no reason at all. It introduces a pointless dimension.


It may be prettier but the colors also interfere with our perception of the changing heights.

Stumped by the ATM

The neighborhood bank recently installed brand new ATMs, with tablet monitors and all that jazz. Then, I found myself staring at this screen:


I wanted to withdraw $100. I ordinarily love this banknote picker because I can get the $5, $10, $20 notes, instead of $50 and $100 that come out the slot when I don't specify my preference.

Something changed this time. I found myself wondering which row represents which note. For my non-U.S. readers, you may not know that all our notes are the same size and color. The screen resolution wasn't great, and I had to squint really hard to see the numbers on those banknote images.

I suppose if I grew up here, I might be able to tell the note values from the figureheads. This is an example of a visualization that makes my life harder!

I imagine the software developer might be a foreigner, perhaps one living in Europe. In that case, the developer might have this image in his or her head:


Euro banknotes are heavily differentiated - by color, by image, by height and by width. The numeric value also occupies a larger proportion of the area. This makes a lot of sense.

I like designs to be adaptable. Switching data from one country to another should not alter the design. Switching data at different time scales should not affect the design. This banknote picker UI is not adaptable across countries.


Once I figured out the note values, I learned another reason why I couldn't tell which row is which note. It's because one note is absent.


Where is the $10 note? That and the twenty are probably the most frequently used. I am also surprised people want $1 notes from an ATM. But I assume the bank knows something I don't.

Tip of the day: transform data before plotting

The Financial Times called out a Twitter user for some graphical mischief. Here are the two charts illustrating the plunge in Bitcoin's price last week. (Hat tip to Mark P.)


There are some big differences between the two charts. The left chart depicts this month's price action, drawing attention to the last week, while the right chart shows a much longer period of time, starting from 2012. The author of the tweet apparently wanted to say that the recent drop is nothing to worry about.

The Financial Times reporter noted another subtle difference - the right chart uses a log scale while the left chart is linear. Specifically, it's a log 2 scale, which means that each step up is double the previous number (1, 2, 4, 8, etc.). The effect is to make large changes look smaller. Presumably most readers fail to notice the scale. Even if they do, it's not natural to assign different differences to the same physical distances.



These price charts always miss the mark. That's because the current price is insufficient to capture whether a Bitcoin investor made money or lost money. If you purchased Bitcoins this month, you lost money. If your purchase was a year ago, you still made quite a bit of money despite the recent price plunge.

The following chart should not be read as a time series, even though the horizontal axis is time. Think date of Bitcoin purchase. This chart tells you how much $1 of Bitcoin, purchased on a given day, was worth last week.
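The transformation is simple: divide last week's price by the price on each purchase date. A sketch with made-up prices (not actual Bitcoin quotes):

```python
# Value last week of $1 of Bitcoin, by date of purchase (prices are made up)
prices = {
    "2020-09-01": 10_000,
    "2021-01-01": 30_000,
    "2021-05-01": 55_000,
    "2021-05-20": 40_000,  # "last week": the valuation date
}
current = prices["2021-05-20"]

# $1 invested on day d bought 1/price(d) coins, now worth current/price(d)
value_of_one_dollar = {day: current / price for day, price in prices.items()}
for day, value in value_of_one_dollar.items():
    print(day, round(value, 2))
# Early buyers multiplied their money; recent buyers are in the red.
```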


People who bought this year have mostly been in the red. Those who purchased before October 2020 and held on are still very pleased with their decision.

This example illustrates that simple transformations of the raw data yield graphics that are much more informative.