Logging a sleight of hand

Andrew puts up an interesting chart submitted by one of his readers (link):

Gelman_overnightreturns_tsla

Bruce Knuteson, who created this chart, is pursuing a theory that something fishy is going on in the stock markets overnight (i.e. between the close of one day and the open of the next day). He split the price data into two interleaved parts: the blue line represents overnight returns and the green line represents intraday returns (from the open of one day to the close of the same day). In this example concerning Tesla's stock, the overnight "return" is an eye-popping 36,850% while the intraday "return" is -46%.

This is an example of an average masking interesting details in the data. One typically looks at the entire sequence of values at once, while this analysis breaks it up into two subsequences. I'll write more about the data analysis at a later point. This post will be purely about the visualization.

***

It turns out that while the chart looks like a standard time series, it isn't. Bruce wrote out the following essential explanation:

Gelman_overnightreturns

The chart can't be interpreted without first reading this note.

The left chart (a) is the standard time-series chart we're used to. It plots the cumulative percentage change in the value of the investment over time. Imagine buying $1 of Apple stock on day 1: the chart shows the cumulative return on day X, expressed as a percentage of the initial investment. As mentioned above, the data series was split into two: the intraday return series (green) is dwarfed by the overnight return series (blue), and is barely visible, hugging the horizontal axis.

Almost without thinking, a graphics designer applies a log transform to the vertical axis. This has the effect of "taming" the extreme values in the blue line. That is the key design change in the middle chart (b). The other change is a switch to absolute values: the day 1 number is now $1, so the day X number shows the cumulative value of the investment on day X if one started with $1 on day 1.

There's a reason why I emphasized the log transform over the switch to absolute values: the relationship between absolute and relative values here is linear. If y(t) is the absolute cumulative value of $1 at time t, then the percent return is r(t) = 100*(y(t) - 1). (Note that y(0) = 1 by definition.) The shape of the middle chart is thus primarily determined by the log transform.
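
To make the two-series split concrete, here is a minimal sketch in Python. The data layout (a pandas DataFrame of daily open and close prices) and all numbers are my assumptions for illustration, not Bruce's actual code or data:

```python
import pandas as pd

# Illustrative prices only -- one row per trading day
prices = pd.DataFrame({
    "open":  [10.0, 10.8, 11.5, 11.2],
    "close": [10.5, 11.0, 11.3, 10.9],
})

# Intraday leg: open of day t to close of day t
intraday = (prices["close"] / prices["open"]).cumprod()
# Overnight leg: close of day t-1 to open of day t (day 1 has no overnight leg)
overnight = (prices["open"] / prices["close"].shift(1)).fillna(1).cumprod()

values = pd.DataFrame({"intraday": intraday, "overnight": overnight})
# values.plot(logy=True)  # chart (b): the log scale tames the extreme values
print(values)             # cumulative value y(t) of $1 for each leg
```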

In the right chart (c), which is the design Bruce features in all his work, the visual elements of chart (b) are retained, but the vertical axis labels are replaced with those from chart (a). In other words, the lines show the cumulative absolute values while the labels show the cumulative relative percent returns.

I left this note on Gelman's blog (corrected a mislabeling of the chart indices):

I'm interested in the sleight of hand related to the plots, also tying this back to the recent post about log scales. In plot (b) [middle of the panel], he transformed the data to show the cumulative value of the investment assuming one puts $1 into the stock on day 1. He applied a log scale to the vertical axis. This is fine. Then in plot (c), he retained the chart but changed the vertical axis labels so that instead of the absolute value of the investment, they show percent changes relative to the initial value.

Why didn't he just plot the relative percent changes? Let y(t) be the absolute value series; then r(t) = the percent change = 100*(y(t) - 1) is a simple linear transformation of y(t). This is where the log transform creates problems! The y(t) series is guaranteed to be positive, since hitting y(t) = 0 means the entire investment is lost. However, the r(t) series can go negative and can cross zero many times. Thus, log r(t) is inoperable. The problem is using the log transform on data that are not always positive, and the sleight of hand does not fix it!

Just pick any day on which the cumulative value fell below $1, e.g. the last day of the plot, on which the value of the investment was down to $0.80. In the middle plot (b), the value depicted is ln(0.8) ≈ -0.22. Note that the plot is in log scale, so what is labeled as $1 is really ln(1) = 0. If we instead tried to plot the relative percent changes, the day 1 number would be ln(0), which is undefined, while the last number would be ln(-20%), which is also undefined.
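
A quick numpy check makes the asymmetry concrete, using the $0.80 example above:

```python
import numpy as np

y_last = 0.8                 # cumulative value of $1 on the last day
print(np.log(y_last))        # -0.223..., plottable on a log scale
r_last = 100 * (y_last - 1)  # percent return on the last day: -20
print(np.log(r_last))        # nan (invalid-value warning): not plottable
r_first = 0.0                # percent return on day 1 is zero by definition
print(np.log(r_first))       # -inf (divide-by-zero warning): not plottable
```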

This is another example of something uncomfortable about using log scales, which I pointed out in this post: the idea that when we make log plots, we can freely substitute axis labels that are not directly proportional to the actual values. It's plotting one thing and labeling it something else. The labels are then disconnected from the visual encoding, which defeats the goal of visualizing data.

Aligning the visual and the message

Today's post is about work by Diane Barnhart, a product manager at Bloomberg who is taking Ray Vella's infographics class at NYU. The class was given a chart from the Economist, as well as data on GDP per capita at the regional level in selected countries. The students were asked to produce a data visualization that explores the change in income inequality (as indicated by GDP per capita).

Here is Diane's work:

Diane Barnhart_Rich Get Richer

In this chart, the key measure is the GDP per capita of different regions in Germany relative to the national average. Hamburg, for example, had a GDP per capita 80% above the national average in 2000, while Leipzig's was 30% below. (This metric is a bit of a head scratcher, and forms the basis of the Economist chart.)

***

Diane made several insightful design choices.

The key insight of this graph is also one of the easiest to see: the narrowing of the range of values. In 2000, the top value is about 90% while the bottom is under -40%, making a range of 130%. By 2020, the range had narrowed to 90%, with values falling between 60% and -30%. In other words, the gap between rich and poor regions in Germany shrank over these two decades.

The chosen chart form makes this message come alive.

Diane divided the regions into three groups, mapped to the black, red and yellow of the German flag. Black is for regions with GDP per capita above the national average; yellow is for regions with GDP per capita more than 25% below the average; red is for those in between.

Instead of applying color to individual lines that trace the GDP metric over time for each region, she divided the area between the lines into three bands, and painted them. This requires defining the boundary lines between colored areas over time. I gathered that she classified the regions using the latest GDP data (2020) and then traced the GDP trend lines back in time. Other definitions are also possible.
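
For the record, here is a sketch of the classification rule as I understand it; the data values are hypothetical, and the rule itself is my guess at Diane's method, not her actual procedure:

```python
import pandas as pd

# Hypothetical data: rows = regions, columns = years,
# values = GDP per capita in % above/below the national average
gdp = pd.DataFrame({2000: [80, -30, 10], 2020: [60, -28, -10]},
                   index=["Hamburg", "Leipzig", "Bremen"])

# Classify each region by its latest (2020) value; the whole trace
# back to 2000 is then painted with the assigned color
color = pd.cut(gdp[2020],
               bins=[float("-inf"), -25, 0, float("inf")],
               labels=["yellow", "red", "black"])
print(color)
```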

The two-column data table on the right provides further details that aren't found in the data visualization. The table is nicely enhanced with colors, which augment the information in the main chart rather than repeat it.

All in all, this is a delightful project, and worthy of a top grade!


Coffee in different shapes and sizes: a test of self-sufficiency

Take a look at the following graphic showing the top producers of coffee in 2024:

Junkcharts_voronoicoffeeproduction

Then, try the following tasks:

  • Which country is the top producer?
  • What proportion of the world's production does the top country make?
  • Which countries form the top three?
  • How much is the "Rest of the World" compared to Brazil?
  • How many countries account for the top 50% of the world's production?
  • Does Indonesia or Colombia produce more coffee?
  • Compare India and Uganda
  • How about Honduras vs Peru?

I finished two cups of coffee and still couldn't answer most of these questions. How about you?

***

Now, let's look at the original chart, published by Voronoi, and sent to me by a long-time reader:

Visualcapitalist_coffee

Try those questions again, and the answers seem much more accessible.

How so?

What we've just demonstrated is that when readers take information from this graphic, they are consuming the data labels; the visual encoding of data to shapes has offered zero help.

Given this finding, replacing the above chart with a data table would have achieved the same result, if not expedited understanding.

***

I'm using this graphic to illustrate my "self-sufficiency" test: by removing all data labels from the chart, we reveal how much work the visual elements are doing to enable understanding of the message and the underlying data.

***

Now, our long-time reader has a few comments, with which I agree:

  • what they did right: avoided the "let's just use a choropleth" trap
  • what went wrong? a) using shapes you can't compare at a glance
  • what went wrong? b) no color difference between the shapes
  • what went wrong? c) it looks like larger values are on top, except for Mexico which is squeezed up top for some reason

Patiently looking

Voronoi (by Visual Capitalist) made this map about service times at emergency rooms around the U.S.

Voronoi_EmergencyRoomWaitTImes

This map shows why one shouldn’t just stick state-level data into a state-level map by default.

The data are median service times, defined as the duration of a visit from the moment a patient arrives to the moment they leave. For reasons explained below, I don't like this metric. The data are expressed in hours and minutes, and encoded in the color scale.

As with any choropleth, the dominant features of this map are the shapes and sizes of the various pieces, but these don't carry any data. The eastern seaboard contains many states that are small in area but dense in population, and it always produces a messy, crowded smorgasbord of labels and guiding lines.

The color scale is progressive (continuous), making it even harder to get an appreciation of the spatial pattern. For the sake of argument, imagine a truly continuous color scale tuned to the median service times in minutes. There would be as many shades as there are unique values on the map. For example, a state with a 2 hr 12 min median time would receive a different shade from one with 2 hr 11 min. Looking at the dataset, I found 43 unique values of median service time across the 52 states and territories. Thus, almost every state would wear its own shade, making it hard to answer such common questions as: which clusters of states have high/medium/low median service times?

(Since the underlying software can only print a finite number of shades, in reality there is no truly continuous scale; a continuous scale is just a discrete scale with many levels of shades. For this map, I'd group the states into at most five categories, requiring five shades.)

***

We’re now reaching the D corner of the Trifecta Checkup (link). _trifectacheckup_image

I’d transform the data to relative values, such as an index against the median or average in the nation. The colors now indicate how much higher or lower is the state’s median service time than that of the nation. With this transformed data, it makes more sense to use a bidirectional color scale so that there are different colors for higher vs lower than average.

Lastly, I’m not sure about the use of median service time, as opposed to average (mean) service time. I suspect that the distribution is heavily skewed toward longer values so that the median service time falls below the mean service time. If, however, the service time distribution is roughly symmetric around the median, then the mean and median service times will be very similar, and thus the metric selection doesn’t matter.

Imagine you're a healthcare provider whose bonus is based on managing the median service time. You have an incentive to let a small number of patients wait an extraordinarily long time while serving a bunch of patients who require relatively simple procedures. With a mean service time, the magnitudes of the extreme outliers are spread over all the patients, while the median service time is affected by the number of such outliers but not their magnitudes.
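
A toy calculation shows the difference (numbers invented for illustration):

```python
import numpy as np

# Nine quick visits plus one extreme outlier, in hours
times = np.array([1, 1, 1, 1, 1, 1, 1, 1, 1, 20])
print(np.median(times), times.mean())  # 1.0, 2.9

# Stretching the outlier moves the mean but not the median
times[-1] = 50
print(np.median(times), times.mean())  # 1.0, 5.9
```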

When I pulled down the publicly available data (link), I found additional data fields. The emergency room visits are further broken into four categories (low, medium, high, very high), and a median is reported within each category. Thus, we get a little idea of how extreme the top values can be.

The following dotplot shows this:

Junkcharts_redo_voronoi_emergencyrooms

A chart like this is still challenging to read, since there are 52 territories ordered by their values on one metric. If the analyst can specify the interesting questions, e.g. breaking up the territories into regions, then a grouping can be applied to the above chart to aid comprehension.

Tidying up the details

This column chart caught my attention because of the color labels.

Thall_financials2023_pandl

Well, it also concerns me that the chart takes longer to take in than you'd think.

***

The color labels say "FY2123", "FY2022", and "FY1921". It's possible but unlikely that the author is making comparisons across centuries: the year 2123 hasn't yet arrived, so that interpretation would map the three categories to the long-ago past, the present, and the far future.

Perhaps hyphens were inadvertently left off, so "FY2123" means "FY2021 - FY2023". It's odd to report financial metrics in multi-year aggregations, and I rule this out because the three categories would then overlap.

Here's what I think the mistake is: somehow the century prefix got rolled forward as it was applied to the years. "FY23", "FY22", "FY21" got turned into "FY[21]23", "FY[20]22", "FY[19]21", instead of putting 20 in all three slots.
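
My conjecture can be reproduced in a couple of lines; this is speculation about the mechanism, not the author's actual code:

```python
years = [23, 22, 21]

# Buggy: the century prefix is stepped along with the two-digit year
buggy = [f"FY{21 - i}{y}" for i, y in enumerate(years)]
print(buggy)  # ['FY2123', 'FY2022', 'FY1921'] -- matches the chart's labels

# Fixed: the century prefix stays at 20
fixed = [f"FY20{y}" for y in years]
print(fixed)  # ['FY2023', 'FY2022', 'FY2021']
```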

The chart appeared in an annual financial report, and the comparisons were mostly about the reporting year versus the year before so I'm pretty confident the last two digits are accurately represented.

Please let me know if you have another key to this puzzle.

In the following, I'm going to assume that the three colors represent the most recent three fiscal years.

***

A few details conspire to blow up our perception time.

There was no extra spacing between groups of columns.

The columns are arranged in reverse time order, with the most recent year shown on the left. (This confuses those of us who read time from left to right.)

The colors are not ordered. If asked to sort the three colors, you will probably suggest what is described as "intuitive" below:

Junkcharts_color_order

The intuitive order aligns with the amount of black added to a base color (hue). But this isn't the order assigned to the three years on the original chart.

***

Some of the other details on the chart are well done. For example, I like the handling of the gridlines and the axes.

The following revision tidies up some of the details mentioned above, without changing the key features of the chart:

Junkcharts_redo_trinhallfinancials

Making colors and groups come alive

_numbersense_cover

In the May 2024 issue of Significance, there is an enlightening article (link, paywall) about a new measure of inflation being adopted by the U.K. government, known as HCI (Household Costs Indices). It is expected to replace CPI, the de facto standard measure used around the world. In Chapter 7 of Numbersense (link), I discuss the construction of the CPI, which critics have alleged is manipulated by public officials to be over-optimistic.

The HCI looks promising, as it addresses several weaknesses of the CPI. First, it accounts for household spending on housing - this has always been a tricky subject, especially regarding those who own homes rather than rent. Second, it recognizes that the average inflation number, which represents the average price changes on the average basket of goods purchased by the average person, does not reflect the experience of many. The HCI measures are broken down by demographic subgroups, so it's possible to compare the HCI of retirees vs non-retirees, for example.

Then comes this multi-colored bar chart:

Sig_hci sm

***

The chart is serviceable: the reader can find the story. For almost all the subgroups listed, the HCI measure comes in higher than the CPI measure (black). For the income deciles, the reader senses that the relationship is not linear, that is to say, inflation does not increase (or decrease) with income. It appears that inflation is highest at both ends of the spectrum, and lowest for those in deciles 6 to 8. The only subgroup for which CPI overestimates inflation is "private renter," which makes total sense since the CPI previously did not account for "owner-occupier housing" costs.

This is a chart with 19 bars and 19 colors. The colors do not encode any data at all, which is a bit wasteful. We can make the colors come alive by encoding subgroup identity. This is what the grouped bar chart looks like:

Junkcharts_redo_sig_hci_grouped_bars

While this is still messy, this version makes it a bit easier to compare across subgroups. The chart simultaneously plots four different grouping methods: by retired/not, by income decile, by housing situation, and by having children/not. Within each grouping, the segments are mutually exclusive, but between groupings, the segments overlap. For example, the same person can be counted in both Retired and Has Children, since some retirees have children while others don't.

***

To better display the interactions between groups and subgroups, I prefer using a dot plot.

Junkcharts_redo_sig_hci_dots

This is not a simple dot plot either. It's a grouped dot plot with four levels corresponding to the four grouping methods. One can see the distribution of HCI values across the subgroups within each grouping, and also compare the range of values from one grouping to another.

One side benefit of the dot plot is that it gets rid of the non-informative space between the values 0 and 20. With a bar chart, we have to start the bars at zero to avoid distorting the encoding. Not so for a dot plot.
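
A minimal matplotlib sketch of this point, with invented HCI values (the real chart has many more subgroups):

```python
import matplotlib.pyplot as plt

subgroups = ["Retired", "Not retired", "Decile 1", "Decile 6", "Private renter"]
hci = [24.5, 22.0, 25.0, 21.0, 20.5]   # illustrative % values only

fig, ax = plt.subplots()
ax.scatter(hci, range(len(subgroups)))
ax.set_yticks(range(len(subgroups)), labels=subgroups)
ax.set_xlim(20, 26)   # dots can start the axis at 20; bars could not
ax.set_xlabel("inflation (%)")
plt.show()
```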

P.S. In the next iteration, I'd consider flipping the axes as that might simplify labeling the subgroups.

Pie charts and self-sufficiency

This graphic, which shows up in a recent issue of the Princeton alumni magazine, contains a series of pie charts.

Pu_aid sm

The story being depicted is clear: the school has been generously increasing the amount of financial aid given to students since 1998. The proportion receiving any aid went from 43% to 67%, so about two out of three students who enrolled in 2023 are getting aid.

The key components of the story are the values in 1998 and 2023, and the growth trend over this period.

***

Here is an exercise worth doing. Think about how you figured out the story components.

Is it this?

Junkcharts_redo_pu_aid_1

Or is it this?

Junkcharts_redo_pu_aid_2

***

This is what I've been calling a "self-sufficiency test" (link): how much work are the visual elements doing to convey the graph's message to you? If the visual elements aren't doing much, then the designer hasn't taken advantage of the visual medium.


When should we use bar charts?

Significance_13thfl sm

Two innocent-looking column charts.

These came from an article in Significance magazine (link to paywall) that applies the "difference-in-differences" technique to analyze whether the superstitious act of skipping the number 13 when numbering floors in tall buildings causes an inflation of condo prices.

The study authors are quite careful in their analysis, recognizing that building managers who decide to relabel the 13th floor as the 14th may differ in other systematic ways from those who don't. They use a matching technique to construct comparison groups. The left-side chart shows one effect of matching buildings: it narrowed the gap in average square footage between the relabeled and non-relabeled groups. (Any such gap suggests potential confounding; in a hypothetical randomized experiment, the average square footage of the two groups should be statistically identical.)

The left-side chart features columns that don't start at zero, so the visualization exaggerates the differences. The degree of exaggeration here is tame: about 150 got chopped off at the bottom, which is about 10% of the total height. But why do it at all?

***

The right-side chart is even more problematic.

This chart shows the effect of matching on the average age of the buildings (measured by the average construction year). Again, the columns don't start at zero. But for this dataset, zero is a meaningless value. Never make a column chart when the zero level has no meaning!

The story is simple: by matching, the average construction year in the relabeled group was brought closer to that in the non-relabeled group. The construction year is an ordinal variable with integer values. I think a comparison of two histograms would show the message more clearly, and also provide more information than just the two averages.
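
For instance, a two-histogram comparison might be sketched like this, with invented construction years standing in for the study's data:

```python
import matplotlib.pyplot as plt

# Illustrative construction years for the two groups (not the study's data)
relabeled = [1975, 1982, 1988, 1990, 1995, 1999, 2003, 2008]
not_relabeled = [1960, 1968, 1973, 1979, 1985, 1991, 1996, 2001]

bins = range(1955, 2015, 5)   # five-year bins
fig, ax = plt.subplots()
ax.hist(not_relabeled, bins=bins, alpha=0.5, label="not relabeled")
ax.hist(relabeled, bins=bins, alpha=0.5, label="relabeled")
ax.set_xlabel("construction year")
ax.legend()
plt.show()
```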


Adjust, and adjust some more

This Financial Times report illustrates why we should adjust data.

The story explores trends in economic statistics during 14 years of Conservative government. One of these metrics is so-called council funding (funding for local governments). The graphic is interactive: as the reader scrolls the page, the chart transforms.

The first chart shows the "raw" data.

Ft_councilfunding1

The vertical axis shows the change in funding, expressed as an index relative to the 2010 level. From this line chart, one concludes that council funding decreased from 2010 to around 2016, then grew; by 2020, funding had recovered to its 2010 level, and it expanded rapidly in recent years.

When the reader scrolls down, this chart is replaced by another one:

Ft_councilfunding2

This chart paints a completely different picture. The line dropped from 2010 to 2016 as before. Then it went flat, and after 2021 it started rising, though by 2024 the value was still 10 percent below the 2010 level.

What happened? The data journalist took the data from the first chart and adjusted the values for inflation. Inflation was rampant in recent years, so some of the raw growth has been dampened. In economics, adjusting for inflation is also called expressing values in "real terms". The adjustment is necessary because the same dollar (hmm, pound) is worth less when there is inflation. Therefore, even though on paper council funding in 2024 is more than 25 percent higher than in 2010, inflation has gobbled up all of that and more, to the point where, in real terms, council funding has fallen by 20 percent.

This is one material adjustment!

Wait, they have a third chart:

Ft_councilfunding3

It's unfortunate they didn't stabilize the vertical scale: relative to the middle chart, the lowest point in this third chart is about 5 percent lower, while the 2024 value is about 10 percent lower.

This means they performed a second adjustment - for population change. It is a simple adjustment: divide by the population. The numbers look worse presumably because the population grew during these years. Even if the amount of funding had stayed the same, the money would have to be split among more people. The per-capita adjustment makes this point clear.
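
Both adjustments are mechanically simple. Here is a sketch with invented figures (not the FT's data):

```python
import pandas as pd

funding = pd.Series({2010: 100.0, 2016: 85.0, 2024: 126.0})   # nominal index
deflator = pd.Series({2010: 1.00, 2016: 1.12, 2024: 1.55})    # price level vs 2010
population = pd.Series({2010: 1.00, 2016: 1.05, 2024: 1.12})  # population vs 2010

real = funding / deflator          # adjust for inflation ("real terms")
per_capita = real / population     # then adjust for population growth

print((100 * real / real.loc[2010] - 100).round(1))            # % vs 2010, real
print((100 * per_capita / per_capita.loc[2010] - 100).round(1))
```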

***

The final story is much different from the initial one. Not only is the magnitude of change different, but the direction of change is reversed.

When it comes to adjustments, remember that all adjustments are subjective. In fact, choosing not to adjust is also subjective. Not adjusting is usually much worse.

One doesn't have to plot raw data

Visual Capitalist chose a treemap to show us where gold is produced (link):

Viscap_gold2023

The treemap is embedded in a brick of gold. Any treemap is difficult to read, mostly because some blocks are vertical and others horizontal. A rough understanding is nevertheless possible: the entire global production can be roughly divided into four parts. China plus three other Asian producers account for roughly (not quite) a quarter; the "rest of the world" (i.e. all countries not individually listed) is a quarter; Russia and Australia together are again a bit less than a quarter.

***

When I look at datasets that rank countries by some metric, I hope to present insights rather than the raw data. Insights typically involve comparing countries, sets of countries, or one country against a set of countries. So I made the following chart, which includes some of the insights I found in the gold production dataset:

Junkcharts_redo_viscap_gold2023

For example, the top 4 producers in Asia account for almost a quarter of the world's output; Canada, the U.S. and Australia together also produce roughly a quarter; and the rest of the world has a similar output. Within Asia, China's output is about the sum of the next 3 producers', which is about the same as the U.S. and Canada combined, which is about the same as the top 5 in Africa.
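
Such grouped comparisons are straightforward to compute from country-level data. A sketch with invented tonnages (not the real 2023 figures):

```python
import pandas as pd

# Illustrative production figures only
output = pd.Series({"China": 370, "Indonesia": 110, "Uzbekistan": 100,
                    "Kazakhstan": 80, "Australia": 310, "Russia": 310,
                    "Canada": 200, "United States": 170, "Rest of World": 850})

groups = {"China": "Asia top 4", "Indonesia": "Asia top 4",
          "Uzbekistan": "Asia top 4", "Kazakhstan": "Asia top 4",
          "Australia": "CAN+USA+AUS", "Canada": "CAN+USA+AUS",
          "United States": "CAN+USA+AUS",
          "Russia": "Russia", "Rest of World": "Rest of World"}

share = 100 * output.groupby(groups).sum() / output.sum()
print(share.round(1))   # each group's share of world output, in %
```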