It's a new term, and my friend Ray Vella shared some student projects from his NYU class on infographics. There's always something to learn from these projects.
The starting point is a chart published in the Economist a few years ago.
This is a challenging chart to read. To save you the time, the following key points are pertinent:
a) income inequality is measured by the disparity between regional averages
b) the incomes are given as a double index, a relative measure. For each country-year combination, the average national GDP is set to 100. A value of 150 for the richest region of Spain in 2015 means that region's average income was 50% higher than Spain's national average that year.
The original chart - as well as most of the student work - is based on a specific analysis plan. The difference in the index values between the richest and poorest regions is used as a measure of the degree of income inequality, and the change in that difference over time, as a measure of the change in income inequality. That's a mouthful, and the chart reads as awkwardly as the description sounds.
This analysis plan can be summarized as:
1) all incomes -> relative indices, at each region-year combination
2) inequality = rich - poor region gap, for each country-year combination
3) inequality over time = inequality in 2015 - inequality in 2000, for each country
4) country difference = inequality in country A - inequality in country B, for each year
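For concreteness, the original plan can be sketched in a few lines of code. The incomes below are made-up numbers for one hypothetical country, not the Economist's data:

```python
# Sketch of the original analysis plan, using hypothetical average
# incomes for the richest and poorest regions of one country.
data = {
    2000: {"richest": 30000, "poorest": 18000, "national": 24000},
    2015: {"richest": 45000, "poorest": 22000, "national": 30000},
}

def to_index(income, national):
    """Step 1: express income relative to the national average (= 100)."""
    return 100 * income / national

inequality = {}
for year, d in data.items():
    rich_idx = to_index(d["richest"], d["national"])
    poor_idx = to_index(d["poorest"], d["national"])
    # Step 2: inequality is the rich-poor gap in index points.
    inequality[year] = rich_idx - poor_idx

# Step 3: change in inequality over time, for this country.
change = inequality[2015] - inequality[2000]
print(inequality)  # gap of 50 points in 2000, roughly 77 points in 2015
print(change)      # inequality widened by roughly 27 index points
```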
One student, J. Harrington, looks at the data through an alternative lens that brings clarity to the underlying data. Harrington starts with change in income within the richest regions (then the poorest regions), so that a worsening income inequality should imply that the richest region is growing incomes at a faster clip than the poorest region.
This alternative analysis plan can be summarized as:
1) change in income over time for richest regions for each country
2) change in income over time for poorest regions for each country
3) inequality = change in income over time: rich - poor, for each country
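Harrington's plan can be sketched the same way, again with hypothetical index values (national average = 100):

```python
# Sketch of the alternative analysis plan, on hypothetical index values
# for the richest and poorest regions of one country.
richest = {2000: 125, 2015: 150}
poorest = {2000: 75, 2015: 73}

# Steps 1 & 2: change in relative income over time within each group.
rich_change = richest[2015] - richest[2000]   # richest region gained ground
poor_change = poorest[2015] - poorest[2000]   # poorest region lost ground

# Step 3: inequality worsens when the richest region grows faster.
inequality_change = rich_change - poor_change
print(rich_change, poor_change, inequality_change)
```

Note that the end result matches the original plan's change-in-gap number; what differs is the order of operations, and thus what each intermediate quantity means to the reader.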
The restructuring of the analysis plan makes a big difference!
Here is one way to show this alternative analysis:
The underlying data have not changed but the reader's experience is transformed.
A Twitter user alerted me to this chart put out by the Biden administration trumpeting a reduction in the budget deficit from 2020 to 2021:
This column chart embodies a form that is popular in many presentations, including in scientific journals. It's deficient in so many ways it's a marvel how it continues to live.
There are just two numbers: -3132 and -2772. Their difference is $360 billion, just over 10 percent of the earlier number. It's not clear what any data graphic can add.
Indeed, the chart does not do much. It obscures the actual data. What is the budget deficit in 2020? Readers must look at the axis labels, and judge that it's about a quarter of the way between 3,000 and 3,500. Five hundred quartered is 125, so it's roughly $3.125 trillion. Similarly, the 2021 number sits slightly above the halfway point between 2,500 and 3,000, or roughly $2.77 trillion.
These numbers are upside down. Taller columns are bad! Shortening the columns is good. It's all counterintuitive.
Column charts encode data in the heights of the columns. The designer apparently wants readers to believe the deficit has been cut by about a third.
As usual, this deception is achieved by cutting the column chart off at its knees. Removing equal sections of each column destroys the proportionality of the heights.
Why hold back? Here's a version of the chart showing the deficit was cut by half:
The relative percent reduction depends on where the baseline is placed. The only defensible baseline is the zero baseline. That's the only setting under which the relative percent reduction is accurately represented visually.
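The distortion is easy to quantify. Here is a sketch that computes the apparent (visual) reduction for different baselines; the 2050 and 2412 baselines below are chosen simply to reproduce the roughly one-third and one-half illusions:

```python
# With values of 3132 and 2772 (billions), the true reduction is about
# 11.5%, but a truncated axis can make it look arbitrarily large.

def apparent_reduction(v1, v2, baseline):
    """Visual percent reduction when columns are drawn from `baseline`."""
    h1, h2 = v1 - baseline, v2 - baseline
    return 100 * (h1 - h2) / h1

deficit_2020, deficit_2021 = 3132, 2772  # magnitudes, in $billions

print(apparent_reduction(deficit_2020, deficit_2021, 0))     # ~11.5% (honest)
print(apparent_reduction(deficit_2020, deficit_2021, 2050))  # ~33% (the chart's look)
print(apparent_reduction(deficit_2020, deficit_2021, 2412))  # ~50%
```

The absolute difference of 360 never changes; only the denominator (the drawn column height) shrinks as the baseline rises.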
This same problem presents itself subtly in Covid-19 vaccine studies. I explain in this post, which I rate as one of my best Covid-19 posts. Check it out!
Here's a beauty by WSJ Graphics:
The article is here.
This data graphic illustrates the power of the visual medium. The underlying dataset is complex: power production by type of source by state by month by year. That's more than 90,000 numbers. They all reside on this graphic.
Readers amazingly make sense of all these numbers without much effort.
It starts with the summary chart on top.
The designer made decisions. The data are presented in relative terms, as proportion of total power production. Only the first and last years are labeled, thus drawing our attention to the long-term trend. The order of the color blocks is carefully selected so that the cleaner sources are listed at the top and the dirtier sources at the bottom. The order of the legend labels mirrors the color blocks in the area chart.
It takes only a few seconds to learn that U.S. power production has largely shifted away from coal with most of it substituted by natural gas. Other than wind, the green sources of power have not gained much ground during these years - in a relative sense.
The map offers multiple avenues for exploration.
Some readers may look at specific states. For example, California.
Currently, about half of the power production in California comes from natural gas. Notably, there is no coal at all in any of these years. In addition to wind, solar energy has also gained ground. All of these insights come without the need for any labels or gridlines!
Hydroelectric energy is the dominant source in those two states, with wind gradually taking share.
At this point, readers realize that the summary chart up top hides remarkable state-level variations.
There are other paths through the map.
Some readers may scan the whole map, seeking patterns that pop out.
One such pattern is the cluster of states that use coal. In most of these states, the proportion of coal has declined.
Yet another path exists for those interested in specific sources of power.
For example, the trend in nuclear power usage is easily followed by tracking the purple. South Carolina, Illinois and New Hampshire are three states that rely on nuclear for more than half of their power.
The chart says they renounced nuclear energy. Here is some history. This one-time event caused a disruption in the time series, unique on the entire map.
This work is wonderful. Enjoy it!
This is part 2 of a review of a recent video released by NASA. Part 1 is here.
The NASA video that starts with the spiral chart showing changes in average global temperature takes about a minute to run through 14 decades of data. For those who are patient, the chart then undergoes a dramatic transformation.
With a sleight of hand, the chart went from a set of circles to a funnel. Here is a look:
What happens is the reintroduction of a time dimension. Imagine pushing the center of the spiral down into the screen to create a third dimension.
Our question as always is - what does this chart tell readers?
The chart seems to say that the variability of temperature has increased over time (based on the width of the funnel). The red/blue color says the temperature is getting hotter especially in the last 20-40 years.
When the reader looks beneath the surface, the chart starts to lose sense.
The width of the funnel is really a diameter of the spiral chart in the given year. But, if you recall, the diameter of the spiral (polar) chart differs depending on which pair of months it spans.
In the particular rendering of this video, the width of the funnel is the diameter linking the April and October values.
Remember the polar gridlines behind the spiral:
Notice the hole in the middle. This hole has arbitrary diameter. It can be as big or as small as the designer makes it. Thus, the width of the funnel is as big or as small as the designer wants it. But the first thing that caught our attention is the width of the funnel.
The entire section between -1 and +1 is, in fact, meaningless. In the following chart, I removed the core of the funnel, adding back the -1 degree line. Doing so exposes an incompatibility between the spiral and funnel views. The middle of the polar grid is negative infinity, a black hole.
For a moment, the two sides of the funnel look like they are mirror images. That's not correct, either. Each width of the funnel represents a year, and the extreme values represent April and October values. The line between those two values does not signify anything real.
Let's take a pair of values to see what I mean.
I selected two values for October 2021 and October 1899 such that the first value appears as a line double the length of the second. The underlying values are +0.99C and -0.04C, roughly speaking, +1 and 0, so the first value is definitely not twice the size of the second.
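The arithmetic behind the distortion is simple. As a sketch, suppose the radial position of a value v is v + c, where c is the arbitrary offset that creates the hole; the visual ratio between any two values then depends entirely on c:

```python
# The funnel's widths depend on an arbitrary "hole" offset chosen by
# the designer. Sketch: a value v is drawn at radial distance v + c.

def displayed_ratio(v1, v2, c):
    """Ratio of the drawn lengths for two anomaly values."""
    return (v1 + c) / (v2 + c)

oct_2021, oct_1899 = 0.99, -0.04  # degrees C, roughly

# With an offset near 1.06, the 2021 line looks about twice as long as
# the 1899 line, even though +0.99 is nowhere near twice -0.04.
print(displayed_ratio(oct_2021, oct_1899, 1.06))  # ~2.0
# A different offset gives a completely different visual ratio.
print(displayed_ratio(oct_2021, oct_1899, 3.0))   # ~1.35
```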
The funnel chart can be interpreted, in an obtuse way, as a pair of dot plots. As shown below, if we take dot plots for Aprils and Octobers of every year, turn the chart around, and then connect the corresponding dots, we arrive at the funnel chart.
This NASA effort illustrates a central problem in visual communications: attention (what Andrew Gelman calls "grabbiness") and information integrity. On the one hand, what's the point of an accurate chart when no one is paying attention? On the other hand, what's the point of a grabby chart when anyone who pays attention gets the wrong information? It's not easy to find that happy medium.
This video hides the lede so be patient or jump ahead to 0:56 and watch till the end.
Let's first describe what we are seeing.
The dataset consists of monthly average global temperature "anomalies" from 1880 to 2021 - an "anomaly" is the deviation of the average temperature that month from a reference level (seems like this is fixed at the average temperatures by month between 1951 and 1980).
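As a sketch of the anomaly computation, subtract from each month's reading that month's average over the reference period. The temperatures below are made up, and the reference is just the few years at hand:

```python
# Sketch: anomaly = monthly value minus that month's reference-period
# mean. All temperatures here are hypothetical.
import statistics

# Hypothetical January average temperatures (deg C) for a few years.
january = {1960: 12.1, 1970: 12.4, 1980: 12.3, 2020: 13.5}

# Reference level: the mean over the (illustrative) reference years.
reference = statistics.mean([january[y] for y in (1960, 1970, 1980)])

anomalies = {y: round(t - reference, 2) for y, t in january.items()}
print(anomalies)  # 2020 comes out well above the reference level
```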
A simple visualization of the dataset is this:
We see a gradual rise in temperature from the 1980s to today. The front half of this curve is harder to interpret. The negative values suggest that the average temperatures prior to 1951 are generally lower than the temperature in the reference period. Other than 1880-1910, temperatures have generally been rising.
Now imagine chopping up the above chart into yearly increments, 12 months per year. Then wrap each year's line into a circle, and place all these lines onto the following polar grid system.
Close but not quite there. The circles in the NASA video look much smoother. Two possibilities here. First is the aspect ratio. Note that the polar grid stretches the time axis to the full circle while the vertical axis is squashed. Not enough to explain the smoothness, as seen below.
The second possibility is additional smoothing between months.
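The chop-and-wrap step can be sketched as a coordinate transformation. The hole radius here is an arbitrary choice, and the monthly anomalies are hypothetical:

```python
# Sketch: map each year's 12 monthly anomalies to (angle, radius)
# pairs on a polar grid. Smoothing between months would interpolate
# extra points along each arc.
import math

def wrap_year(values, hole=1.0):
    """Map 12 monthly anomalies to (angle, radius) polar coordinates."""
    points = []
    for month, v in enumerate(values):
        theta = 2 * math.pi * month / 12  # January at angle 0
        radius = hole + v                 # arbitrary hole radius
        points.append((theta, radius))
    return points

# One hypothetical year of monthly anomalies (deg C).
year = [0.8, 0.9, 1.0, 0.7, 0.6, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 0.9]
ring = wrap_year(year)
print(ring[0])  # January at angle 0
print(ring[6])  # July sits opposite January, at angle pi
```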
The end result is certainly pretty:
Is it a good piece of scientific communications?
What is the chart saying?
I see red rings on the outside, white rings in the middle, and blue rings near the center. Red presumably means hotter, blue cooler.
The gridlines are painted over. The 0 degree (green) line is printed over again and again.
The biggest red circles are just beyond the 1 degree line with the excess happening in the January-March months. In making that statement, I'm inferring meaning to excess above 1 degree. This inference is purely based on where the 1-degree line is placed.
I also see in the months of December and January, there may have been "cooling", as the blue circles edge toward the -1 degree gridline. Drawing this inference actually refutes my previous claim. I had said that the bulge beyond the +1 degree line is informative because the designer placed the +1 degree line there. If I applied the same logic, then the location of the -1 degree line implies that only values more negative than -1 matter, which excludes the blue bulge!
Now what years are represented by these circles? Test your intuition. Are you tempted to think that the red lines are the most recent years, and the blue lines are the oldest years? If you think so, like I do, then we fall into a trap. We have now imputed two meanings to color -- temperature and recency, when the color coding can only hold one.
The only way to find out for sure is to rewind the tape and watch from the start. The year dimension is pushed to the background in this spiral chart. Instead, the month dimension takes precedence. Recall that at the start, the circles are white. The bluer circles appear in the middle of the date range.
This dimensional flip flop is a key difference between the spiral chart and the line chart (shown again for comparison).
In the line chart, the year dimension is primary while the month dimension is pushed to the background.
Now, we have to decide what the message of the chart should be. For me, the key message is that on a time scale of decades, the world has experienced significant warming, to the tune of about 1.5 degrees Celsius (about 2.7 degrees Fahrenheit). The warming has been more pronounced in the last 40 years. The warming is observed in all twelve months of the year.
Because the spiral chart hides the year dimension, it does not convey the above messages.
The spiral chart shares the same weakness as the energy demand chart discussed recently (link). Our eyes tend to focus on the outer and inner envelopes of these circles, which by definition are extreme values. Those values do not necessarily represent the bulk of the data. The spiral chart in fact tells us that there is not much to learn from grouping the data by month.
The appeal of a spiral chart for periodic data is similar to a map for spatial data. I don't recommend using maps unless the spatial dimension is where the signal lies. Similarly, the spiral chart is appropriate if there are important deviations from a seasonal pattern.
Daniel Z. tweeted about my post from last week. In particular, he took a deeper look at the chart of energy demand that put all hourly data onto the same plot, originally published at the StackOverflow blog:
I noted that this is not a great chart particularly since what catches our eyes are not the key features of the underlying data. Daniel made a clearly better chart:
This is a dot plot, rather than a line chart. The dots are painted in light gray, pushed to the background, because readers should be looking at the orange line. (I'm not sure what is going on with the horizontal scale as I could not get the peaks to line up on the two charts.)
What is this orange line? It's supposed to prove the point that the apparent dark band seen in the line chart does not represent the most frequently occurring values, as one might presume.
Looking closer, we see that the gray dots do not show all the hourly data but binned values.
We see vertical columns of dots, each representing a bin of values. The size of the dots represents the frequency of values of each bin. The orange line connects the bins with the highest number of values.
Daniel commented that
"The visual aggregation doesn't in fact map to the most frequently occurring values. That is because the ink of almost vertical lines fills in all the space between start and end."
Xan Gregg investigated further, and made a gif to show this effect better. Here is a screenshot of it (see this tweet):
The top chart is a true dot plot so that the darker areas are denser as the dots overlap. The bottom chart is the line chart that has the see-saw pattern. As Xan noted, the values shown are strangely very well behaved (aggregated? modeled?) - with each day, it appears that the values sweep up and down consistently. This means the values are somewhat evenly spaced on the underlying trendline, so I think this dataset is not the best one to illustrate Daniel's excellent point.
It's usually not a good idea to connect lots of dots with a single line.
[P.S. 3/21/2022: Daniel clarified what the orange line shows: "In the posted chart, the orange line encodes the daily demand average (the mean of the daily distribution), rounded, for displaying purposes, to the closed bin. Bin size = 1000. Orange could have encode the daily median as well."]
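Based on Daniel's clarification, the orange line can be sketched as below. The hourly demand figures are invented for illustration:

```python
# Sketch of the orange line: the daily mean demand, rounded to the
# nearest bin (bin size 1000) for display.

BIN_SIZE = 1000

def orange_value(hourly_demand):
    """Daily mean, snapped to the nearest bin boundary."""
    mean = sum(hourly_demand) / len(hourly_demand)
    return BIN_SIZE * round(mean / BIN_SIZE)

# One hypothetical day of 24 hourly demand readings (MWh).
day = [21500, 21000, 20800, 20500, 20700, 21200, 22500, 24000,
       25500, 26000, 26500, 27000, 27200, 27500, 27800, 28000,
       28500, 29000, 29500, 28800, 27500, 25500, 23500, 22000]
print(orange_value(day))  # the mean lands in the 25000 bin
```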
Twitter users were incensed by this chart:
It's being slammed as one of the most outrageous charts ever.
An image search reveals this chart form has international appeal.
In Arabic, but the image source is a Spanish company:
In English, from an Indian source:
Some people are calling this a pie chart.
But it isn't a pie chart since the slices clearly add up to more than one full circle.
It may be a graph template from an infographics website. You see people are applying data labels without changing the sizes or orientation or even colors of the slices. So the chart form is used as a container for data, rather than an encoder.
The Twitter user who called this "outrageous" appears to want to protect the designer, as the words have been deliberately snipped from the chart.
Nevertheless, Molly White coughed up the source in a subsequent tweet.
A bit strange, if you stop and think a little. Why would Molly shame the designer 20 hours later after she decided not to?
According to Molly, the chart appeared on the website of an NFT company. [P.S. See note below]
Here's the top of the page that Molly White linked to:
Notice the author of this page. That's "Molly White", who is the owner of this NFT company! [See note below: she's the owner of a satire website who was calling out the owner of this company.]
Who's more outrageous?
Someone creating the most outrageous chart in order to get clout from outraged Twitter users and drive traffic to her new NFT venture? Or someone creating the template for the outrageous chart form, spawning an international collection?
[P.S. 3/17/2022 The answer is provided by other Twitter users, and the commenters. The people spreading this chart form are more outrageous. I now realize that Molly runs a satirical site. When she linked to the "source", she linked to her own website, which I interpreted as the source of the image. The page did contain that image, which added to the confusion. I must also add that her work looks valuable, as it assesses some of the wild claims in Web3 land.]
[P.S. 3/17/2022 Molly also pointed out that her second tweet about the source came around 45 minutes after the first tweet. Twitter showed "20 hours" because it was 20 hours from the time I read the tweet.]
A long-time reader sent me the following chart from a Nature article, pointing out that it is rather worthless.
The simple bar chart plots the number of downloads, organized by country, from the website called Sci-Hub, which I've just learned is where one can download scientific articles for free - working around the exorbitant paywalls of scientific journals.
The bar chart is a good example of a Type D chart (Trifecta Checkup). There is nothing wrong with the purpose or visual design of the chart. Nevertheless, the chart paints a misleading picture. The Nature article addresses several shortcomings of the data.
The first - and perhaps most significant - problem is that many Sci-Hub users likely access the site via VPN servers that hide their true countries of origin. If the proportion of VPN users is high, the entire dataset is called into doubt. The data would contain both false positives (in countries with VPN servers) and false negatives (in countries with high numbers of VPN users).
The second problem is seasonality. The dataset covered only one month. Many users are presumably academics, and in the southern hemisphere, schools are on summer vacation in January and February. Thus, the data from those regions may convey the wrong picture.
Another problem, according to the Nature article, is that Sci-Hub has many competitors. "The figures include only downloads from original Sci-Hub websites, not any replica or ‘mirror’ site, which can have high traffic in places where the original domain is banned."
This mirror-site problem may be worse than it appears. Yes, downloads from Sci-Hub underestimate the entire market for "free" scientific articles. But these mirror sites also inflate Sci-Hub statistics. Presumably, these mirror sites obtain their inventory from Sci-Hub by setting up accounts, thus contributing lots of downloads.
Even if VPN and seasonality problems are resolved, the total number of downloads should be adjusted for population. The most appropriate adjustment factor is the population of scientists, but that statistic may be difficult to obtain. A useful proxy might be the number of STEM degrees by country - obtained from a UNESCO survey (link).
A metric of the type "number of Sci-Hub downloads per STEM degree" sounds odd and useless. I'd argue it's better than the unadjusted total number of Sci-Hub downloads. Just don't focus on the absolute values but the relative comparisons between countries. Even better, we can convert the absolute values into an index to focus attention on comparisons.
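The indexing idea can be sketched with hypothetical country-level numbers:

```python
# Sketch: downloads per STEM degree, converted to an index against a
# chosen reference country. All numbers below are hypothetical.

downloads = {"A": 500_000, "B": 120_000, "C": 60_000}
stem_degrees = {"A": 1_000_000, "B": 100_000, "C": 200_000}

per_degree = {c: downloads[c] / stem_degrees[c] for c in downloads}

# Index the rates so the reference country ("A") reads as 100.
reference = per_degree["A"]
index = {c: round(100 * r / reference) for c, r in per_degree.items()}
print(per_degree)
print(index)  # B downloads far more per degree, despite a smaller total
```

Note how the ranking flips: country A dominates on raw totals, but country B downloads far more per STEM degree.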
This post is the second post in response to a blog post at StackOverflow (link) in which the author discusses the "harm" of "aggregating away the signal" in your dataset. The first post appeared on my book blog earlier this week (link).
One stop in their exploratory data analysis journey was the following chart:
This chart plots all the raw data, all 8,760 values of electricity consumption in California in 2020. Most analysts know this isn't a nice chart, and it's an abuse of ink. This chart is used as a contrast to the 4-week moving average, which was hoisted up as an example of "over-aggregation".
Why is the above chart bad (aside from the waste of ink)? Think about how you consume the information. For me, I notice these features in the following order:
- I see the upper "envelope" of the data, i.e. the top values at each hour of each day throughout the year. This gives me the seasonal pattern with a peak in the summer months.
- I see the lower "envelope" of the data
- I see the "height" of the data, which is, roughly speaking, the range of values within a day
- If I squint hard enough, I see a darker band within the band, which roughly maps to the most frequently occurring values (this feature becomes more prominent if we select a lighter shade of gray)
The chart may not be as bad as it looks. The "moving average" is sort of visible. The variability of consumption is visible. The primary problem is it draws attention to the outliers, rather than the more common values.
The envelope of any dataset is composed of extreme values, by definition. For most analysis objectives, extreme values are "noise". In the chart above, it's hard to tell how common the maximum values are relative to other possible values but it's the upper envelope that captures my attention - simply because it's the easiest trend to make out.
The same problem actually surfaces in the "improved" chart:
As explained in the preceding post, this chart rearranges the data. Instead of a single line, there are now 52 overlapping lines, one for each week of the year. Each line is much less dense, so we can make out the hour-of-day and day-of-week patterns.
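The rearrangement itself is a simple reshaping. Here is a sketch using a synthetic consumption series in place of the real California data:

```python
# Sketch: cut 8,760 hourly values into weekly segments of 168 hours
# each, so the lines can be overlaid. Synthetic data stand in for the
# real consumption series.
import math

hours = range(8760)
# Synthetic hourly consumption with a daily cycle (purely illustrative).
consumption = [30000 + 5000 * math.sin(2 * math.pi * h / 24) for h in hours]

# Split into weeks of 168 hours; the final partial week (24 hours,
# since 8760 = 52 * 168 + 24) is dropped.
weeks = [consumption[i:i + 168] for i in range(0, 52 * 168, 168)]
print(len(weeks), len(weeks[0]))  # 52 weeks of 168 hours each
```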
Notice that the author draws attention to the upper envelope of this chart. They notice the line(s) near the top are from the summer, and this further guides their next analysis.
The reason for focusing on the envelope is the same as in the other chart. Where the lines are dense, it's not easy to make out the pattern.
Even the envelope is not as clear as it seems! There is no reason why the highlighted week (August 16 to 23) should have the highest consumption value each hour of each day of the week. It's possible that the line dips into the middle of the range at various points along the line. In the following chart, I highlight two time points in which lines may or may not have crossed:
In an interactive chart, each line can be highlighted to resolve the confusion.
Note that the lower envelope is much harder to decipher, given the density of lines.
The author then pursues a hypothesis that there are lines (weeks) with one intra-day peak and there are those with two peaks.
I'd propose that those are not discrete states but a continuum. The base pattern has two peaks: a higher peak in the evening, and a lower peak in the morning. Now, if you imagine pushing up the evening peak while holding the morning peak at its height, you'd gradually "erase" the lower peak; it hasn't disappeared, it has merely receded into the background.
Possibly the underlying driver is the total demand for energy. The higher the demand, the more likely it's concentrated in the evening, which causes the lower peak to recede. The lower the demand, the more likely we see both peaks.
In either case, the prior chart drives the direction of the next analysis.