Deliberately obstructing chart elements as a plot point

Bbc_globalwarming_ridgeplot sm

These "ridge plots" have become quite popular in recent times. The following example, from this BBC report (link), shows the change in global air temperatures over time.

***

This chart is in reality a panel of probability density plots, one for each year of the dataset. The years are arranged with the oldest at the top and the most recent at the bottom. You take those plots and squeeze out every ounce of vertical space, so that each chart overlaps heavily with the ones above it.

The plot at the bottom is the only one that can be seen unobstructed.

Overplotting chart elements, deliberately obstructing them, doesn't sound useful. Is there something gained for what's lost?

***

The appeal of the ridge plot is the metaphor of ridges (or crests, if you see ocean waves instead). What do these features signify?

The legend at the bottom of the chart gives a hint.

The main metric used to describe global warming is the amount of excess temperature, defined as the temperature relative to a historical reference, set as the average temperature during the pre-industrial age. In recent years, the average global temperature has been about 1.5 degrees Celsius above that reference level.

One might think that the higher the peak in a given plot, the higher the excess temperature. Not so. The heights of those peaks do not indicate temperatures.

What's the scale of the vertical axis? The labels suggest years, but that's a distractor also. If we consider the panel of non-overlapping probability density charts, the vertical axis should show probability density. In such a panel, the year labels should go to the titles of individual plots. On the ridge plot, the density axes are sacrificed, while the year labels are shifted to the vertical axis.

Admittedly, probability density is not an intuitive concept, so not much is lost by its omission.

The legend appears to suggest that the vertical scale is expressed in number of days, so that in any given year, the peak of the curve occurs at the most likely excess temperature. But the amount of excess is read from the horizontal axis, not the vertical axis: it is encoded as a horizontal displacement away from the historical average. In other words, the height of the peak still doesn't correlate with the magnitude of the excess temperature.

Each probability density curve in the following set (drawn with made-up data) has the same average excess temperature of 1.5 degrees. Going from top to bottom, the variability of the excess temperatures increases. The height of the peak decreases accordingly because in a density plot, the total area under the curve is fixed. Thus, the higher the peak, the lower the daily variability of the excess temperature.

Kfung_pdf_variances
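Curves like these are easy to reproduce. Below is a minimal sketch (plotting omitted); the grid and the spread values are assumptions, chosen only to show that the peak must drop as the spread grows while the mean stays put.

```python
import numpy as np

# Normal densities (made-up data) sharing a mean excess temperature of
# 1.5 degrees but with increasing spread. Each density integrates to 1,
# so a wider curve is forced to have a lower peak.
x = np.linspace(-1.0, 4.0, 2001)
peaks = []
for sigma in (0.2, 0.4, 0.8):
    density = np.exp(-(x - 1.5) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
    peaks.append(density.max())
    print(f"sigma={sigma}: peak height {density.max():.2f}")

# Peaks shrink as the spread grows, even though the mean is fixed.
assert peaks[0] > peaks[1] > peaks[2]
```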

A problem with this ridge plot is that it draws our attention to the heights of the peaks, which provide information about a secondary metric.

If we want to find the story that the amount of excess temperature has been increasing over time, we have to trace a curve through the ridges, which, strangely enough, is a line that moves from top to bottom, at first almost vertically, then drifting sideways to the right. In a more conventional chart, the line that shows growth over time moves from bottom left to top right.

***

The BBC article (link) features several charts. The first one shows how the average excess temperature trends year to year; it is a simple column chart. In supplementing the column chart with the ridge plot, the designer, I assume, wants to tell readers that the average annual excess temperature masks daily variability. Therefore, each annual average has been disaggregated into 366 daily averages.

In the column chart, the annual average is compared to a 50-year historical average. In the ridge plot, the daily average is compared to ... the same 50-year historical average. That's what the reference line labeled "pre-industrial average" is saying to me.

It makes more sense to compare the 366 daily averages to 366 daily averages from those 50 years.

But now I've seemingly ruined the dataviz, because each probability density plot would need 366 different reference points. Not really, though; we just have to think a little more abstractly. After adjustment, these 366 different temperatures are all mapped to the number zero, so they all coincide at the same location on the horizontal axis.
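To make the adjustment concrete, here is one way it could be computed, sketched with made-up data. The dates, the 1951-2000 baseline window, and the temperature model are all assumptions for illustration, not the BBC's actual method.

```python
import numpy as np
import pandas as pd

# Made-up daily temperatures: a seasonal cycle plus noise.
days = pd.date_range("1950-01-01", "2023-12-31", freq="D")
rng = np.random.default_rng(0)
temps = pd.Series(
    15 + 10 * np.sin(2 * np.pi * days.dayofyear / 365.25) + rng.normal(0, 2, len(days)),
    index=days,
)

# Baseline: for each day of year, the average over a reference window.
baseline = temps["1951":"2000"].groupby(lambda d: d.dayofyear).mean()

# Excess temperature: each day is compared to the baseline for that same
# day of year, so all 366 reference values are mapped to zero.
excess = temps - temps.index.dayofyear.map(baseline).to_numpy()
```

By construction, the mean excess within the baseline window is zero for every day of the year, which is exactly the "366 references collapsing to one point" idea.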

(It's possible that they actually used 366 daily averages as references to construct the ridge plot. I'm guessing not but feel free to comment if you know how these values are computed.)


Organizing time-stamped data

In a previous post, I looked at the Economist chart about Elon Musk's tweeting compulsion. It's a chart that contains lots of data (every tweet is included), but one can't tell the number or frequency of tweets.

In today's post, I'll walk through a couple of sketches of other charts. I was able to find a dataset on GitHub that doesn't cover the same time period, but it's good enough for illustration purposes.

As discussed previously, I took cues from the Economist chart, in particular that the hours of the day should be divided up into four equal-width periods. One thing Musk is known for is tweeting at any hour of the day.

Junkcharts_redo_musktweets_columnsbyhourgroup

This is a small-multiples arrangement of column charts. Each column chart represents the tweets that were posted during a six-hour window, across all days in the dataset. Each column covers half a year of tweets. We note that, as he started tweeting more, the afternoon hours picked up more tweets. In the first half of 2022, he sent roughly 750 tweets between 7 pm and midnight.
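The binning behind such a panel can be sketched as follows. The timestamps here are hypothetical stand-ins for the real dataset; only the four six-hour windows and the half-year columns come from the chart design.

```python
import pandas as pd

# Hypothetical tweet timestamps (local time), stand-ins for the real data.
tweets = pd.DataFrame({"ts": pd.to_datetime([
    "2022-03-01 23:10", "2022-03-02 08:45", "2022-07-15 20:30",
    "2022-07-16 02:05", "2022-11-01 13:20",
])})

# Four equal six-hour windows of the day.
tweets["window"] = pd.cut(tweets["ts"].dt.hour, bins=[0, 6, 12, 18, 24],
                          right=False, labels=["0-6", "6-12", "12-18", "18-24"])

# Half-year columns: H1 = Jan-Jun, H2 = Jul-Dec.
tweets["half"] = (tweets["ts"].dt.year.astype(str)
                  + tweets["ts"].dt.month.map(lambda m: " H1" if m <= 6 else " H2"))

# One count per (half-year, window) cell, i.e. one column per cell.
counts = tweets.groupby(["half", "window"], observed=True).size()
print(counts)
```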

***

In this next sketch, I used a small-multiples panel of line charts. Each line chart represents tweets posted during a six-hour window, as before. Instead of plotting raw counts, here I "smoothed" the daily tweet count, so that each number is an average daily tweet count, with the average computed over a rolling time window.

Junkcharts_redo_musktweets_sidebysidelines


***

Finally, let's cover a few details only people who make charts would care about. The time-of-day variable only makes sense if all times are expressed in "local time", i.e. the time at the location from which Musk was tweeting. This knowledge is not necessary to make the chart, but it is essential to make the chart interpretable. A statement like "Musk tweets a lot around midnight" assumes that it was midnight where he was when he sent each tweet.

Since we don't have his travel schedule, we're bound to get some of these wrong. In my charts, I assumed he was in the Pacific time zone and never tweeted from anywhere outside it.

(Food for thought: the server that posts tweets certainly had the record of the time and time zone for each tweet. Typically, databases store these time stamps standardized to one time zone - call it Greenwich Mean Time. If you have all time stamps expressed in GMT, is it now possible to make a statement about midnight tweeting? Does standardizing to one time zone solve this problem?)
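A minimal sketch of the point, using Python's standard zoneinfo module; the timestamp is hypothetical:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# A hypothetical tweet timestamp as a database might store it: in UTC.
stored = datetime(2022, 7, 16, 7, 5, tzinfo=timezone.utc)

# Standardizing storage to one zone removes ambiguity, but to answer
# "was it midnight where he was?" we still must supply a location.
local = stored.astimezone(ZoneInfo("America/Los_Angeles"))
print(local.strftime("%H:%M"))  # 00:05 Pacific (daylight saving time)
```

In other words, GMT storage is necessary bookkeeping, but it does not by itself recover the local time of each tweet.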

In addition, I suspect that there may be problems with the function used to compute those rolling sums and averages, so take the actual numbers on those sketches with a grain of salt. Specifically (though it's hard to tell on any of these charts), Musk did not tweet every single day, so there are lots of holes in the time series.
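This is a common pitfall: a rolling function applied to the raw series averages only over days that appear in the data, silently skipping the holes. A sketch, with made-up counts:

```python
import pandas as pd

# Hypothetical daily tweet counts; Jan 3 and 4 have no tweets at all.
counts = pd.Series([3, 5, 2],
                   index=pd.to_datetime(["2022-01-01", "2022-01-02", "2022-01-05"]))

# Rolling over the raw series skips the empty days entirely:
naive = counts.rolling(3).mean()          # last value: (3 + 5 + 2) / 3

# Reindexing to the full daily range, with zeros for missing days,
# yields a true average per calendar day:
full = counts.reindex(pd.date_range(counts.index.min(),
                                    counts.index.max(), freq="D"), fill_value=0)
smoothed = full.rolling(3).mean()         # last value: (0 + 0 + 2) / 3
```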


Don't show everything

There are many examples where one should not show everything when visualizing data.

A long-time reader sent me this chart from the Economist, published around Thanksgiving last year:

Economist_musk

It's a scatter plot with each dot representing a single tweet by Elon Musk against a grid of years (on the horizontal axis) and time of day (on the vertical axis).

The easy messages to pick up include:

  • the increase in frequency of tweets over the years
  • especially, the jump in density after Musk bought Twitter in late 2022 (there is also a less obvious step up around 2018)
  • the almost continuous tweeting throughout 24 hours.

By contrast, it's hard if not impossible to learn the following:

  • the number of tweets he made, on average or in total, per year, per day, or per hour
  • the density of tweets for any single period of time (i.e., a reference for everything else)
  • the growth rate over time, especially the magnitude of the jumps

The paradox: a chart that is data-dense but information-poor.

***

The designer added gridlines and axis labels to help structure our reading. Specifically, we're cued to separate the 24 hours into four six-hour chunks. We're also expected to divide the years into two groups (pre- and post-acquisition), and secondarily, into one-year intervals.

If we accept this analytical frame, then we can divide time into these boxes, compute summary statistics within each box, and present those values. I'm working on some concepts and will show them next time.



Ranks, labels, metrics, data and alignment

A long-time reader, Chris V. (since 2012!), sent me this WSJ article on airline ratings (link).

The key chart form is this:

Wsj_airlines_overallranks

It's a rhombus-shaped chart: really a bar chart rotated counter-clockwise by 45 degrees. Thus, all the text sits at a 45-degree angle. An airplane icon is imprinted on each bar.

There is also this cute interpretation of the white (non-data-ink) space as a symmetric reflection of the bars (with one missing element). On second thought, the decision to tilt the chart was probably made in service of this quasi-symmetry. If the data bars were horizontal, then the white space would have been sliced up into columns, which just doesn't hold the same appeal.

If we're being Tuftian, none of these flourishes serves the data. But do they do much harm? This case is harder to decide. The data consist of just a ranking of airlines, and the message still comes across. The head must tilt, but the chart beguiles.

***

As the article progresses, the same chart form shows up again and again, with added layers of detail. I appreciate how the author has constructed the story. Subtly, the first chart teaches the readers how the graphic encodes the data, and fills in contextual information such as there being nine airlines in the ranking table.

In the second section, the same chart form is used, but its usage has evolved. There is now a pair of these rhombuses. Each rhombus shows the rankings of a single airline, while each bar inside the rhombus shows that airline's ranking on a specific metric. Contrast this with the first chart, where each bar is an airline, and the ranking is the overall ranking across all metrics.

Wsj_airlines_deltasouthwestranks

You may notice that reading these relies on a piece of knowledge picked up from the first chart: on each of these metrics, each airline has been ranked against eight others. Without that knowledge, we wouldn't know that being 4th is just better than the median. So, in a sense, this second section depends on the first chart.

There is a nice use of layering, which links up both charts. A dividing line is drawn between the first place (blue) and not being first (gray). This layering allows us to quickly see that Delta, the overall winner, came first in two of the seven metrics while Southwest, the second-place airline, came first in three of the seven (leaving two metrics for which neither of these airlines came first).

I'd be the first to admit that I have motion sickness. I wonder how many of you are starting to feel dizzy while you read the labels, heads tilted. Maybe you're trying, like me, to figure out the asterisks and daggers.

***

Ironically, but not surprisingly, the asterisks reveal a non-trivial matter. Asterisks direct readers to footnotes, which should be supplementary text that adds color to the main text without altering its core meaning. Nowadays, asterisks may hide information that changes how one interprets the main text, such as complications that muddy the main argument.

Here, the asterisks address a shortcoming of representing rankings with bars. By convention, a lower rank number indicates better performance, and most ranking schemes start counting from 1. If ranks were directly encoded as bar lengths, the best airline would get the shortest bar. But that's not what we see on the chart. The bars actually encode the reverse ranking, so the longest bar represents the best (1st) rank.

That's level one of this complication. Level two is where these asterisks are at.

Notice that the second metric is called "Canceled flights". The asterisk reads "fewest". The data collected are the number of canceled flights, but the performance metric behind the ranking is really "fewest canceled flights".

If we see a long bar labeled "1st" under "Canceled flights", it causes a moment of pause. Is the airline ranked first because it had the most canceled flights? That would imply being first is worst in this category. It couldn't be that. So perhaps "1st" means having the fewest canceled flights, but then it's just weird to show that with the longest bar. The designer correctly anticipates this moment of pause, and that's why the chart has those asterisks.

Unfortunately, six out of the seven metrics require asterisks. In almost every case, we have to think in reverse. "Extreme delays" really means "fewest extreme delays"; "Mishandled baggage" really means "least mishandled baggage"; etc. I'd spend some time renaming the metrics to fix this without footnotes. For example, saying "Baggage handling" instead of "Mishandled baggage" is sufficient.

***

The third section contains the greatest level of detail. Now, each chart prints the rankings of all nine airlines on a particular metric.

Wsj_airlinerankings_bymetric


By now, the cuteness has faded while the neck muscles have paid. Those nice annotations, written horizontally, offer but a twee respite.