What do I think about spirals?

A Twitter user asked how I feel about this latest effort from NASA to illustrate global warming. To see the entire video, go to their website.

Nasa_climatespiral_fullperiod

This video buries the lede, so be patient or jump ahead to 0:56 and watch till the end.

Let's first describe what we are seeing.

The dataset consists of monthly average global temperature "anomalies" from 1880 to 2021 - an "anomaly" is the deviation of that month's average temperature from a reference level (apparently the monthly averages over the 1951-1980 period).

A simple visualization of the dataset is this:

Junkcharts_redo_nasasprials_longline

We see a gradual rise in temperature from the 1980s to today. The front half of this curve is harder to interpret. The negative values indicate that average temperatures prior to 1951 were generally lower than those in the reference period. Other than 1880-1910, temperatures have generally been rising.

Now imagine chopping up the above chart into yearly increments, 12 months per year. Then wrap each year's line into a circle, and place all these lines onto the following polar grid system.
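
Here's a minimal sketch of that wrapping step in Python, assuming the anomalies sit in a data frame with year, month, and anomaly columns (the file name and column names are my own stand-ins, not NASA's):

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# hypothetical file of the monthly anomalies: columns year, month (1-12), anomaly (deg C)
df = pd.read_csv("anomalies.csv")

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.set_theta_zero_location("N")   # January at the top
ax.set_theta_direction(-1)        # months run clockwise

for year, grp in df.groupby("year"):
    theta = (grp["month"] - 1) / 12 * 2 * np.pi   # map the 12 months onto the full circle
    r = grp["anomaly"] + 2                        # shift so the radius stays positive
    ax.plot(theta, r, linewidth=0.5)

ax.set_yticks([1, 2, 3])                          # reference rings at -1, 0, +1 degrees
ax.set_yticklabels(["-1°C", "0°C", "+1°C"])
plt.show()
```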

Junkcharts_redo_nasaspiral_linesandcircles

Close but not quite there. The circles in the NASA video look much smoother. There are two possibilities. The first is the aspect ratio: the polar grid stretches the time axis to the full circle while the vertical axis is squashed. That alone is not enough to explain the smoothness, as seen below.

Junkcharts_redo_nasaspirals_unsmoothedwide

The second possibility is additional smoothing between months.

Junkcharts_redo_nasaspirals_smoothedlines
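
One way to get that effect is to upsample each year's twelve points before plotting, for instance with simple linear interpolation (a spline would be smoother still). A sketch, using made-up values for one year:

```python
import numpy as np

# one year's twelve monthly anomalies (made-up values purely for illustration)
anomalies = np.array([0.3, 0.4, 0.5, 0.4, 0.6, 0.7, 0.6, 0.8, 0.7, 0.6, 0.5, 0.6])

theta = np.arange(12) / 12 * 2 * np.pi          # one angle per month
fine_theta = np.linspace(0, theta[-1], 220)     # many points between months
fine_r = np.interp(fine_theta, theta, anomalies + 2)   # linear interpolation between months
# plotting fine_theta vs fine_r (as in the previous sketch) yields visibly rounder circles
```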

The end result is certainly pretty:

Nasa_climatespiral_fullperiod

***

Is it a good piece of scientific communications?

What is the chart saying?

I see red rings on the outside, white rings in the middle, and blue rings near the center. Red presumably means hotter, blue cooler.

The gridlines are painted over. The 0 degree (green) line is printed over again and again.

The biggest red circles are just beyond the 1-degree line, with the excess happening in the January-March months. In making that statement, I'm imputing meaning to the excess above 1 degree. This inference is based purely on where the 1-degree line is placed.

I also see in the months of December and January, there may have been "cooling", as the blue circles edge toward the -1 degree gridline. Drawing this inference actually refutes my previous claim. I had said that the bulge beyond the +1 degree line is informative because the designer placed the +1 degree line there. If I applied the same logic, then the location of the -1 degree line implies that only values more negative than -1 matter, which excludes the blue bulge!

Now what years are represented by these circles? Test your intuition. Are you tempted to think that the red lines are the most recent years, and the blue lines are the oldest years? If you think so, as I do, then we fall into a trap. We have now imputed two meanings to color -- temperature and recency -- when the color coding can only hold one.

The only way to find out for sure is to rewind the tape and watch from the start. The year dimension is pushed to the background in this spiral chart. Instead, the month dimension takes precedence. Recall that at the start, the circles are white. The bluer circles appear in the middle of the date range.

This dimensional flip-flop is a key difference between the spiral chart and the line chart (shown again for comparison).

Junkcharts_redo_nasasprials_longline

In the line chart, the year dimension is primary while the month dimension is pushed to the background.

Now, we have to decide what the message of the chart should be. For me, the key message is that on a time scale of decades, the world has experienced significant warming, to the tune of about 1.5 degrees Celsius (about 2.7 degrees Fahrenheit). The warming has been more pronounced in the last 40 years. The warming is observed in all twelve months of the year.

Because the spiral chart hides the year dimension, it does not convey the above messages.

The spiral chart shares the same weakness as the energy demand chart discussed recently (link). Our eyes tend to focus on the outer and inner envelopes of these circles, which by definition are extreme values. Those values do not necessarily represent the bulk of the data. The spiral chart in fact tells us that there is not much to learn from grouping the data by month. 

The appeal of a spiral chart for periodic data is similar to a map for spatial data. I don't recommend using maps unless the spatial dimension is where the signal lies. Similarly, the spiral chart is appropriate if there are important deviations from a seasonal pattern.

 

 


Dots, lines, and 2D histograms

Daniel Z. tweeted about my post from last week. In particular, he took a deeper look at the chart of energy demand that put all hourly data onto the same plot, originally published at the StackOverflow blog:

Stackoverflow_variabilitychart

I noted that this is not a great chart particularly since what catches our eyes are not the key features of the underlying data. Daniel made a clearly better chart:

Danielzvinca_densitychart

This is a dot plot, rather than a line chart. The dots are painted in light gray, pushed to the background, because readers should be looking at the orange line. (I'm not sure what is going on with the horizontal scale as I could not get the peaks to line up on the two charts.)

What is this orange line? It's supposed to prove the point that the apparent dark band seen in the line chart does not represent the most frequently occurring values, as one might presume.

Looking closer, we see that the gray dots do not show all the hourly data but binned values.

Danielzvinca_densitychart_inset

We see vertical columns of dots, each representing a bin of values. The size of the dots represents the frequency of values of each bin. The orange line connects the bins with the highest number of values.
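
A hedged sketch of that construction in Python: floor each hourly value to a bin, size the dots by the bin counts, and trace a line through each day's most populated bin. The file and column names are my own assumptions, and whether the actual orange line marks the modal bin or a rounded mean is clarified in the P.S. below.

```python
import pandas as pd
import matplotlib.pyplot as plt

# assumed columns: date (the day), demand (one row per hour)
df = pd.read_csv("hourly_demand.csv", parse_dates=["date"])

bin_size = 1000
df["bin"] = (df["demand"] // bin_size) * bin_size    # floor each hourly value to its bin

counts = df.groupby(["date", "bin"]).size().reset_index(name="n")

fig, ax = plt.subplots()
ax.scatter(counts["date"], counts["bin"], s=counts["n"] * 5,
           color="lightgray")                         # dot size encodes bin frequency

modal = counts.loc[counts.groupby("date")["n"].idxmax()]   # densest bin of each day
ax.plot(modal["date"], modal["bin"], color="orange")
plt.show()
```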

Daniel commented that

"The visual aggregation doesn't in fact map to the most frequently occurring values. That is because the ink of almost vertical lines fills in all the space between start and end."

Xan Gregg investigated further, and made a gif to show this effect better. Here is a screenshot of it (see this tweet):

Xangregg_dots_vs_line

The top chart is a true dot plot, so the darker areas are denser as the dots overlap. The bottom chart is the line chart that has the see-saw pattern. As Xan noted, the values shown are strangely well behaved (aggregated? modeled?) - within each day, it appears that the values sweep up and down consistently. This means the values are somewhat evenly spaced on the underlying trendline, so I think this dataset is not the best one to illustrate Daniel's excellent point.

It's usually not a good idea to connect lots of dots with a single line.

 

[P.S. 3/21/2022: Daniel clarified what the orange line shows: "In the posted chart, the orange line encodes the daily demand average (the mean of the daily distribution), rounded, for displaying purposes, to the closed bin. Bin size = 1000. Orange could have encode the daily median as well."]

 


The envelope of one's data

This post is the second in response to a blog post at StackOverflow (link) in which the author discusses the "harm" of "aggregating away the signal" in your dataset. The first post appeared on my book blog earlier this week (link).

One stop in their exploratory data analysis journey was the following chart:

Stackoverflow_variabilitychart

This chart plots all the raw data, all 8,760 values of electricity consumption in California in 2020. Most analysts know this isn't a nice chart, and it's an abuse of ink. This chart is used as a contrast to the 4-week moving average, which was hoisted up as an example of "over-aggregation".

Why is the above chart bad (aside from the waste of ink)? Think about how you consume the information. For me, I notice these features in the following order:

  1. I see the upper "envelope" of the data, i.e. the top values at each hour of each day throughout the year. This gives me the seasonal pattern with a peak in the summer months.
  2. I see the lower "envelope" of the data
  3. I see the "height" of the data, which is, roughly speaking, the range of values within a day
  4. If I squint hard enough, I see a darker band within the band, which roughly maps to the most frequently occurring values (this feature becomes more prominent if we select a lighter shade of gray)

The chart may not be as bad as it looks. The "moving average" is sort of visible. The variability of consumption is visible. The primary problem is it draws attention to the outliers, rather than the more common values.

The envelope of any dataset is composed of extreme values, by definition. For most analysis objectives, extreme values are "noise". In the chart above, it's hard to tell how common the maximum values are relative to other possible values but it's the upper envelope that captures my attention - simply because it's the easiest trend to make out.
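
To see how the envelope differs from a measure of the bulk of the data, here's a quick sketch (file and column names are assumptions) that computes the daily max/min envelopes alongside a 4-week moving average:

```python
import pandas as pd

# hourly electricity demand with a datetime column "timestamp" and a value column "demand"
df = pd.read_csv("ca_demand_2020.csv", parse_dates=["timestamp"]).set_index("timestamp")

# upper/lower envelope (daily extremes) versus the daily mean
daily = df["demand"].resample("D").agg(["max", "min", "mean"])

# 4-week moving average over the hourly values (24 hours x 7 days x 4 weeks)
moving_avg = df["demand"].rolling(24 * 7 * 4).mean()

# The envelopes track the extremes; the moving average tracks the bulk of the data.
```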

***

The same problem actually surfaces in the "improved" chart:

Stackoverflow_weekofyearchart

As explained in the preceding post, this chart rearranges the data. Instead of a single line, there are now 52 overlapping lines, one for each week of the year. So each line is much less dense, and we can make out the hour-of-day/day-of-week pattern.
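
A minimal sketch of that rearrangement, again with assumed file and column names: pivot the hourly series so that each week of the year becomes its own line, indexed by hour of the week.

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("ca_demand_2020.csv", parse_dates=["timestamp"])
df["week"] = df["timestamp"].dt.isocalendar().week.astype(int)
df["hour_of_week"] = df["timestamp"].dt.dayofweek * 24 + df["timestamp"].dt.hour

# one column per week, one row per hour of the week
wide = df.pivot_table(index="hour_of_week", columns="week", values="demand")

ax = wide.plot(legend=False, color="lightgray", linewidth=0.5)
wide[33].plot(ax=ax, color="orange")   # highlight one mid-August week (the week number is a guess)
plt.show()
```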

Notice that the author draws attention to the upper envelope of this chart. They notice the line(s) near the top are from the summer, and this further guides their next analysis.

The reason for focusing on the envelope is the same as in the other chart. Where the lines are dense, it's not easy to make out the pattern.

Even the envelope is not as clear as it seems! There is no reason why the highlighted week (August 16 to 23) should have the highest consumption value each hour of each day of the week. It's possible that the line dips into the middle of the range at various points along the line. In the following chart, I highlight two time points in which lines may or may not have crossed:

Junkcharts_stackoverflow_confusingenvelope

In an interactive chart, each line can be highlighted to resolve the confusion.

Note that the lower envelope is much harder to decipher, given the density of lines.

***
The author then pursues a hypothesis that there are lines (weeks) with one intra-day peak and there are those with two peaks.

I'd propose that those are not discrete states but points on a continuum. The base pattern can be one with two peaks: a higher peak in the evening, and a lower peak in the morning. Now, if you imagine pushing up the evening peak while holding the lower peak at its height, you'd gradually "erase" the lower peak - it has simply receded into the background.

Possibly the underlying driver is the total demand for energy. The higher the demand, the more likely it's concentrated in the evening, which causes the lower peak to recede. The lower the demand, the more likely we see both peaks.

In either case, the prior chart drives the direction of the next analysis.

 

 

 

 

 


Type D charts

A twitter follower sent the following chart:

China_military_spending

It's odd to place the focus on China when the U.S. line is much higher, and U.S. spending has grown faster in the last few years than China's.

_trifectacheckup_image

In the Trifecta Checkup, this chart is Type D (link): the data are at odds with the message of the chart. The intended message is likely that China is building up its military in an alarming way. This dataset does not support such a conclusion.

The visual design of the chart can't be faulted though. It's clean and restrained. It even places line labels at the end of each line. Also, the topic of the chart - the arms race - is unambiguous.

One fix is to change the message to bring it in line with the data. If the question being addressed is which country spends the most on the military, or which country has been raising spending at the fastest rate, then the above chart is appropriate.

If the question is about spending in China, then a different measure such as average annual spending increase may work.

Neither solution requires changing the visual form. That's why data visualization excellence is more than just selecting the right chart form.


Start at zero, or start at wherever

Andrew's post about start-at-zero helps me refine my own thinking on this evergreen topic.

The specific example he gave is this one:

Andrewgelman_invitezeroin

The dataset is a numeric variable (y) with values over time (x). The minimum numeric value is around 3 and the range of values is from around 3 to just above 20. His advice is "If zero is in the neighborhood, invite it in". (Link)

The rule, as usual, sounds simpler than it really is. In the discussion, Andrew highlights several considerations.

Is zero a meaningful reference value? In his example, we assume it is and so we invite zero in. But, as Andrew also says, if zero is meaningless, then recall the invitation. So context must be accounted for.

In Chapter 1 of Numbersense (link), I looked at some SAT score data of applicants to competitive colleges. Is zero a meaningful reference value for SAT scores? Someone might argue yes, since it is the theoretical minimum score anyone could get on the test. Any statistician will likely say no, since a competitive college will never have seen an applicant submit a score of zero, or anywhere close to zero. Thus, starting such a chart at zero inserts a lot of whitespace and draws attention to a useless insight - how far someone's score sits above the theoretical worst performer.

***

What about the left panel of Andrew's chart makes us uncomfortable? I ask myself this question. My answer is that the horizontal axis highlights an arbitrary value that distracts from the key patterns of the data.

As shown below, the arbitrary value is ~2.5. This is utterly meaningless.

Redo_andrewgelman_invitezeroin

What if 0 is also a meaningless value for this dataset? I'd recommend "bench the axis". Like this:

Redo_andrewgelman_benchtheaxis

An axis is a tool to help readers understand a chart. If it isn't serving a function, an axis doesn't need to be there. When I choose a line chart for time-series data, I'm drawing attention to temporal change in the numeric values, or the range of values. I'm not saying something about the values relative to some reference number.
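
In matplotlib, for example, benching the axis amounts to hiding the vertical axis and labeling the points directly; a sketch with made-up values:

```python
import matplotlib.pyplot as plt

# time labels and a numeric series (made-up values purely for illustration)
x = ["2016", "2017", "2018", "2019", "2020"]
y = [3.2, 8.5, 14.1, 18.7, 20.3]

fig, ax = plt.subplots()
ax.plot(x, y, marker="o")
ax.yaxis.set_visible(False)                 # bench the vertical axis
for spine in ["left", "top", "right"]:
    ax.spines[spine].set_visible(False)     # keep only the time labels below
for xi, yi in zip(x, y):
    ax.annotate(f"{yi}", (xi, yi), textcoords="offset points", xytext=(0, 6), ha="center")
plt.show()
```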

From this example, we also see that the horizontal axis should not be regarded as a hanger for time labels. Time labels can exist by themselves.

 

 


How does the U.K. vote in the U.N.?

Through my twitter feed, I found my way to this chart, made by jamie_bio.

Jamie_bio_un_votes25032021

This is produced using R code even though it looks like a slide.

The underlying dataset concerns votes at the United Nations on various topics. Someone has already classified these topics. Jamie looked at voting blocs, specifically, countries whose votes agree most often or least often with the U.K.

If you look at his Github, this is one in a series of works he produced to hone his dataviz skills. Ultimately, I think this effort can benefit from some re-thinking. However, I also appreciate the work he has put into this.

Let's start with the things I enjoyed.

Given the dataset, I imagine the first visual one might come up with is a heatmap that shows countries in rows and topics in columns. That would work okay, as any standard chart form would, but it would be a data dump that doesn't tell a story. There are almost 200 countries in the entire dataset. The countries can only be ordered in one way, so if they are ordered by All Votes, they are not ordered for any of the other columns.

What Jamie attempts here is story-telling. The design leads the reader through a narrative. We start by reading the how-to-read-this box on the top left. This tells us that he's using a lunar eclipse metaphor. A full circle in blue indicates 0% agreement while a full circle in white indicates 100% agreement. The five circles signal that he's binning the agreement percentages into five discrete buckets, which helps simplify our understanding of the data.

Then, our eyes go to the circle of circles, labelled "All votes". This is roughly split in half, with the left side showing mostly blue and the right showing mostly white. That's because he's extracting the top 5 and bottom 5 countries, measured by their vote alignment with the U.K. The country names are clearly labelled.

Next, we see the votes broken up by topics. I'm assuming not all topics are covered but six key topics are highlighted on the right half of the page.

What I appreciate about this effort is the thought process behind how to deliver a message to the audience. Selecting a specific subset that addresses a specific question. Thinning the materials in a way that doesn't throw the kitchen sink at the reader. Concocting the circular layout that presents a pleasing way of consuming the data.

***

Now, let me talk about the things that need more work.

I'm not convinced that he got his message across. What is the visual telling us? Half of the circle is aligned with the U.K. while half isn't, so the U.K. sits on the fence on every issue? But this isn't the message. It's a bit of a mirage because the designer picked out the top 5 and bottom 5 countries. The top 5 are surely going to be voting almost 100% with the U.K. while the bottom 5 are surely going to be disagreeing with the U.K. a lot.

I did a quick sketch to understand the whole distribution:

Redo_junkcharts_ukvotes_overview_2

This is not intended as a show-and-tell graphic, just a useful way of exploring the dataset. You can see that Arms Race/Disarmament and Economic Development are "average" issues that have the same shape as the "All issues" line. A small number of countries are extremely aligned with the U.K., about 50 countries are aligned over 50% of the time, and the remaining 150 or so countries are aligned between 30 and 50% of the time. On human rights, there is less alignment. On Palestine, there is more alignment.

What the above chart shows is that the top 5 and bottom 5 countries both represent thin slivers of this distribution, which is why in the circular diagrams there is little differentiation. The two subgroups are very far apart but within each subgroup, there is almost no variation.
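
For the curious, this kind of distribution sketch takes only a few lines, assuming a tidy table of votes with the U.K.'s vote joined onto each resolution (the file and column names are hypothetical):

```python
import pandas as pd
import matplotlib.pyplot as plt

# one row per (country, resolution) with columns: country, topic, vote, uk_vote
votes = pd.read_csv("un_votes.csv")
votes["agree"] = votes["vote"] == votes["uk_vote"]

# percent agreement with the U.K. per country, sorted from most to least aligned
alignment = votes.groupby("country")["agree"].mean().sort_values(ascending=False) * 100

alignment.reset_index(drop=True).plot()   # the sorted curve shows the full distribution
plt.ylabel("% votes agreeing with U.K.")
plt.xlabel("countries, ranked by alignment")
plt.show()
```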

Another issue is the lunar eclipse metaphor. It's hard to wrap my head around a full white circle indicating 100% agreement while a full blue circle shows 0% agreement.

In the diagrams for individual topics, the two-letter acronyms for countries are used instead of the country names. A decoder needs to be provided, or just print the full names.

 

 

 

 

 

 


Surging gas prices

A reader finds this chart hard to parse:

Twitter_mta_gasprices

The chart shows the trend in gas prices in New York in the past two years.

This is a case in which the simple line chart works very well.

Junkcharts_redo_mtagasprices

I added annotations as the reasons behind the decline and rise in prices are reasonably clear. 

One should be careful when formatting dates. The legend of the original chart looks like this:

Mta_gasprices_date_legend

In the U.S., dates typically use a M/D/Y format. The above dates are ambiguous: "Aug 19" can mean August 19th or August 2019.
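
When labels are generated programmatically, spelling out the year removes the ambiguity; a small sketch:

```python
from datetime import date

d = date(2019, 8, 1)
print(d.strftime("%b %d"))   # "Aug 01"  - easily misread as a year once the data are monthly
print(d.strftime("%b %Y"))   # "Aug 2019" - unambiguous for monthly data
```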


Asymmetry and orientation

An author in Significance claims that a single season of Premier League football without live spectators is enough to prove that the so-called home field advantage is really a live-spectator advantage.

The following chart depicts the data going back many seasons:

Significance_premierleaguehomeadvantage_chart_2

I find this bar chart challenging.

It plots the ratio of home wins to away wins using an odds scale, which is not intuitive. The odds scale (probability of success divided by probability of failure) runs from 0 to positive infinity, with 1 being a special value indicating equal odds. But all the values for which away wins exceed home wins are squeezed into the interval between 0 and 1 while the values for which home wins exceed away wins are laid out between 1 and infinity. So it's an inherently asymmetric graphic for a symmetric formula.
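
A quick worked example of this asymmetry, with made-up numbers:

```python
import math

# Suppose home teams win 60% of decided games in one season and only 40% in another.
odds_home_heavy = 0.6 / 0.4   # 1.50 -> stretched over the interval (1, infinity)
odds_away_heavy = 0.4 / 0.6   # 0.67 -> squeezed into the interval (0, 1)

# On a log scale the two situations are mirror images around 0, restoring the symmetry.
print(math.log(odds_home_heavy), math.log(odds_away_heavy))   # 0.405, -0.405
```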

The section labeled "more away wins than home wins" is filled with red bars even for those seasons with a positive home field advantage, while the most recent season, the outlier, has a shorter bar in that section than the rest.

Here's an alternative view:

Redo_significance_premierleaguehomeawaywins_2

I have incorporated dual axes here - but the two axes differ only by a scaling factor. There are 380 games in a Premier League season, so the percentage scale is just a re-expression of the counts.
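
In matplotlib, for instance, such a paired axis can be defined as a pure rescaling of the primary axis, so the two scales can never drift apart. A sketch with made-up counts:

```python
import matplotlib.pyplot as plt

games_per_season = 380
seasons = ["16/17", "17/18", "18/19", "19/20", "20/21"]
home_wins = [220, 215, 230, 225, 180]      # made-up counts purely for illustration

fig, ax = plt.subplots()
ax.bar(seasons, home_wins)
ax.set_ylabel("home wins (count)")

# right-hand axis is just the left-hand axis divided by 380 (and multiplied by 100)
pct = ax.secondary_yaxis("right", functions=(lambda c: c / games_per_season * 100,
                                             lambda p: p * games_per_season / 100))
pct.set_ylabel("home wins (% of all games)")
plt.show()
```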

 

 


Charts that ask questions about the German election

In the prior post about Canadian elections, I suggested that designers expand beyond plots of one variable at a time. Today, I look at a project by DataWrapper on the German elections which happened this week. Thanks to long-time blog supporter Antonio for submitting the chart.

The following is the centerpiece of Lisa's work:

Datawrapper_germanelections_cducsu

CDU/CSU is Angela Merkel's party, represented by the black color. The chart answers one question only: did polls correctly predict election results?

The time period from 1994 to 2021 covers eight consecutive elections (counting the one this week). There are eight vertical blocks on the chart representing each administration. The right vertical edge of each block coincides with an election. The chart is best understood as the superposition of two time series.

You can trace the first time series by following a step function - let your eyes follow the flat lines between elections. This dataset shows the popular vote won by the party at each election, with the value updated after each election. The last vertical block represents an election that had not yet happened when this chart was created. As explained in the footnote, Lisa took the average poll result for the last month leading up to the 2021 election - in the context of this chart, she made the assumption that this cycle of polls would be 100% accurate.

The second time series corresponds to the ragged edges of the gray and black areas. If you ignore the colors, and the flat lines, you'll discover that the ragged edges form a contiguous data series. This line encodes the average popularity of the CDU/CSU party according to election polls.

Thus, the area between the step function and the ragged line measures the gap between polls and election-day results. When the polls underestimate the actual outcome, the area is colored gray; when the polls are over-optimistic, the area is colored black. In the last completed election of 2017, Merkel's party underperformed relative to the polls. In fact, the polls in the entire period between the 2013 and 2017 elections uniformly painted a rosier picture for CDU/CSU than what actually happened.
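
Here is a sketch of that two-series construction, with made-up numbers standing in for the polls and election results:

```python
import numpy as np
import matplotlib.pyplot as plt

# made-up numbers purely for illustration
election_years = [1994, 1998, 2002, 2005, 2009, 2013, 2017, 2021]
results = [40, 36, 38, 35, 34, 41, 33, 29]            # popular vote at each election
t = np.linspace(1994, 2021, 400)                      # fine-grained time axis
polls = 36 + 4 * np.sin(t / 2)                        # stand-in for the monthly poll average

# evaluate the step function: each point in time carries the most recent election result
step = np.array([results[np.searchsorted(election_years, x, side="right") - 1] for x in t])

fig, ax = plt.subplots()
ax.plot(t, step, color="black")                       # the step function of election results
ax.plot(t, polls, color="gray", linewidth=0.8)        # the ragged poll line
ax.fill_between(t, polls, step, where=polls < step, color="lightgray",
                interpolate=True)                     # polls underestimated the result
ax.fill_between(t, polls, step, where=polls >= step, color="black",
                alpha=0.6, interpolate=True)          # polls were too optimistic
plt.show()
```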

The last vertical block is interpreted a little differently. Since the reference level is the last month of polls (rather than the actual popular vote), the abundance of black indicates that Merkel's party has been suffering from declining poll numbers on the approach of this week's election.

***

The picture shown above seems to indicate that these polls are not particularly good. It appears they have limited ability to self-correct within each election cycle. Aside from the 1998-2002 period, the area colors seldom changed within each cycle. That means if the first polling average overestimated the party's popularity, then all subsequent polling averages were also optimistic. (The original post focused on a single pollster, which exacerbates this issue. Compare the following chart with the above, and you'll find even fewer color changes within cycle here:

Datawrapper_germanelections_cdu_singlepoll

Each pollster may be systematically biased but the poll aggregate is less so.)

 

Here's the chart for the SPD, which is CDU/CSU's biggest opponent, and the likely winner of this week's election:

Datawrapper_germanelections_spd

Overall, this chart has similar features to the CDU/CSU chart. The most recent polls seem to favor the SPD - the pink area indicates that the older polls of this cycle underestimated the last month's poll result.

Both these parties are in long-term decline, with popularity dropping from the 40% range in the 1990s to the 20% range in the 2020s.

One smaller party that seems to have gained followers is the Green party:

Datawrapper_germanelections_green

The excess of dark green, however, does not augur well for this election.

 

 

 

 

 


Tongue in cheek but a master stroke

Andrew jumped on the Benford bandwagon to do a tongue-in-cheek analysis of numbers in Hollywood movies (link). The key graphic is this:

Gelman_hollywood_benford_2-1024x683

Benford's Law is frequently invoked to prove (or disprove) fraud with numbers by examining the distribution of first digits. Andrew extracted movies that contain numbers in their names - mostly but not always sequences of movies with sequels. The above histogram (gray columns) shows the number of movies with each first digit. The red line is the expected count if Benford's Law holds. As is typical of such analyses, the histogram is closely aligned with the red line, and therefore, he did not find any fraud.

I'll blog about my reservations about Benford-style analysis on the book blog later - one quick point is: as with any statistical analysis, we should say there is no statistical evidence of fraud (more precisely, of the kind of fraud that can be discovered using Benford's Law), which is different from saying there is no fraud.

***

Andrew also showed a small-multiples chart that breaks up the above chart by movie groups. I excerpted the top left section of the chart below:

Gelman_smallmultiples_benford

The genius in this graphic is easily missed.

Notice that the red lines (which are the expected values if Benford's Law holds) appear identical on every single plot. And then notice that the lines don't represent the same values.

It's great to have the red lines look the same everywhere because they represent the immutable Benford reference. Because the number of movies is so small, he's plotting counts instead of proportions. If you let the software decide on the best y-axis range for each plot, the red lines will look different on different charts!

You can find the trick in the R code from Gelman's blog.

First, the maximum value of each plot's y-axis is set to the total number of observations in that group. Then, the expected Benford proportions are converted into expected Benford counts. The Benford counts are then shown against an axis that tops out at the total count, so, relatively speaking, what we are seeing are the Benford proportions. Thus, every red line looks the same despite holding different values.
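
A sketch of that trick, with made-up first-digit counts for two hypothetical movie groups:

```python
import numpy as np
import matplotlib.pyplot as plt

digits = np.arange(1, 10)
benford = np.log10(1 + 1 / digits)          # Benford proportions: 30.1%, 17.6%, ...

# made-up first-digit counts purely for illustration
groups = {"group A": np.array([5, 3, 2, 1, 1, 0, 1, 0, 0]),
          "group B": np.array([9, 5, 3, 3, 2, 1, 1, 0, 1])}

fig, axes = plt.subplots(1, len(groups), sharex=True)
for ax, (name, counts) in zip(axes, groups.items()):
    total = counts.sum()
    ax.bar(digits, counts, color="lightgray")
    ax.plot(digits, benford * total, color="red")   # expected Benford counts for this group
    ax.set_ylim(0, total)                           # the axis tops out at the group's total count
    ax.set_title(name)
plt.show()
```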

This is a master stroke.