More on equal-area histograms

Today, I'm returning to those "equal-area histograms" that Andrew wrote about last month. I have two previous posts about this. The first post introduces the concept: in a traditional histogram, the columns have the same bin width while the column heights can represent a variety of metrics, such as counts, relative frequencies (i.e. proportion of the data) and densities; in the equal-area histogram, the columns have varying widths while the area of each column is constant, and determined by the number of bins (columns).

Junkcharts_histogram_percentogram_priorpost

Here is a comparison of the two types of histograms.

In a second post, I explained the differences between using counts, frequencies and densities in the vertical axis. The underlying issue is that the histogram is not merely a column chart, in which the width of the columns is arbitrary and data-free - in the histogram, both the heights and widths of columns carry meaning. One feature of the histogram that almost everyone expects is that the areas of the columns sum to 1. This aligns with a desired interpretation of probabilities of data falling into specified ranges, as we'd like the amount of data in the entire range to add up to 100%. Unfortunately, this expectation is usually incompatible with reading the column heights as probabilities.

If the height of the columns represents the probability of data falling into the range as indicated by its width, then the sum of the column heights is 1, which implies that the sum of the column areas cannot be 1. On the other hand, if the column areas add up to 1, then the column heights will not add up to 1, and thus, in this scenario, we cannot interpret the column heights to be probabilities. As explained in the second post, the column heights in this situation are densities, which can be defined as the proportion of data divided by the bin width. Intuitively, it gives information on how dense or sparse the data are within the specified range.
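To make these definitions concrete, here is a minimal sketch in base R (my own illustration, not code from either post) that computes all three candidate heights for the same binning:

```r
# Bin some data, then compute counts, proportions, and densities as
# candidate column heights.
set.seed(1)
x <- rnorm(1000)
h <- hist(x, breaks = 20, plot = FALSE)

widths <- diff(h$breaks)            # bin widths (equal in this case)
props  <- h$counts / sum(h$counts)  # proportion of data in each bin
dens   <- props / widths            # density = proportion / bin width

sum(props)           # 1: proportion heights sum to 1 ...
sum(props * widths)  # ... but then the areas sum to the bin width, not 1
sum(dens * widths)   # 1: density heights make the areas sum to 1
all.equal(dens, h$density)  # matches R's own density values
```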

***

Today's post starts with a toy dataset, containing randomly generated values from a normal distribution (bell curve) centered at 4 and with standard deviation 1.

Here is the traditional histogram of the dataset, using 100 equal-width bins. (I generated 10,000 values.)

Histogram_normals
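Here is a sketch of how such a dataset and chart could be produced in base R (the seed and styling are my choices; the post doesn't include its code):

```r
# 10,000 draws from a normal distribution centered at 4 with sd 1,
# shown as a traditional equal-width histogram with 100 bins.
set.seed(42)
x <- rnorm(10000, mean = 4, sd = 1)
hist(x, breaks = 100, main = "Traditional histogram", xlab = "value")
```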

Four_precentograms_normals

Next, I created a panel of four equal-area histograms, with an increasing number of bins. Each is built from the same underlying dataset.

The first histogram divides the data into 4 bins; then 10 bins, 20 bins and 100 bins.
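A sketch of one way to construct these in base R: place the bin breaks at quantiles of the data, so each of the n bins holds exactly 1/n of the observations. With unequal breaks and freq = FALSE, the hist function plots densities, which makes every column's area equal to 1/n:

```r
# Equal-area histogram: breaks at quantiles give each of the n bins
# exactly 1/n of the data; plotted as densities, each column area = 1/n.
equal_area_hist <- function(x, n) {
  breaks <- quantile(x, probs = seq(0, 1, length.out = n + 1))
  hist(x, breaks = breaks, freq = FALSE,
       main = paste(n, "bins"), xlab = "value")
}

set.seed(42)
x <- rnorm(10000, mean = 4, sd = 1)
par(mfrow = c(4, 1))
for (n in c(4, 10, 20, 100)) equal_area_hist(x, n)
```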

In the 4-bin case, each column contains 1/4 = 25% of the data. The middle two columns contain 50% of the data, and they have high densities, as these columns are narrow. It's a crude approximation of the familiar bell curve.

As we increase the number of bins, the columns in the middle of the distribution, where most of the data are concentrated, become narrower. In the sparse regions, the column width doesn't necessarily grow because each column must contain 1/n of the data, where n is the number of columns. As the number of columns increases, each column contains less of the data.

The bottom chart is the "percentogram", which is what Andrew's correspondent proposed. The number of bins is set to 100, so each column contains exactly 1 percent of the data. For a normal distribution, the columns in the middle are very tall and thin.

The reason why the middle of the percentogram looks faded is that I asked for a white border around each column. But when the columns are so thin, even if one sets the border width very small, what readers see is a mixture of orange and white.

With a high number of bins, we notice a few things: a) the outline of the histogram becomes more "ragged" as the number of bins grows, b) the middle columns become razor-thin, and c) the width conceded by the middle columns is absorbed not by the columns at the edges but by those between the peak and the edges.

I'm struggling a bit to justify this percentogram versus the typical, equal-width histogram.

Let me go down a different path.

***

In "principled" histograms, the column heights represent data densities, while the total area of the columns add up to 1. This leads us to a new understanding of the relationship between the equal-width histogram and the equal-area histogram.

We start with data density defined by (proportion of data) / (bin width). Those two values are not independent - one is fully determined by the other, given the underlying dataset. In a traditional equal-width histogram, the question is: how much of the data is found in a column of fixed width? In the new equal-area histogram, the question is: how wide is the bin that contains a fixed amount of data? In the former, the denominator is fixed while the numerator varies; the opposite occurs in the latter.

***

We also recognize that, given the range of the data, there is a relationship between the sets of bin widths in the two types of histograms. In the traditional histogram, all bin widths have the same value, equal to the range of the data divided by the number of bins. Think of this as the average bin width. In an equal-area histogram, the bin widths vary; however, they must still add up to the range of the data. For two comparable histograms with the same number of bins, the average bin width must therefore be the same for both. (I'm ignoring any rounding situations in which the range of the histogram is larger than the range of the data.)
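A quick numeric check of this claim, re-generating the simulated normal data from above:

```r
# The mean bin width of the quantile (equal-area) breaks equals the
# constant bin width of the equal-width histogram: range / number of bins.
set.seed(42)
x <- rnorm(10000, mean = 4, sd = 1)
n <- 100
diff(range(x)) / n                                              # equal-width bin
mean(diff(quantile(x, probs = seq(0, 1, length.out = n + 1))))  # same value
```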

Now, consider the middle of the normal distribution where the data are dense. In the traditional histogram, the column in the middle still has width equal to the average bin width. In the equal-area histogram, the middle column has width much smaller than the average bin width. In other words, we can think of the column in the traditional histogram being broken up into many thin columns in the equal-area histogram, each containing 1% of the data in the case of the percentogram.

The height of the column is the data density. In the traditional histogram, the middle column pools a larger sample; in the equal-area histogram, each of those thin columns is a small partition of that sample. This explains observation (a) above, in which the outline of the equal-area histogram is more ragged - each column contains fewer data from which to estimate the data density.

But this raggedness is artificial - it's sampling noise.

***

The sparse regions are more complicated still; the situation there is the reverse of the above. At the edges of the normal distribution, the columns of the new histogram are wider than those of the traditional histogram. So, we can think of breaking up the edge column of the new histogram into multiple columns of the traditional histogram.

The interpretation is more complicated because the data are sparse in this region. Obviously, the density estimates of the traditional histogram are poor in sparse regions because not enough data reside there. The density estimate of the new histogram is based on a larger sample size.

However.

Yes, however, whether the new histogram's density estimate is better depends on the shape of the tail of the distribution. A normal distribution has rapidly decaying tails (faster than exponential), which means that the data density declines quite drastically the further we go into the tail. Therefore, the new histogram averages the data densities across a large part of the tail, wiping out that declining shape, while the traditional histogram preserves it - at the expense of greater sampling variability due to smaller sample sizes.

***

For what it's worth, let's look at some histograms for an exponential random variable.

Here is the traditional histogram:

Histogram_expos

The data are extremely dense on the left side, while the distribution has a long tail on the right side.

Four_percentograms_expos

Here are the four equal-area histograms for 4, 10, 20 and 100 bins.
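The construction is the same quantile-break sketch as before; I assume a rate-1 exponential here, as the post doesn't state the parameter:

```r
# Equal-area histograms for exponential data: breaks at quantiles,
# heights plotted as densities, so each column holds 1/n of the data.
set.seed(42)
y <- rexp(10000, rate = 1)
par(mfrow = c(4, 1))
for (n in c(4, 10, 20, 100)) {
  breaks <- quantile(y, probs = seq(0, 1, length.out = n + 1))
  hist(y, breaks = breaks, freq = FALSE,
       main = paste(n, "bins"), xlab = "value")
}
```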

The four-bin version gives a nice summary of the shape. As the number of bins goes up, as before, the denser regions now have tall, thin spikes. Again, because of the white borders, the last histogram with 100 bins is faded where the data are densest. (So obviously, don't follow my lead, and eliminate borders if you want to use it.)

The 100-bin version looks almost the same as the traditional histogram.

***

At this stage of the exploration, I still haven't found a compelling reason to switch to equal-area histograms. In the denser regions, they add sampling noise. If I don't care about the sparser areas, specifically the shape of the tails, maybe they provide a cleaner presentation.

The one thing you're afraid to ask about histograms

In the previous post about a variant of the histogram, I glossed over a few perplexing issues - deliberately. Today's post addresses one of these topics: what is going on in the vertical axis of a histogram?

The real question is: what data are encoded in the histogram, and where?

***

Let's return to the dataset from the last post. I grabbed data from a set of international football (i.e. soccer) matches. Each goal scored has a scoring minute. If the goal is scored in regulation time, the scoring minute is a number between 1 and 90 minutes. Specifically, the data collector applies a rounding up: any goal scored between 0 and 60 seconds is recorded as 1, all the way up to a goal scored between the 89th and 90th minutes being recorded as 90. In this post, I only consider goals scored in regulation time, so the horizontal axis runs from 1 to 90 minutes.

The kneejerk answer to the posed question is: counts in bins. Isn't it the case that in constructing a histogram, we divide the range of values (1-90) into bins, and then plot the counts within bins, i.e. the number of goals scored within each bin of minutes?

The following is what we have in mind:

Junkcharts_counthistogram_1

Let's call this the "count histogram".

Some readers may dislike the scale of the vertical axis, as its interpretation hinges on the total sample size. Hence, another kneejerk answer is: frequencies in bins. Instead of plotting counts directly, plot frequencies, which are just standardized counts. Just divide each value by the sample size. Here's the "frequency histogram":

Junkcharts_freqhistogram_1

The count and frequency histograms are identical except for the scale, and appear intuitively clear. The count and frequency data are encoded in the heights of the columns. The column widths are an afterthought; they are fixed at a constant value. Unlike in a column chart, the gap width in a histogram is typically zero, as we want to partition the horizontal range into adjoining sections.

Now, if you look carefully at the histogram from the last post, reproduced below, you'd find that it plots neither counts nor frequencies:

Junkcharts_densityhistogram_1

The numbers on the axis are fractions, and suggest that they may be frequencies, but a quick check proves otherwise: with 9 columns, the average column should contain at least 10 percent of the data. The total of the displayed fractions is nowhere near 100%, which is our expectation if the values are relative frequencies. You may have come across this strangeness when creating histograms using R or some other software.

The purpose of this post is to explain what values are being plotted and why.

***

What are the kinds of questions we like to answer about the distribution of data?

At a high level, we want to know "where are my data"?

Arguably these two questions are fundamental:

  • what is the probability that the data falls within a given range of values? e.g., what is the probability that a goal is scored in the first 15 minutes of a football match?
  • what is the relative probability of data between two ranges of values? e.g. are teams more likely to score in the last 5 minutes of the first half or the last 5 minutes of the second half of a football match?

In a histogram, the first question is answered by comparing a given column to the entire set of columns while the second question is answered by comparing one column to another column.

Let's see what we can learn from the count histogram.

Junkcharts_counthistograms_questions

In a count histogram, the heights encode the count data. To address the relative probability question, we note that the ratio of heights is the ratio of counts, and the ratio of counts is the same as the ratio of frequencies. Thus, we learn that teams are roughly 3000/1500 = 2 times as likely to score in the last 5 minutes of the second half as in the last 5 minutes of the first half. (See the green columns.)

[For those who follow football, it's clear that the data collector treated goals scored during injury time of either half as scored during the last minute of the half, so this dataset can't be used to analyze timing of goals unless the real minutes were recorded for injury-time goals.]

To address the range probability question, we compare the aggregate height of the three orange columns with the total heights of all columns. Note that I said "height", not "area," because the heights directly encode counts. It's actually taxing to figure out the total height!

We resort to reading the total area of all columns. This should yield the correct answer: the area is directly proportional to the height because the column widths are fixed as a constant. Bear in mind, though, if the column widths vary (the theme of the last post), then areas and heights are not interchangeable concepts.

Estimating the total area is still not easy, especially if the column heights exhibit high variance. What we need is the proportion of the total area that is orange. It's possible to see, but not easy.

You may interject now to point out that the total area should equal the aggregate count (sample size). But that is a fallacy! It's very easy to make this error. The aggregate count is actually the total height, and because of that, the total area is the aggregate count multiplied by the column width! In my example, the total height is 23,682, which is the number of goals in the dataset, while the total area is 23,682 times 5 minutes.

[For those who think in equations, the total area is the sum over all columns of height(i) x width(i). When width is constant, we can take it outside the sum, and the sum of height(i) is just the total count.]
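A numeric check of this identity, using simulated stand-in data since the football dataset itself isn't included in the post:

```r
# In a count histogram, total area = (total count) x (bin width).
set.seed(7)
minutes <- sample(1:90, size = 23682, replace = TRUE)  # placeholder data
h <- hist(minutes, breaks = seq(0, 90, by = 5), plot = FALSE)
sum(h$counts)                   # total height: 23,682 goals
sum(h$counts * diff(h$breaks))  # total area: 23,682 x 5 = 118,410
```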

***

The count histogram is hard to use because it requires knowing the sample size. It's the first thing that is produced because the raw data are counts in bins. The frequency histogram is better at delivering answers.

In the frequency histogram, the heights encode frequency data. We can therefore just read off the relative probability of the orange column, bypassing the need to compute the total area.

This workaround actually promotes the fallacy described above for the count histogram. It is easy to fall into the trap of thinking that the total area of all columns is 100%. It isn't.

Similar to before, the total height is the total frequency (i.e. 100%), but the total area is the total frequency multiplied by the column width; that is to say, the total area equals the bin width. In the football example, using 5-minute intervals, the total area of the frequency histogram is 5 (minutes) in the case of equal bin widths.

How about the relative probability question? On the frequency histogram, the ratio of column heights is the ratio of frequencies, which is exactly what we want. So long as the column width is constant, comparing column heights is easy.

***

One theme in the above discussion is that in the count and frequency histograms, the count and frequency data are encoded in the column heights but not the column areas. This is a source of major confusion. Because of the convention of using equal column widths, one treats areas and heights as interchangeable... but not always. The total column area isn't the same as the total column height.

This observation has some unsettling implications.

As shown above, the total area is affected by the column width. The column width in an equal-width histogram is the range of the x-values divided by the number of bins. Thus, the total area is a function of the number of bins.

Consider the following frequency histograms of the same scoring minutes dataset. The only difference is the number of bins used.

Junkcharts_freqhistogram_differentbins

Increasing the number of bins has a series of effects:

  • the columns become narrower
  • the columns become shorter, because each narrower bin can contain at most the same count as the wider bin that contains it.
  • the total area of the columns becomes smaller.

This last one is unexpected and completely messes up our intuition. When we increase the number of bins, not only are the columns shortening but the total area covered by all the columns is also shrinking. Remember that the total area, whether for a count or a frequency histogram, carries a factor equal to the bin width. A higher number of bins means a smaller bin width, which means a smaller total area.

***

What if we force the total area to be constant regardless of how many bins we use? This setting seems more intuitive: in the 5-bin histogram, we partition the total area into five parts while in the 10-bin histogram, we divide it into 10 parts.

This is the principle used by R and other statistical software when they produce so-called density histograms. The count and frequency data are encoded in the column areas - by implication, the same data cannot simultaneously be encoded in the column heights!

The way to accomplish this is to divide by the bin width. If you look at the total area formulas above, for the count histogram, total area is total count x bin width. If the height is count divided by bin width, then the total area is the total count. Similarly, if the height in the frequency histogram is frequency divided by bin width, then the total area is 100%.

Count divided by some section of the x-range is otherwise known as "density". It captures the concept of how tightly the data are packed inside a particular section of the dataset. Thus, in a count-density histogram, the heights encode densities while the areas encode counts. In this case, total area is the total count. If we want to standardize total area to be 1, then we should compute densities using frequencies rather than counts. Frequency densities are just count densities divided by the total count.

To summarize, in a frequency-density histogram, the heights encode densities, defined as frequency divided by the bin width. This is not very intuitive; just think of densities as how closely packed the data are in the specified bin. The column areas encode frequencies so that the total area is 100%.
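In base R, setting freq = FALSE produces exactly this frequency-density histogram. A sketch, again with stand-in data:

```r
# Heights are densities (proportion / bin width); areas are proportions,
# so the column areas total 100%.
set.seed(7)
minutes <- sample(1:90, size = 23682, replace = TRUE)  # placeholder data
h <- hist(minutes, breaks = seq(0, 90, by = 5), freq = FALSE,
          xlab = "scoring minute", main = "Frequency-density histogram")
sum(h$density * diff(h$breaks))  # 1: total area is 100%
```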

The reason why density histograms are confusing is that we are reading off column heights while thinking that the total area should add up to 100%. Column heights and column areas cannot both add up to 100%. We have to pick one or the other.

Comparing relative column heights still works when the density histogram has equal bin widths. In this case, the relative height and relative area are the same because relative density equals relative frequencies if the bin width is fixed.

The following charts recap the discussion above. They show how the frequency histogram does not preserve the total area when bin sizes are changed, while the density histogram does.

Junkcharts_freqdensityhistograms_differentbins

***

The density histogram is a major pain for solving range probability questions because the frequencies are encoded in the column areas, not the heights. Areas are not marked out in a graph.

The column height gives us densities which are not probabilities. In order to retrieve probabilities, we have to multiply the density by the bin width, that is to say, we must estimate the area of the column. That requires mapping two dimensions (width, height) onto one (area). It is in fact impossible without measurement - unless we make the bin widths constant.

When we make the bin widths constant, we still can't read densities off the vertical axis, and treat them as probabilities. If I must use the density histogram to answer the question of how likely a team scores in the first 15 minutes, I'd sum the heights of the first 3 columns, which is about 0.025, and then multiply it by the bin width of 5 minutes, which gives 0.125 or 12.5%.
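The same arithmetic in code, with the stand-in data from above (the uniform placeholder won't reproduce the 12.5% figure, but the identity holds):

```r
# Probability of a goal in the first 15 minutes = sum of the first three
# density heights x the 5-minute bin width.
set.seed(7)
minutes <- sample(1:90, size = 23682, replace = TRUE)  # placeholder data
h <- hist(minutes, breaks = seq(0, 90, by = 5), plot = FALSE)
sum(h$density[1:3]) * 5             # heights alone are not probabilities...
sum(h$counts[1:3]) / sum(h$counts)  # ...but times bin width, they match this
```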

At the end of this exploration, I like the frequency histogram best. The density histogram is useful when we are comparing different histograms, which isn't the most common use case.

***

The histogram is a basic chart in the tool kit. It's more complicated than it seems. I haven't come across any intro dataviz books that explain this clearly.

Most of this post deals with equal-width histograms. If we allow bin widths to vary, it gets even more complicated. Stay tuned.

***

For those using base R graphics, I hope this post helps you interpret what they say in the manual. The default behavior of the "hist" function depends on whether the bins are equal width:

  • if the bin width is constant, then R produces a count histogram. As shown above, in a count histogram, the column heights indicate counts in bins, but the total column area does not equal the total sample size; it equals the total sample size multiplied by the bin width. (Equal width is the default unless the user specifies bin breakpoints.)
  • if the bin width is not constant, then R produces a (frequency-)density histogram. The column heights are densities, defined as frequencies divided by bin width while the column areas are frequencies, with the total area summing to 100%.

Unfortunately, R does not generate a frequency histogram. To make one, you'd have to divide the counts in bins by the sum of counts. (In making some of the graphs above, I tricked it.) You also have to override the default to make a frequency-density histogram with equal-width bins, as it's coded to produce a count histogram when bin size is fixed.
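Here is my guess at one such trick - the post doesn't show the author's code - which overwrites the counts in the histogram object with proportions before plotting:

```r
# Coax a frequency histogram out of base R by replacing the counts
# in the histogram object with proportions, then plotting them as "counts".
set.seed(7)
minutes <- sample(1:90, size = 23682, replace = TRUE)  # placeholder data
h <- hist(minutes, breaks = seq(0, 90, by = 5), plot = FALSE)
h$counts <- h$counts / sum(h$counts)  # counts -> proportions
plot(h, freq = TRUE, ylab = "proportion of goals")
```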

 

P.S. [5-2-2023] As pointed out by a reader, I should clarify that R and I use the word "frequency" differently. Specifically, R uses frequency to mean counts; therefore, what I have been calling the "count histogram", R would call a "frequency histogram", and what I have been describing as a "frequency histogram", the "hist" function simply does not generate unless you trick it into doing so. I'm using "frequency" in the everyday sense of the word, as in "the frequency of the bus". In many statistical packages, frequency is used to mean "count", as in the frequency table, which is just a table of counts. The reader suggested "proportion", which I like, or something like "weight".


Area chart is not the solution

A reader left a link to a Wiki chart, which is ghastly:

House_Seats_by_State_1789-2020_Census

This chart concerns the trend of relative proportions of House representatives in the U.S. Congress by state, and can be found at this Wikipedia entry. The U.S. House is composed of Representatives, and the number of representatives is roughly proportional to each state's population. This scheme actually gives small states disproportionate representation, since the lowest number of representatives is 1 while the total number of representatives is fixed at 435.

We can do a quick calculation: 1/435 = 0.23%, so any state that has less than 0.23% of the population is over-represented in the House. Alaska, Vermont and Wyoming are all close to that level. The primary way in which small states get larger representation is via the Senate, which seats two senators per state no matter the size. (If you've wondered about Nate Silver's website: 435 Representatives + 100 Senators + 3 for DC = 538 electoral votes for U.S. Presidential elections.)

***

So many things have gone wrong with this chart. There are 50 colors for 50 states. The legend arranges the states by the appropriate metric (good) but in ascending order (bad). This is a stacked area chart, which makes it very hard to figure out the values other than the few at the bottom of the chart.

A nice way to plot this data is a tile map with line charts. I found a nice example that my friend Xan put together in 2018:

Xang_cdcflu_tilemap_lines

A tile map is a conceptual representation of the U.S. map in which each state is represented by equal-sized squares. The coordinates of the states are distorted in order to line up the tiles. A tile map is a small-multiples setup in which each square contains a chart of the same design, to facilitate inter-state comparisons.

In the above map, Xan also takes advantage of the foregrounding concept. Each chart actually contains all 50 lines for every state, all shown in gray while the line for the specific state is bolded and shown in red.

***

A chart with 50 lines looks very different from one with 50 areas stacked on each other. California, the most populous state, has just 12% of the total population, so all 50 lines are squeezed into the lower portion of the chart, looking like spaghetti. Thus, the fore/backgrounding is important to make sure it's readable.

I suspect that the designer chose a stacked area chart because the line chart looked like spaghetti. But that's the wrong solution. While the lines no longer overlap each other, it is a real challenge to figure out the state-level trends - one has to focus on the heights of the areas, rather than the boundary lines.

[P.S. 2/27/2023] As we like to say, a picture is worth a thousand words. Twitter reader with the handle LHZGJG made the tile map I described above. It looks like this:

Lhzgjg_redo_houseapportionment

You can pick out the states with the key changes really fast. California, Texas, Florida on the upswing, and New York, Pennsylvania going down. I like the fact that the state names are spelled out. Little tweaks are possible but this is a great starting point. Thanks LHZGJG! ]

 


Modern design meets dataviz

This chart was submitted via Twitter (thanks John G.).

OptimisticEstimatingHomeValue

Perhaps the designer is inspired by this:

Royalontariomuseum

That's the Royal Ontario Museum, one of the beautiful landmarks in Toronto.

***

The chart addresses an interesting question - how much do home buyers over- or under-estimate home value? That said, gathering data to answer this question is challenging. I won't delve into this issue in this post.

Let's ask where readers are looking for data on the chart. It appears that we should use the right edge of each triangle. While the left edge of the red triangle might be useful, the left edges of the other triangles definitely would not contain data.

Note that, like modern architecture, the designer is playing with edges. None of the four right edges is properly vertical - none of the lines cuts the horizontal axis at a right angle. So the data actually reside in the imaginary vertical lines from the apexes to the horizontal baseline.

Where is the horizontal baseline? It's not where it is drawn either. The last number in the series is a negative number and so the real baseline is in the middle of the plot area, where the 0% value is.

The following chart shows (left side) the misleading signals sent to readers and (right side) the proper way to consume the data.

Redo_rockethomes_priceprojection

The degree of distortion is quite extreme. Only the fourth value is somewhat accurate, albeit by accident.

The design does not merely perturb the chart; it causes a severe adverse reaction.

 

P.S. [9/19/2022] Added submitter name.

Here's a radar chart that works, sort of

In the same Reuters article that featured the speedometer chart which I discussed in this blog post (link), the author also deployed a small multiples of radar charts.

These radar charts are supposed to illustrate the article's theme that "European countries are racing to fill natural gas storage sites ahead of winter."

Here's the aggregate chart that shows all countries:

Reuters_gastorage_radar_details

In general, I am not a fan of radar charts. When I first looked at this chart, I also disliked it. But keep reading because I eventually decided that this usage is an exception. One just needs to figure out how to read it.

One reason why I dislike radar charts is that they always come with a lot of non-data-ink baggage. We notice that the months of the year are plotted in a circle starting at the top. The start of the war on Feb 24, 2022 is marked off in red. Then, there is the dotted circle, which represents the 80% target gas storage amount.

The trick is to avoid interpreting the areas, or the shapes of the blue and gray patches. I know, they look cool and grab our attention but in the context of conveying data, they are meaningless.

Redo_reuters_eugasradarall_1

Instead of areas, focus on the boundaries of those patches. Don't follow one boundary around the circle. Pick a point in time, corresponding to a line between the center of the circle and the outermost circle, and look at the gap between the two lines. In the diagram shown on the right, I marked off the two relevant points on the day of the start of the war.

From this, we observe that across Europe, the gas storage was far less than the 80% target (recently set).

Redo_reuters_eugasradarall_2

By comparing two other points (the blue and gray boundaries), we see that during February, gas storage is at a seasonal low, and in 2022, it is on the low side of the 5-year average.

However, the visual does not match well with the theme of the article! While the gap between the blue and gray boundaries decreased since the start of the war, the blue boundary does not exceed the historical average, and does not get close to 80% until August, a month in which gas storage reaches 80% in a typical year.

This is an example of a chart in which there is a misalignment between the Q and the V corners of the Trifecta Checkup (link).

_trifectacheckup_image

The question/message is that Europeans are reacting to the war by increasing their gas storage beyond normal. The visual actually says that they are increasing the gas storage as per normal.

***

As I noted before, when read in a particular way, these radar charts serve their purpose, which is more than can be said for most radar charts.

The designer made several wise choices:

Instead of drawing one ring for each year of data, the designer averaged the past 5 years and turned that into one single ring (patch). You can imagine what this radar chart would look like if the prior data were not averaged: hula hoop mania!

Marawa-bgt

Simplifying the data in this way also makes the small multiples work. The designer uses the aggregate chart as a legend and a "how to read this" guide. And in a further section below, the designer plots individual countries, without the non-data-ink baggage:

Reuters_gastorage_mosttofill

Thanks again to longtime reader Antonio R. who submitted this chart.

Happy Labor Day weekend for those in the U.S.!

Superb tile map offering multiple avenues for exploration

Here's a beauty by WSJ Graphics:

Wsj_powerproduction

The article is here.

This data graphic illustrates the power of the visual medium. The underlying dataset is complex: power production by type of source by state by month by year. That's more than 90,000 numbers. They all reside on this graphic.

Readers amazingly make sense of all these numbers without much effort.

It starts with the summary chart on top.

Wsj_powerproduction_us_summary

The designer made decisions. The data are presented in relative terms, as proportion of total power production. Only the first and last years are labeled, thus drawing our attention to the long-term trend. The order of the color blocks is carefully selected so that the cleaner sources are listed at the top and the dirtier sources at the bottom. The order of the legend labels mirrors the color blocks in the area chart.

It takes only a few seconds to learn that U.S. power production has largely shifted away from coal with most of it substituted by natural gas. Other than wind, the green sources of power have not gained much ground during these years - in a relative sense.

This summary chart serves as a reading guide for the rest of the chart, which is a tile map of all fifty states. Embedded in the tile map is a small-multiples arrangement.

***

The map offers multiple avenues for exploration.

Some readers may look at specific states. For example, California.

Wsj_powerproduction_california

Currently, about half of the power production in California comes from natural gas. Notably, there is no coal at all in any of these years. In addition to wind, solar energy has also gained. All of these insights come without the need for any labels or gridlines!

Wsj_powerproduction_westernstates

Browsing around California, readers find different patterns in other Western states like Oregon and Washington.

Hydroelectric energy is the dominant source in those two states, with wind gradually taking share.

At this point, readers realize that the summary chart up top hides remarkable state-level variations.

***

There are other paths through the map.

Some readers may scan the whole map, seeking patterns that pop out.

One such pattern is the cluster of states that use coal. In most of these states, the proportion of coal has declined.

Yet another path exists for those interested in specific sources of power.

For example, the trend in nuclear power usage is easily followed by tracking the purple. South Carolina, Illinois and New Hampshire are three states that rely on nuclear for more than half of their power.

Wsj_powerproduction_vermont

I wonder what happened in Vermont about 8 years ago.

The chart says they renounced nuclear energy. Here is some history. This one-time event caused a disruption in the time series, unique on the entire map.

***

This work is wonderful. Enjoy it!


Think twice before you spiral

After Nathan at FlowingData sang praises of the following chart, a debate ensued on Twitter as others disliked it.

Nyt_spiral_covidcases

The chart was printed in an opinion column in the New York Times (link).

I have found few uses for spiral charts, and this example has not changed my mind.

The canonical time-series chart is like this:

Junkcharts_redo_nyt_covidcasesspiral_1

 

***

The area chart takes no effort to understand. We can see when the peaks occurred. We notice that the current surge is already double the last peak seen a year ago.

It's instructive to trace how one gets from the simple area chart to the spiral chart.

Junkcharts_redo_nyt_covidcasesspiral_2

Step 1 is to center the area on the zero line, instead of rooting it at the zero baseline. While this technique frequently makes for a more pleasant visual (because of our preference for symmetry), it actually makes it harder to see the trend over time. Effectively, any change is split in half, which is why the envelope of the area is less sharp.

Junkcharts_redo_nyt_covidcasesspiral_3

In Step 2, I massively compress the vertical scale. That's because when you plot a spiral, you are forced to fit each cycle of data into a much shorter range. Such compression causes the year-on-year doubling of cases to appear less dramatic. (Actually, the aspect ratio is devastated: while the vertical scale is hugely compressed, the horizontal scale is dramatically stretched out due to the curled-up design.)

Junkcharts_redo_nyt_covidcasesspiral_4

Step 3 may elude your attention. If you simply curl up the compressed, centered area chart, you don't get the spiral chart. The key is to ask about the radius of the spiral. As best I can tell, the radius has no meaning; it is gradually increased so that each year of data has its own "orbit". What would the change in radius translate to on our non-circular chart? It should mean that the center of the area is gradually lifted away from the zero line. On the right chart, I mimic this effect. (I only measured the change in radius every 3 months, so the change is more angular than displayed in the spiral chart.) The problem I have with this step is that it serves no purpose, while it complicates cognition.

In Step 4, we just curl up the object into a ball, aligning the months of the year across cycles.

Junkcharts_redo_nyt_covidcasesspiral_5
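To make the four steps concrete, here is a sketch of the spiral's geometry using made-up daily data (my own reconstruction, not the NYT's method):

```r
# One year per turn; the centerline's radius grows steadily (Step 3),
# and the centered, compressed values (Steps 1-2) form a band around it.
cases <- sin(seq(0, 4 * pi, length.out = 730))^2 + 0.1  # two fake years
t     <- seq_along(cases)
theta <- 2 * pi * (t - 1) / 365          # Step 4: align months across turns
r0    <- 2 + t / 365                     # Step 3: lift one "orbit" per year
half  <- cases / max(cases) / 2          # Steps 1-2: center and compress
xy    <- cbind(sin(theta), cos(theta))   # start each year at 12 o'clock
band  <- rbind((r0 + half) * xy, ((r0 - half) * xy)[rev(t), ])
plot(band, type = "n", asp = 1, axes = FALSE, xlab = "", ylab = "")
polygon(band, col = "firebrick", border = NA)
```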

This is the point when I realized I had missed a Step 2B. I carefully aligned the scales of both charts so that the 150K cases shown in the legend on the right have the same vertical representation as on the left. This exposes a severe horizontal rescaling. The length of the horizontal axis on the left chart is many times smaller than the circumference of the spiral! That's why I said earlier that one of the biggest features of this spiral chart is that it imposes a dubious aspect ratio - extremely wide and extremely short.

As usual, think twice before you spiral.

Charts that ask questions about the German election

In the prior post about Canadian elections, I suggested that designers expand beyond plots of one variable at a time. Today, I look at a project by DataWrapper on the German elections which happened this week. Thanks to long-time blog supporter Antonio for submitting the chart.

The following is the centerpiece of Lisa's work:

Datawrapper_germanelections_cducsu

CDU/CSU is Angela Merkel's party, represented by the black color. The chart answers one question only: did polls correctly predict election results?

The time period from 1994 to 2021 covers eight consecutive elections (counting the one this week). There are eight vertical blocks on the chart representing each administration. The right vertical edge of each block coincides with an election. The chart is best understood as the superposition of two time series.

You can trace the first time series by following a step function - let your eyes follow the flat lines between elections. This dataset shows the popular vote won by the party at each election, with the value updated after each election. The last vertical block represents an election that has not yet happened when this chart was created. As explained in the footnote, Lisa took the average poll result for the last month leading up to the 2021 election - in the context of this chart, she made the assumption that this cycle of polls will be 100% accurate.

The second time series corresponds to the ragged edges of the gray and black areas. If you ignore the colors, and the flat lines, you'll discover that the ragged edges form a contiguous data series. This line encodes the average popularity of the CDU/CSU party according to election polls.

Thus, the area between the step function and the ragged line measures the gap between polls and election-day results. When the polls underestimate the actual outcome, the area is colored gray; when the polls are over-optimistic, the area is colored black. In the last completed election of 2017, Merkel's party underperformed relative to the polls. In fact, the polls in the entire period between the 2013 and 2017 elections uniformly painted a rosier picture for CDU/CSU than what actually happened.

The last vertical block is interpreted a little differently. Since the reference level is the last month of polls (rather than the actual popular vote), the abundance of black indicates that Merkel's party has been suffering from declining poll numbers on the approach of this week's election.

***

The picture shown above seems to indicate that these polls are not particularly good. It appears they have limited ability to self-correct within each election cycle. Aside from the 1998-2002 period, the area colors seldom changed within each cycle. That means if the first polling average overestimated the party's popularity, then all subsequent polling averages were also optimistic. (The original post focused on a single pollster, which exacerbates this issue. Compare the following chart with the above, and you'll find even fewer color changes within cycle here:

Datawrapper_germanelections_cdu_singlepoll

Each pollster may be systematically biased but the poll aggregate is less so.)

 

Here's the chart for the SPD, which is CDU/CSU's biggest opponent, and likely winner of this week's election:

Datawrapper_germanelections_spd

Overall, this chart has similar features as the CDU/CSU chart. The most recent polls seem to favor the SPD - the pink area indicates that the older polls of this cycle underestimate the last month's poll results.

Both these parties are in long-term decline, with popularity dropping from the 40% range in the 1990s to the 20% range in the 2020s.

One smaller party that seems to have gained followers is the Green party:

Datawrapper_germanelections_green

The excess of dark green, however, does not augur well for this election.


Ridings, polls, elections, O Canada

Stephen Taylor reached out to me about his work to visualize Canadian elections data. I took a look. I appreciate the labor of love behind this project.

He led with a streamgraph, which presents a quick overview of relative party strengths over time.

Stephentaylor_canadianelections_streamgraph

I am no Canadian election expert, and I did a bare minimum of research in writing this blog. From this chart, I learn that:

  • the Canadians have an irregular election schedule
  • Canada has a two party plus breadcrumbs system
  • The two dominant parties are Liberals and Conservatives. The Liberals currently hold just less than half of the seats. The Conservatives have more than half of the seats not held by Liberals
  • The Conservative party (maybe) rebranded as "progressive conservative" for several decades. The Reform/Alliance party was (maybe) a splinter movement within the Conservatives as well.
  • Since the "width" of the entire stream increased over time, I'm guessing the number of seats has expanded

That's quite a bit of information obtained at a glance. This shows the power of data visualization. Notice Stephen didn't even have to include a "how to read this" box.

The streamgraph form has its limitations.

The feature that makes it more attractive than an area chart is its middle anchoring, resulting in a form of symmetry. The same feature produces erroneous intuition - the red patch draws out a declining trend; the reader must fight the urge to interpret the lines and focus on the areas.

The breadcrumbs are well hidden. The legend below discloses that the Green Party holds 3 seats currently. The party has never held enough seats to appear on the streamgraph though.

The bars showing proportions in the legend are a very nice touch. (The numbers appear messed up - I have to ask Stephen whether the seats shown are current values, or some kind of historical average.) I am a big fan of informative legends.

***

The next featured chart is a dot plot of polling results since 2020.

Stephentaylor_canadianelections_streamgraph_polls_dotplot

One can see a three-tier system: the two main parties, then the NDP (yellow) is the clear majority of the minority, and finally you have a host of parties that don't poll over 10%.

It looks like the polls are favoring the Conservatives over the Liberals in this election but it may be an election-day toss-up.

The purple dots represent "PPC" which is a party not found elsewhere on the page.

This chart is clear as crystal because of the structure of the underlying data. It just amazes me that the polls are so highly correlated. For example, across all these polls, the NDP has never once polled better than either the Liberals or the Conservatives, and in addition, it has never polled worse than any of the small parties.

What I'd like to see is a chart that merges the two datasets, addressing the question of how well these polls predicted the actual election outcomes.

***

The project goes very deep, as Stephen provides charts for individual "ridings" (Canada's electoral districts, akin to U.S. congressional districts).

Here we see population pyramids for Vancouver Center, versus British Columbia (Province), versus Canada.

Stephentaylor_canadianelections_riding_populationpyramids

This riding has a large surplus of younger people in their twenties and thirties. Be careful about the changing scales though. The relative difference in proportions is more drastic than visually displayed because the maximum values (5%) on the Province and Canada charts are half that on the Riding chart (10%). Imagine squashing the Province and Canada charts to half their widths.

Analyses of income and rent/own status are also provided.

This part of the dashboard exhibits a problem common in most dashboards - they present each dimension of the data separately and miss out on the more interesting stuff: the correlation between dimensions. Do people in their twenties and thirties favor specific parties? Do richer people vote for certain parties?

***

The riding-level maps are the least polished part of the site. This is where I'm looking for a "how to read it" box.

Stephentaylor_canadianelections_ridingmaps_pollwinner

It took me a while to realize that the colors represent the parties. If I haven't come in from the front page, I'd have been totally lost.

Next, I got confused by the use of the word "poll". Clicking on any of the subdivisions bring up details of an actual race, with party colors, candidates and a donut chart showing proportions. The title gives a "poll id" and the name of the riding in parentheses. Since the poll id changes as I mouse over different subdivisions, I'm wondering whether a "poll" is the term for a subdivision of a riding. A quick wiki search indicates otherwise.

Stephentaylor_canadianelections_ridingmaps_donut

My best guess is the subdivisions are indicated by the numbers.

Back to the donut charts, I prefer a different sorting of the candidates. For this chart, the two most logical orderings are (a) order by overall popularity of the parties, fixed for all ridings and (b) order by popularity of the candidate, variable for each riding.

The map shown above gives the winner in each subdivision. This type of visualization dumps a lot of information. Stephen tackles this issue by offering a small-multiples view for each party. Here are the Liberals in Vancouver.

Stephentaylor_canadianelections_ridingmaps_partystrength

Again, we encounter ambiguity about the color scheme. Liberals have been associated with a red color but we are faced with abundant yellow. After clicking on the other parties, you get the idea that he has switched to a divergent continuous color scale (red - yellow - green). Is red or green the higher value? (The answer is red.)

I'd suggest using a gray scale for these charts. The hardest decision is going to be the encoding between values and shading. Should each gray scale be different for each riding and each party?

If I were to take a guess, Stephen must have spent weeks if not months creating these maps (depending on whether he's full-time or part-time). What he has published here is a great start. Fine-tuning the issues I've mentioned may take weeks or months more.

***

Stephen is brave and smart to send this project for review. For one thing, he's got some free consulting. More importantly, we should always send work around for feedback; other readers can tell us where our blind spots are.

To read more, start with this post by Stephen in which he introduces his project.


Hanging things on your charts

The Financial Times published the following chart that shows the rollout of vaccines in the U.K.

Ft_astrazeneca_uk_rollout

(I can't find the online link to the article. The article is titled "AstraZeneca and Oxford face setbacks and success as battle enters next phase", May 29/30 2021.)

This chart form is known as a "streamgraph", and it is a stacked area chart in disguise. 

The same trick can be applied to a column chart. See the "hanging" column chart below:

Junkcharts_hangingcolumns

The two charts show exactly the same data. The left one roots the columns at the bottom. The right one aligns the middle of the columns. 
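For the record, here is a sketch of the hanging construction with made-up numbers; I draw the rectangles directly because base R's barplot stacks negative values separately. Each column is simply shifted down by half its total:

```r
# Hanging column chart: same segments, each column re-centered on zero.
m     <- rbind(c(4, 3, 2), c(6, 2, 3), c(5, 4, 3))  # 3 segments x 3 columns
tops  <- apply(m, 2, cumsum)   # running tops of the stacked segments
shift <- -colSums(m) / 2       # hang: drop each column by half its total
plot(NULL, xlim = c(0.5, 3.5), ylim = c(-1, 1) * max(colSums(m)) / 2,
     xlab = "column", ylab = "", main = "hanging columns")
for (j in 1:3) {
  bottoms <- c(0, tops[-nrow(tops), j])
  rect(j - 0.3, bottoms + shift[j], j + 0.3, tops[, j] + shift[j],
       col = gray.colors(nrow(m)), border = "white")
}
```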

I have rarely found these hanging charts useful. The realignment makes it harder to compare the sizes of the different column segments. On the normal stacked column chart, the yellow segments are the easiest to compare because they share the same base level. Even this is taken away from the reader on the right side.

Note also that the hanging version does not admit a vertical axis.

The same comments apply to the streamgraph.

***

Nevertheless, I was surprised that the FT chart shown above actually works. The main message I learned was that initially U.K. primarily rolled out AstraZeneca and, to a lesser extent, Pfizer, shots while later, they introduced other vaccines, including Johnson & Johnson, Novavax, CureVac, Moderna, and "Other". 

I can also see that the supply of AstraZeneca has not changed much through the entire time window. Pfizer has grown to roughly the same scale as AstraZeneca. Moderna remains a small fraction of total shots. 

I can even roughly see that the total number of vaccinations has grown about six times from start to finish. 

That's quite a lot for one chart, so job well done!

There is one problem with the FT chart. It should have labelled the end of May as "today". Half the chart is history, and the other half is the future.

***

For those following Covid-19 news, the FT chart is informative in a different way.

There is a misleading statement going around blaming the U.K.'s recent surge in cases on the AstraZeneca vaccine, claiming that the U.K. mostly uses AZ. This chart shows that from the start, about a third of the shots administered in the U.K. were Pfizer, and Pfizer's share has been growing over time.

U.K. compared to some countries mostly using mRNA vaccines

Ourworldindata_cases

U.K. is almost back to the winter peak. That's because the U.K. is serious about counting cases. Look at the state of testing in these countries:

Ourworldindata_tests

What's clear about the U.S. case count is that it is kept low by cutting the number of tests by two-thirds; thus, our data are now once again severely biased towards severe cases.

We can do a back-of-the-envelope calculation. The drop in testing may directly lead to a proportional drop in reported cases, thus removing 500 (asymptomatic, or mild) cases per million from the case count. The case count goes below 250 per million so the additional 200 or so reduction is due to other reasons such as vaccinations.