More on equal-area histograms

Today, I'm returning to those "equal-area histograms" that Andrew wrote about last month. I have two previous posts about this. The first post introduces the concept: in a traditional histogram, the columns have the same bin width while the column heights can represent a variety of metrics, such as counts, relative frequencies (i.e. proportion of the data) and densities; in the equal-area histogram, the columns have varying widths while the area of each column is constant, and determined by the number of bins (columns).

Junkcharts_histogram_percentogram_priorpost

Here is a comparison of the two types of histograms.

In a second post, I explained the differences between using counts, frequencies and densities on the vertical axis. The underlying issue is that the histogram is not merely a column chart, in which the width of the columns is arbitrary and data-free - in a histogram, both the heights and widths of the columns carry meaning. One feature of the histogram that almost everyone expects is that the areas of the columns sum to 1. This aligns with a desired interpretation in terms of probabilities of data falling into specified ranges, as we'd like the amount of data in the entire range to add up to 100%. Unfortunately, this expectation and the desire to read column heights as probabilities are usually incompatible with each other.

If the height of the columns represents the probability of data falling into the range as indicated by its width, then the sum of the column heights is 1, which implies that the sum of the column areas cannot be 1. On the other hand, if the column areas add up to 1, then the column heights will not add up to 1, and thus, in this scenario, we cannot interpret the column heights to be probabilities. As explained in the second post, the column heights in this situation are densities, which can be defined as the proportion of data divided by the bin width. Intuitively, it gives information on how dense or sparse the data are within the specified range.

***

Today's post starts with a toy dataset, containing randomly generated values from a normal distribution (bell curve) centered at 4 and with standard deviation 1.

Here is the traditional histogram of the dataset, using 100 equal-width bins. (I generated 10,000 values.)

Histogram_normals
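For readers who want to follow along, here is a minimal sketch in base R of how such a dataset and histogram can be generated; the seed and exact bin boundaries are my own choices, not necessarily the ones behind the chart above.

    # Toy dataset: 10,000 draws from a normal distribution centered at 4, sd 1
    set.seed(123)
    x <- rnorm(10000, mean = 4, sd = 1)

    # Traditional histogram with (roughly) 100 equal-width bins;
    # hist() treats a single number in 'breaks' as a suggestion
    hist(x, breaks = 100, main = "Traditional histogram, 100 equal-width bins")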

Four_precentograms_normals

Next, I created a panel of four equal-area histograms, with increasing numbers of bins. Each is built from the same underlying dataset.

The first histogram divides the data into 4 bins; then 10 bins, 20 bins and 100 bins.

In the 4-bin case, each column contains 1/4 = 25% of the data. The middle two columns contain 50% of the data, and they have high densities, as the widths of these columns are low. It's a crude approximation of the familiar bell curve.

As we increase the number of bins, the columns in the middle of the distribution, where most of the data are concentrated, become narrower. In the sparse regions, the column width doesn't necessarily grow because each column must contain 1/n of the data, where n is the number of columns. As the number of columns increases, each column contains less of the data.

The bottom chart is the "percentogram", which is what Andrew's correspondent proposed. The number of bins is set to 100, so each column contains exactly 1 percent of the data. For a normal distribution, the columns in the middle are very tall and thin.
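One way to construct these equal-area histograms in base R is to place the bin breaks at quantiles of the data, so that each bin holds exactly 1/n of the observations; the following is a sketch of that idea, not necessarily the exact code behind the charts shown here.

    # Equal-area histogram: breaks at quantiles, so each column holds 1/n of the data;
    # plotting densities makes every column's area equal to 1/n
    equal_area_hist <- function(x, n_bins) {
      brks <- quantile(x, probs = seq(0, 1, length.out = n_bins + 1))
      hist(x, breaks = brks, freq = FALSE, border = "white",
           main = paste(n_bins, "equal-area bins"))
    }

    par(mfrow = c(4, 1))
    for (n in c(4, 10, 20, 100)) equal_area_hist(x, n)   # the percentogram is n = 100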

The reason why the middle of the percentogram looks faded is that I asked for a white border around each column. But when the columns are so thin, even if one sets the border width very small, what readers see is a mixture of orange and white.

With a high number of bins, we notice a few things: (a) the outline of the histogram becomes more "ragged" as the number of bins grows, (b) the middle columns become razor-thin, and (c) the width conceded by the middle columns is absorbed not by the columns at the edges but by those between the peak and the edges.

I'm struggling a bit to justify this percentogram versus the typical, equal-width histogram.

Let me go down a different path.

***

In "principled" histograms, the column heights represent data densities, while the total area of the columns adds up to 1. This leads us to a new understanding of the relationship between the equal-width histogram and the equal-area histogram.

We start with data density defined by (proportion of data) / (bin width). Those two values are not independent - one is fully determined by the other, given the underlying dataset. In a traditional equal-width histogram, the question is: how much of the data is found in a column of fixed width? In the new equal-area histogram, the question is: how wide is the bin that contains a fixed amount of data? In the former, the denominator is fixed while the numerator varies; the opposite occurs in the latter.
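To make the duality concrete, here is a small sketch, continuing with the toy dataset x above: the equal-width histogram fixes the widths and lets the share of data per bin vary, while the equal-area histogram fixes the share of data and lets the widths vary.

    n_bins <- 10

    # Equal-width: ten identical widths; the proportion of data per bin varies
    ew <- hist(x, breaks = seq(min(x), max(x), length.out = n_bins + 1), plot = FALSE)
    prop_per_bin <- ew$counts / length(x)

    # Equal-area: ten bins each holding 10% of the data; the bin width varies
    brks <- quantile(x, probs = seq(0, 1, length.out = n_bins + 1))
    width_per_bin <- diff(brks)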

***

We also recognize that given the range of the data, there is a relationship between the the set of bin widths in the two types of histograms. In the traditional histogram, all bin widths have the same value, equal to the range of the data divided by the number of bins. Think of this as the average bin width. In an equal-area histogram, the set of bin widths varies: however, the sum of the bin widths must still add up to the range of the data. For two comparable histograms with the same number of bins, the average of the bin widths must be the same for both sets. (I'm ignoring any rounding situations in which the range of the histogram is larger than the range of the data.)

Now, consider the middle of the normal distribution where the data are dense. In the traditional histogram, the column in the middle still has width equal to the average bin width. In the equal-area histogram, the middle column has width much smaller than the average bin width. In other words, we can think of the column in the traditional histogram being broken up into many thin and slim columns in the equal-area histogram, each containing 1% of the data in the case of the percentogram.

The height of the column is the data density. In the traditional histogram, the middle column is the pooled sample of larger size; in the equal-area histogram, each of those thin and slim columns is a partition of the sample. This explains observation (a) above in which the outline of the equal-area histogram is more ragged - it's because each column contains fewer data from which to estimate the data density.

But this raggedness is artificial, sampling noise.

***

The sparse areas are more complicated still. It's also the reverse of the above. On the edges of the normal distribution, the columns of the new histogram are wider than those of the traditional histogram. So, we can think of breaking up the edge column of the new histogram into multiple columns of the traditional histogram.

The interpretation is more complicated because the data are sparse in this region. Obviously, the estimates of density on the traditional histogram in sparse regions are poor because not enough data reside in there. The density estimate on the new histogram is based on a larger sample size.

However.

Yes, however, whether the new histogram's density estimate is better depends on the shape of the tail of the distribution. A normal distribution has thin, rapidly decaying tails, which means that the data density declines quite drastically the further we go into the tail. Therefore, the new histogram averages the data densities across a large part of the tail, wiping out that rapidly decaying shape, while the traditional histogram preserves the shape - at the expense of greater sampling variability due to smaller sample sizes.

***

For what it's worth, let's look at some histograms for an exponential random variable.

Here is the traditional histogram:

Histogram_expos

The data are extremely dense on the left side while it has a long tail on the right side.

Four_percentograms_expos

Here are the four equal-area histograms for 4, 10, 20 and 100 bins.

The four-bin version gives a nice summary of the shape. As the number of bins goes up, as before, the denser regions now have tall, thin spikes. Again, because of the white borders, the last histogram with 100 bins is faded where the data are densest. (So obviously, don't follow my lead, and eliminate borders if you want to use it.)

The 100-bin version looks almost the same as the traditional histogram.
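The exponential example can be reproduced along the same lines; this sketch reuses the equal_area_hist helper defined earlier, with my own choice of seed and rate.

    # 10,000 draws from an exponential distribution (rate = 1)
    set.seed(123)
    y <- rexp(10000, rate = 1)

    hist(y, breaks = 100, main = "Traditional histogram")   # long right tail

    par(mfrow = c(4, 1))
    for (n in c(4, 10, 20, 100)) equal_area_hist(y, n)      # equal-area versions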

***

At this stage of the exploration, I still haven't found a compelling reason to switch to equal-area histograms. In the denser regions, they add sampling noise. If I don't care about the sparser areas - specifically, the shape of the tails - maybe they provide a cleaner presentation.

 


Graph workflow and defaults wreak havoc

For the past week or 10 days, every time I visited one news site, it insisted on showing me an article about precipitation in North Platte. It's baiting me to write a post about this lamentable bar chart (link):

Northplatte_rainfall

***

This chart has problems, and the problems start with the tooling, which dictates a workflow.

I imagine what the chart designer had to deal with.

For a bar chart, the tool requires one data series to be numeric, and the other to be categorical. A four-digit year is a number, which can be treated either as numeric or categorical. In most cases, and by default, numbers are considered numeric. To make this chart, the user asked the tool to treat years as categorical.

Junkcharts_northplattedry_datatypes

Many tools treat categories as distinct entities ("nominal"), mapping each category to a distinct color. So they have 11 colors for 11 years, which is surely excessive.

This happens because the year data are not truly categorical. These eleven years were picked based on the amount of rainfall. There isn't a single year with two values; that's not even possible. The years are just irregularly spaced indices. Nevertheless, the tool misbehaves if the year data are regarded as numeric. (It automatically selects a time-series line chart, because someone's data visualization flowchart says so.) Mis-specification in order to trick the tool has consequences.
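I don't know which tool was used here, but the same fork in the road shows up in many graphing libraries. As a hedged illustration with made-up data, here is how ggplot2 in R behaves: declare the years categorical (a factor) and you get one discrete color per year plus a legend; leave them numeric and the default is a continuous axis that nudges you toward a line chart.

    library(ggplot2)

    # Hypothetical stand-in for the North Platte table (years and inches are made up)
    df <- data.frame(year     = c(1894, 1910, 1931, 1936, 1974, 2002, 2012, 2023),
                     rainfall = c(10.1, 9.4, 8.8, 9.9, 10.6, 9.0, 10.8, 4.5))

    # Years as categories: one discrete color per year, and a legend by default
    ggplot(df, aes(x = reorder(factor(year), rainfall), y = rainfall,
                   fill = factor(year))) +
      geom_col() +
      coord_flip()

    # Years as numbers: a continuous axis, which suggests a time-series line chart
    ggplot(df, aes(x = year, y = rainfall)) + geom_line()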

The designer's intention is to compare the current year 2023 to the driest years in history. This is obvious from the subtitle in which 2023 is isolated and its purple color is foregrounded.

Junkcharts_northplattedry_titles

How unfortunate then that among the 11 colors, this tool grabbed 4 variations of purple! I like to think that the designer wanted to keep 2023 purple, and turn the other bars gray -- but the tool thwarted this effort.

Junkcharts_northplattedry_purples

The tool does other offensive things. By default, it makes a legend for categorical data. I like the placement of the legend right beneath the title, a recognition that on most charts, the reader must look at the legend first to comprehend what's on the chart.

Not so in this case. The legend is entirely redundant. Removing the legend does not affect our cognition one bit. That's because the colors encode nothing.

Worse, the legend sows confusion because it presents the same set of years in chronological order while the bars below are sorted by amount of precipitation: thus, the order of colors in the legend differs from that in the bar chart.

Junkcharts_northplattedry_legend

I can imagine the frustration of the designer who finds out that the tool offers no option to delete the legend. (I don't know this particular tool but I have encountered tools that are rigid in this manner.)

***

Something else went wrong. What's the variable being plotted on the numeric (horizontal) axis?

The answer is inches of rainfall but the answer is actually not found anywhere on the chart. How is it possible that a graphing tool does not indicate the variables being plotted?

I imagine the workflow like this: the tool by default puts an axis label which uses the name of the column that holds the data. That column may have a name that is not reader-friendly, e.g. PRECIP. The designer edits the name to "Rainfall in inches". Being a fan of the Economist graphics style, they move the axis label to the chart title area.

The designer now works the chart title. The title is made to spell out the story, which is that North Platte is experiencing a historically dry year. Instead of mentioning rainfall, the new title emphasizes the lack thereof.

The individual steps of this workflow make a lot of sense. It's great that the title is informative, and tells the story. It's great that the axis label was fixed to describe rainfall in words not database-speak. But the end result is a confusing mess.

The reader must now infer that the values being plotted are inches of rainfall.

Further, the tool also imposes a default sorting of the bars. The bars run from longest to shortest, in this case, the longest bar has the most rainfall. After reading the title, our expectation is to find data on the Top 11 driest years, from the driest of the driest to the least dry of the driest. But what we encounter is the opposite order.

Junkcharts_northplattedry_sorting

Most graphics software behaves like this because it plots the ranks of the categories, with the driest being rank 1, counting up. Because the vertical axis moves upwards from zero, the top-ranked item ends up at the bottom of the chart.

***

_trifectacheckup_image

Moving now from the V corner to the D corner of the Trifecta checkup (link), I can't end this post without pointing out that the comparisons shown on the chart don't work. It's the first few months of 2023 versus the full years of the others.

The fix is to plot the same number of months for all years. This can be done in two ways: find the partial year data for the historical years, or project the 2023 data for the full year.

(If the rainy season is already over, then the chart will look exactly the same at the end of 2023 as it is now. Then, I'd just add a note to explain this.)

***

Here is a version of the chart after doing away with unhelpful default settings:


Redo_junkcharts_northplattedry


The one thing you're afraid to ask about histograms

In the previous post about a variant of the histogram, I glossed over a few perplexing issues - deliberately. Today's post addresses one of these topics: what is going on in the vertical axis of a histogram?

The real question is: what data are encoded in the histogram, and where?

***

Let's return to the dataset from the last post. I grabbed data from a set of international football (i.e. soccer) matches. Each goal scored has a scoring minute. If the goal is scored in regulation time, the scoring minute is a number between 1 and 90 minutes. Specifically, the data collector applies a rounding up: any goal scored between 0 and 60 seconds is recorded as minute 1, all the way up to a goal scored between the 89th and 90th minutes being recorded as minute 90. In this post, I only consider goals scored in regulation time, so the horizontal axis spans 1-90 minutes.

The kneejerk answer to the posed question is: counts in bins. Isn't it the case that in constructing a histogram, we divide the range of values (1-90) into bins, and then plot the counts within bins, i.e. the number of goals scored within each bin of minutes?

The following is what we have in mind:

Junkcharts_counthistogram_1

Let's call this the "count histogram".

Some readers may dislike the scale of the vertical axis, as its interpretation hinges on the total sample size. Hence, another kneejerk answer is: frequencies in bins. Instead of plotting counts directly, plot frequencies, which are just standardized counts. Just divide each value by the sample size. Here's the "frequency histogram":

Junkcharts_freqhistogram_1

The count and frequency histograms are identical except for the scale, and appear intuitively clear. The count and frequency data are encoded in the heights of the columns. The column widths are an afterthought, and they adhere to a fixed constant. Unlike a column chart, typically the gap width in a histogram is zero, as we want to partition the horizontal range into adjoining sections.
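To keep the later arithmetic concrete, here is a small sketch with simulated scoring minutes (a uniform stand-in for the real dataset, which is not reproduced here), computing the counts and frequencies for 5-minute bins.

    # Simulated stand-in for the scoring-minutes data (purely illustrative)
    set.seed(1)
    minutes <- sample(1:90, 23682, replace = TRUE)

    h      <- hist(minutes, breaks = seq(0, 90, by = 5), plot = FALSE)
    counts <- h$counts                 # heights of the count histogram
    freqs  <- counts / sum(counts)     # heights of the frequency histogram

    # A frequency histogram can be drawn as adjoining columns, e.g. with barplot
    barplot(freqs, space = 0, names.arg = h$mids, ylab = "proportion of goals")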

Now, if you look carefully at the histogram from the last post, reproduced below, you'd find that it plots neither counts nor frequencies:

Junkcharts_densityhistogram_1

The numbers on the axis are fractions, and suggest that they may be frequencies, but a quick check proves otherwise: with 9 columns, the average column should contain at least 10 percent of the data. The total of the displayed fractions is nowhere near 100%, which is our expectation if the values are relative frequencies. You may have come across this strangeness when creating histograms using R or some other software.

The purpose of this post is to explain what values are being plotted and why.

***

What are the kinds of questions we like to answer about the distribution of data?

At a high level, we want to know "where are my data"?

Arguably these two questions are fundamental:

  • what is the probability that the data falls within a given range of values? e.g., what is the probability that a goal is scored in the first 15 minutes of a football match?
  • what is the relative probability of data between two ranges of values? e.g. are teams more likely to score in the last 5 minutes of the first half or the last five minutes of the second half of a football match?

In a histogram, the first question is answered by comparing a given column to the entire set of columns while the second question is answered by comparing one column to another column.

Let's see what we can learn from the count histogram.

Junkcharts_counthistograms_questions

In a count histogram, the heights encode the count data. To address the relative probability question, we note that the ratio of heights is the ratio of counts, and the ratio of counts is the same as the ratio of frequencies. Thus, we learn that teams are roughly twice as likely (about 3000 vs. 1500 goals) to score in the last 5 minutes of the second half as during the last 5 minutes of the first half. (See the green columns.)

[For those who follow football, it's clear that the data collector treated goals scored during injury time of either half as scored during the last minute of the half, so this dataset can't be used to analyze timing of goals unless the real minutes were recorded for injury-time goals.]

To address the range probability question, we compare the aggregate height of the three orange columns with the total heights of all columns. Note that I said "height", not "area," because the heights directly encode counts. It's actually taxing to figure out the total height!

We resort to reading the total area of all columns. This should yield the correct answer: the area is directly proportional to the height because the column widths are fixed at a constant. Bear in mind, though, that if the column widths vary (the theme of the last post), then areas and heights are not interchangeable concepts.

Estimating the total area is still not easy, especially if the column heights exhibit high variance. What we need is the proportion of the total area that is orange. It's possible to see, not easy.

You may interject now to point out that the total area should equal the aggregate count (sample size). But that is a fallacy! It's very easy to make this error. The aggregate count is actually the total height, and because of that, the total area is the aggregate count multiplied by the column width! In my example, the total height is 23,682, which is the number of goals in the dataset, while the total area is 23,682 times 5 minutes.

[For those who think in equations, the total area is the sum over all columns of height(i) x width(i). When width is constant, we can take it outside the sum, and the sum of height(i) is just the total count.]
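Continuing the sketch above, the point in the bracketed note can be verified directly:

    sum(counts)                          # total height = number of goals (23,682 here)
    sum(counts * diff(h$breaks))         # total area   = total count x 5 minutes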

***

The count histogram is hard to use because it requires knowing the sample size. It's the first thing that is produced because the raw data are counts in bins. The frequency histogram is better at delivering answers.

In the frequency histogram, the heights encode frequency data. We can therefore just read off the relative probability of the orange column, bypassing the need to compute the total area.

This workaround actually promotes the fallacy described above for the count histogram. It is easy to fall into the trap of thinking that the total area of all columns is 100%. It isn't.

Similar to before, the total height is the total frequency (i.e. 100%), but the total area is the total frequency multiplied by the column width; that is to say, the total area equals the bin width. In the football example, using 5-minute intervals, the total area of the frequency histogram is 5 minutes, in the case of equal bin widths.
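Again using the running sketch, the frequency histogram behaves the same way:

    sum(freqs)                       # total height = 1 (i.e. 100%)
    sum(freqs * diff(h$breaks))      # total area   = the bin width, 5 minutes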

How about the relative probability question? On the frequency histogram, the ratio of column heights is the ratio of frequencies, which is exactly what we want. So long as the column width is constant, comparing column heights is easy.

***

One theme in the above discussion is that in the count and frequency histograms, the count and frequency data are encoded in the column heights but not the column areas. This is a source of major confusion. Because of the convention of using equal column widths, one treats areas and heights as interchangeable... but not always. The total column area isn't the same as the total column height.

This observation has some unsettling implications.

As shown above, the total area is affected by the column width. The column width in an equal-width histogram is the range of the x-values divided by the number of bins. Thus, the total area is a function of the number of bins.

Consider the following frequency histograms of the same scoring minutes dataset. The only difference is the number of bins used.

Junkcharts_freqhistogram_differentbins

Increasing the number of bins has a series of effects:

  • the columns become narrower
  • the columns become shorter, because each narrower bin can contain at most the same count as the wider bin that contains it.
  • the total area of the columns becomes smaller.

This last one is unexpected and completely messes up our intuition. When we increase the number of bins, not only are the columns shortening but the total area covered by all the columns is also shrinking. Remember that the total area whether it is a count or frequency histogram has a factor equal to the bin width. Higher number of bins means smaller bin width, which means smaller total area.

***

What if we force the total area to be constant regardless of how many bins we use? This setting seems more intuitive: in the 5-bin histogram, we partition the total area into five parts while in the 10-bin histogram, we divide it into 10 parts.

This is the principle used by R and other statistical software when they produce so-called density histograms. The count and frequency data are encoded in the column areas - by implication, the same data could not have been encoded simultaneously in the column heights!

The way to accomplish this is to divide by the bin width. If you look at the total area formulas above, for the count histogram, total area is total count x bin width. If the height is count divided by bin width, then the total area is the total count. Similarly, if the height in the frequency histogram is frequency divided by bin width, then the total area is 100%.

Count divided by some section of the x-range is otherwise known as "density". It captures the concept of how tightly the data are packed inside a particular section of the dataset. Thus, in a count-density histogram, the heights encode densities while the areas encode counts. In this case, total area is the total count. If we want to standardize total area to be 1, then we should compute densities using frequencies rather than counts. Frequency densities are just count densities divided by the total count.

To summarize, in a frequency-density histogram, the heights encode densities, defined as frequency divided by the bin width. This is not very intuitive; just think of densities as how closely packed the data are in the specified bin. The column areas encode frequencies so that the total area is 100%.
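In the running sketch, the density heights and the area check look like this (the hist object computes the frequency-density for us):

    densities <- h$density                         # frequency / bin width, per column
    all.equal(densities, freqs / diff(h$breaks))   # TRUE: density = frequency / width
    sum(densities * diff(h$breaks))                # total area = 1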

The reason why density histograms are confusing is that we are reading off column heights while thinking that the total area should add up to 100%. Column heights and column areas cannot both add up to 100%. We have to pick one or the other.

Comparing relative column heights still works when the density histogram has equal bin widths. In this case, the relative height and relative area are the same because relative density equals relative frequencies if the bin width is fixed.

The following charts recap the discussion above. It shows how the frequency histogram does not preserve the total area when bin sizes are changed while the density histogram does.

Junkcharts_freqdensityhistograms_differentbins

***

The density histogram is a major pain for solving range probability questions because the frequencies are encoded in the column areas, not the heights. Areas are not marked out in a graph.

The column height gives us densities which are not probabilities. In order to retrieve probabilities, we have to multiply the density by the bin width, that is to say, we must estimate the area of the column. That requires mapping two dimensions (width, height) onto one (area). It is in fact impossible without measurement - unless we make the bin widths constant.

When we make the bin widths constant, we still can't read densities off the vertical axis, and treat them as probabilities. If I must use the density histogram to answer the question of how likely a team scores in the first 15 minutes, I'd sum the heights of the first 3 columns, which is about 0.025, and then multiply it by the bin width of 5 minutes, which gives 0.125 or 12.5%.
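Here is that calculation in the running sketch (the 0.025 and 12.5% figures above come from the real dataset, so the simulated numbers will differ):

    # P(goal in the first 15 minutes) = area of the first three columns
    sum(h$density[1:3] * diff(h$breaks)[1:3])
    # with equal 5-minute bins, this equals
    sum(h$density[1:3]) * 5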

At the end of this exploration, I like the frequency histogram best. The density histogram is useful when we are comparing different histograms, which isn't the most common use case.

***

The histogram is a basic chart in the tool kit. It's more complicated than it seems. I haven't come across any intro dataviz books that explain this clearly.

Most of this post deals with equal-width histograms. If we allow bin widths to vary, it gets even more complicated. Stay tuned.

***

For those using base R graphics, I hope this post helps you interpret what they say in the manual. The default behavior of the "hist" function depends on whether the bins are equal width:

  • if the bin width is constant, then R produces a count histogram. As shown above, in a count histogram, the column heights indicate counts in bins, but the total column area does not equal the total sample size; it equals the total sample size multiplied by the bin width. (Equal width is the default unless the user specifies bin breakpoints.)
  • if the bin width is not constant, then R produces a (frequency-)density histogram. The column heights are densities, defined as frequencies divided by bin width while the column areas are frequencies, with the total area summing to 100%.

Unfortunately, R does not generate a frequency histogram. To make one, you'd have to divide the counts in bins by the sum of counts. (In making some of the graphs above, I tricked it.) You also need to trick it to make a frequency-density histogram with equal-width bins, as it's coded to produce a count histogram by default when the bin size is fixed.
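For concreteness, here is a sketch of those behaviors, plus one possible way to coax a frequency (proportion) histogram out of base R; I'm not claiming this is the exact trick used for the charts above.

    z <- rnorm(1000)

    # Equal-width bins: the default plot is a count histogram (R's "frequency")
    hist(z)

    # Unequal bins (e.g. quantile breaks): R switches to a density histogram
    hist(z, breaks = quantile(z, probs = seq(0, 1, by = 0.1)))

    # A density histogram with equal-width bins must be requested explicitly
    hist(z, freq = FALSE)

    # One way to fake a frequency histogram: overwrite the density component
    h2 <- hist(z, plot = FALSE)
    h2$density <- h2$counts / sum(h2$counts)
    plot(h2, freq = FALSE, ylab = "proportion of data")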

 

P.S. [5-2-2023] As pointed out by a reader, I should clarify that R and I use the word "frequency" differently. Specifically, R uses frequency to mean counts; therefore, what I have been calling the "count histogram", R would call a "frequency histogram", and what I have been describing as a "frequency histogram", the "hist" function simply does not generate unless you trick it into doing so. I'm using "frequency" in the everyday sense of the word, such as "the frequency of the bus". In many statistical packages, frequency is used to mean "count", as in the frequency table, which is just a table of counts. The reader suggested "proportion", which I like, or something like "weight".


Deconstructing graphics as an analysis tool in dataviz

One of the useful exercises I like to do with charts is to "deconstruct" them. (This amounts to a deeper version of the self-sufficiency test.)

Here is a chart stripped down to just the main visual elements.

Junkcharts_cbcrevenues_deconstructed1

The game is to guess what is the structure of the data given these visual elements.

I guessed the following:

  • The data has a top-level split into two groups
  • Within each group, the data is further split into 3 parts, corresponding to the 3 columns
  • Within each part, there are a variable number of subparts, each of which is given a unique color
  • The color legend suggests that each group's data are split into 7 subparts, so I'm guessing that the 7 subparts are aggregated into 3 parts
  • The core chart form is a stacked column chart with absolute values, so relative proportions within each column (part) are important
  • Comparing across columns is not supported because each column has its own total value
  • Comparing same-color blocks across the two groups is meaningful. It's easier to compare their absolute values but harder to compare the relative values (proportions of total)

If I knew that the two groups are time periods, I'd also guess that the group on the left is the earlier time period, and the one on the right is the later time period. In addition to the usual left-to-right convention for time series, the columns are getting taller going left to right. Many things (not all, obviously) grow over time.

The color choice is a bit confusing because if the subparts are what I think they are, then it makes more sense to use one color and different shades within each column.

***

The above guesses are a mixed bag. What one learns from the exercise is what cues readers are receiving from the visual structure.

Here is the same chart with key contextual information added back:

Junkcharts_cbcrevenues_deconstructed2

Now I see that the chart concerns revenues of a business over two years.

My guess on the direction of time was wrong. The more recent year is placed on the left, counter to convention. This entity therefore suffered a loss of revenues from 2017-8 to 2018-9.

The entity receives substantial government funding. In 2017-8, it had 1 dollar of government funds for every 2 dollars of revenues. In 2018-9, it was roughly 2 dollars of government funds for every 3 dollars of revenues. Thus, the ratio of government funding to revenues has increased.

On closer inspection, the 7 colors do not represent 7 components of this entity's funding. The categories listed in the color legend overlap.

It's rather confusing but I missed one very important feature of the chart in my first assessment: the three columns within each year group are nested. The second column breaks down revenues into 3 parts while the third column subdivides advertising revenues into two parts.

What we've found is that this design does not offer any visual cues to help readers understand how the three columns within a year-group relate to each other. Adding guiding lines or changing the color scheme would help.

***

Next, I add back the data labels:

Cbc_revenues_original

The system of labeling can be described as: label everything that is not further broken down into parts on the chart.

Because of the nested structure, this means two of the column segments, which are the sums of subparts, are not labeled. This creates a very strange appearance: usually, the largest parts are split into subparts, so such a labeling system means the largest parts/subparts are not labeled while the smaller, less influential, subparts are labeled!

You may notice another oddity. The pink segment is well above $1 billion but it is roughly the size of the third column, which represents $250 million. Thus, these columns are not drawn to scale. What happened? Keep reading.

***

Here is the whole chart:

Cbc_revenues_original

A twitter follower sent me this chart. Elon Musk has been feuding with the Canadian broadcaster CBC.

Notice the scale of the vertical axis. It has a discontinuity between $700 million and $1.7 billion. In other words, the two pink sections are artificially shortened. The erased section contains $1 billion (!) Notice that the erased section is larger than the visible section.

The focus of Musk's feud with CBC is on what proportion of the company's funds come from the government. On this chart, the only way to figure that out is to copy out the data and divide. It's roughly 1.2/1.7, or approximately 70%.

***

The exercise of deconstructing graphics helps us understand what parts are doing what, and it also reveals what cues certain parts send to readers.

In better dataviz, every part of the chart is doing something useful, it's free of redundant parts that take up processing time for no reason, and the cues to readers move them towards the intended message, not away from it.

***

A couple of additional comments:

I'm not sure why old data was cited because in the most recent accounting report, the proportion of government funding was around 65%.

Source of funding is not a useful measure of pro- or anti-government bias, especially in a democracy where different parties lead the government at different times. There are plenty of mouthpiece media that do not apparently receive government funding.


Showing both absolute and relative values on the same chart 2

In the previous post, I looked at Visual Capitalist's visualization of the amount of uninsured deposits at U.S. banks. Using a stacked bar chart, I placed both absolute and relative values on the same chart.

In making that chart, I made these three tradeoffs.

First, I elevated absolute values (dollar amounts) over relative values (proportions). The original designer decided the opposite.

Second, I elevated the TBTF banks over the smaller banks. The original designer also decided the opposite.

Third, I elevated the total value over the disaggregated values (insured, uninsured). The original designer only visualized the uninsured values in the bars.

Which chart is better depends on what story one wants to tell.

***
For today's post, I'm showing another sketch of the same data, with the same goal of putting both absolute and relative values on the same chart.

Redo_visualcapitalist_uninsureddeposits_2b

The starting point of this sketch is the original chart - the stacked bar chart showing relative proportions. I added the insured portion so that it is on almost equal footing with the uninsured portion of the deposits. This edit is crucial to convey the impression of proportions.

My story hasn't changed; I still want to elevate the TBTF banks.

For this version, I try a different way of elevating TBTF banks. The key step is to encode data into the heights of the bars. I use these bar heights to convey the relative importance of banks, as reflected by total deposits.

The areas of the red blocks represent the uninsured amounts. That said, it's not easy to compare rectangular areas when both dimensions are different.

Comparing the total red area with the total yellow area, we learn that the majority of deposits in these banks are uninsured(!)

 


Showing both absolute and relative values on the same chart 1

Visual Capitalist has a helpful overview on the "uninsured" deposits problem that has become the talking point of the recent banking crisis. Here is a snippet of the chart that you can see in full at this link:

Visualcapitalist_uninsureddeposits_top

This is in infographics style. It's a bar chart that shows the top X banks. Even though the headline says "by uninsured deposits", the sort order is really based on the proportion of deposits that are uninsured, i.e. residing in accounts that exceed $250K.  They used a red color to highlight the two failed banks, both of which have at least 90% of deposits uninsured.

The right column provides further context: the total amounts of deposits, presented both as a list of numbers and as a column of bubbles. As readers know, bubbles are not self-sufficient, and if the list of numbers were removed, the bubbles would lose most of their power of communication. Big, small, but how much smaller?

There are little nuggets of text in various corners that provide other information.

Overall, this is a pretty good one as far as infographics go.

***

I'd prefer to elevate information about the Too Big to Fail banks (which are hiding in plain sight). Addressing this surfaces the usual battle between relative and absolute values. While the smaller banks have some of the highest concentrations of uninsured deposits, each TBTF bank holds many times the absolute dollars of uninsured deposits held by the smaller banks.

Here is a revised version:

Redo_visualcapitalist_uninsuredassets_1

The banks are still ordered in the same way, by the proportions of uninsured value. The data being plotted are not the proportions but the actual deposit amounts. Thus, the three TBTF banks (Citibank, Chase and Bank of America) stand out from the crowd. Aside from Citibank, the other two have relatively moderate proportions of uninsured assets, but the sizes of the red bars for any of these three dominate those of the smaller banks.

Notice that I added the gray segments, which portray the amount of deposits that are FDIC protected. I did this not just to show the relative sizes of the banks. Having the other part of the deposits allows readers to answer additional questions, such as: which banks have the most insured deposits? The gray segments also visually present the relative proportions.

***

The most amazing part of this dataset is the amount of uninsured money. I'm trying to think who these account holders are. It would seem like a very small collection of people and/or businesses would be holding these accounts. If they are mostly businesses, is FDIC insurance designed to protect business deposits? If they are mostly personal accounts, then surely only very wealthy individuals hold most of these accounts.

In the above chart, I'm assuming that deposits and assets are referring to the same thing. This may not be the correct interpretation. Deposits may be only a portion of the assets. It would be strange though that the analysts only have the proportions but not the actual deposit amounts at these banks. Nevertheless, until proven otherwise, you should see my revision as a sketch - what you can do if you have both the total deposits and the proportions uninsured.


Some chart designs bring out more information than others

I forgot where I found this chart but here it is:

Nbwa_beer_index

The designer realizes the flaw of the design, which is why the number 50 is placed in a red box, and there is another big red box placed right in our faces telling us that any number above 50 represents growth, while any number below 50 represents shrinkage.

The real culprit is the column chart design, which treats zero as the baseline, not 50. Thus, the real solution is to move away from a column chart design.

There are many possibilities. Here's one using the Bumps chart form:

Junkcharts_redo_nbwabeerpurchasersindex

There are several interesting insights buried in that column chart!

First we learn that almost all segments were contracting in both years.

Next, there is some clustering of segments. The Premium Regular and Cider segments were moving in sync. Craft, FMB/Seltzer and Below Premium were similar in 2022; intriguingly, Below Premium then diverged from the other two segments.

In fact, Below Premium has distinguished itself as the only segment that experienced an improved index relative to 2022!


Thoughts on Daniel's fix for dual-axes charts

I've taken a little time to ponder Daniel Z's proposed "fix" for dual-axes charts (link). The example he used is this:

Danielzvinca_dualaxes_linecolumn

In that long post, Daniel explained why he preferred to mix a line with columns, rather than using the more common dual lines construction: to prevent readers from falsely attributing meaning to crisscrossing lines. There are many issues with dual-axes charts, which I won't repeat in this post; one of their most dissatisfying features is the lack of connection between the two vertical scales, and thus, it's pretty easy to manufacture an image of correlation when it doesn't exist. As shown in this old post, one can expand or restrict one of the vertical axes and shift the line up and down to "match" the other vertical axis.

Daniel's proposed fix retains the dual axes, and he even restores the dual lines construction.

Danielzvinca_dualaxes_estimatedy

How is this chart different from the typical dual-axes chart, like the first graph in this post?

Recall that the problem with using two axes is that the designer could squeeze, expand or shift one of the axes in any number of ways to manufacture many realities. What Daniel effectively did here is to select one specific way to transform the "New Customers" axis (shown in gray).

His idea is to run a simple linear regression between the two time series. Think of fitting a "trendline" in Excel between Revenues and New Customers. Then, use the resulting regression equation to compute "estimated" revenues based on the New Customers series. The coefficients of this regression equation then determine the degree of squeezing/expansion and shifting applied to the New Customers axis.
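Here is a rough sketch of the mechanics in R, using made-up monthly data in place of Daniel's series:

    # Made-up series standing in for New Customers and Revenues
    set.seed(7)
    customers <- round(100 + cumsum(rnorm(24, mean = 2, sd = 10)))
    revenues  <- 5000 + 40 * customers + rnorm(24, mean = 0, sd = 800)

    fit <- lm(revenues ~ customers)     # the "trendline" between the two series
    est_revenues <- fitted(fit)         # the gray line: estimated revenues

    # Both lines now live on the revenue scale; the gaps are the residuals
    plot(revenues, type = "l", col = "blue", xlab = "month", ylab = "revenues")
    lines(est_revenues, col = "gray")

    coef(fit)   # these coefficients fix how the customer axis is squeezed and shifted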

The main advantage of this "fix" is to eliminate the freedom to manufacture multiple realities. There is exactly one way to transform the New Customers axis.

The chart itself takes a bit of time to get used to. The actual values plotted in the gray line are "estimated revenues" from the regression model; thus the blue axis values on the left apply to the gray line as well. The gray axis shows the respective customer values. Because we performed a linear fit, each value of estimated revenues corresponds to a particular customer value. The gray line is thus a squeezed/expanded/shifted replica of the New Customers line (shown in orange in the first graph). The gray line can then be interpreted on two connected scales, and both the blue and gray labels are relevant.

***

What are we staring at?

The blue line shows the observed revenues while the gray line displays the estimated revenues (predicted by the regression line). Thus, the vertical gaps between the two lines are the "residuals" of the regression model, i.e. the estimation errors. If you have studied Statistics 101, you may remember that the residuals feed into the R-squared, which measures the quality of fit of the regression model. R-squared is the square of r, which stands for the correlation between Customers and the observed revenues. Thus the higher the (linear) correlation between the two time series, the higher the R-squared, the better the regression fit, and the smaller the gaps between the two lines.

***

There is some value to this chart, although it'd be challenging to explain to someone who has not taken Statistics 101.

While I like that this linear regression approach is "principled", I wonder why this transformation should be preferred to all others. I don't have an answer to this question yet.

***

Daniel's fix reminds me of a different, but very common, chart.

Forecastvsactualinflationchart

This chart shows actual vs forecasted inflation rates. This chart has two lines but only needs one axis since both lines represent inflation rates in the same range.

We can think of the "estimated revenues" line above as forecasted or expected revenues, based on the actual number of new customers. In particular, this forecast is based on a specific model: one that assumes that revenues is linearly related to the number of new customers. The "residuals" are forecasting errors.

In this sense, I think Daniel's solution amounts to rephrasing the question of the chart from "how closely are revenues and new customers correlated?" to "given the trend in new customers, are we over- or under-performing on revenues?"

Instead of using the dual-axes chart with two different scales, I'd prefer to answer the question by showing this expected vs actual revenues chart with one scale.

This does not eliminate the question about the "principle" behind the estimated revenues, but it makes clear that the challenge is to justify why revenues is a linear function of new customers, and no other variables.

Unlike the dual-axes chart, the actual vs forecasted chart is independent of the forecasting method. One can produce forecasted revenues based on a complicated function of new customers, existing customers, and any other factors. A different model just changes the shape of the forecasted revenues line. We still have two comparable lines on one scale.


All about Connecticut

This dataviz project by CT Mirror is excellent. The project walks through key statistics of the state of Connecticut.

Here are a few charts I enjoyed.

The first one shows the industries employing the most CT residents. The left and right arrows are perfect, much better than the usual dot plots.

Ctmirror_growingindustries

The industries are sorted by decreasing size from top to bottom, based on employment in 2019. The chosen scale is absolute, showing the number of employees. The relative change is shown next to the arrow heads in percentages.

The inclusion of both absolute and relative scales may be a source of confusion as the lengths of the arrows encode the absolute differences, not the relative differences indicated by the data labels. This type of decision is always difficult for the designer. Selecting one of the two scales may improve clarity but induce loss aversion.

***

The next example is a bumps chart showing the growth in residents with at least a bachelor's degree.

Ctmirror_highered

This is more like a slopegraph as it appears to draw straight lines between two time points 9 years apart, omitting the intervening years. Each line represents a state. Connecticut's line is shown in red. The message is clear. Connecticut is among the most highly educated out of the 50 states. It maintained this advantage throughout the period.

I'd prefer to use solid lines for the background states, and the axis labels can be sparser.

It's a little odd that pretty much every line has the same slope. I'm suspecting that the numbers came out of a regression model, with varying slopes by state, but the inter-state variance is low.

In the online presentation, one can click on each line to see the values.

***

The final example is a two-sided bar chart:

Ctmirror_migration

This shows migration in and out of the state. The red bars represent the number of people who moved out, while the green bars represent those who moved into the state. The states are arranged from the largest number of in-migrants to the smallest.

I have clipped the bottom of the chart as it extends to 50 states, and the bottom half is barely visible since the absolute numbers are so small.

I'd suggest showing the top 10 states. Then group the rest of the states by region, and plot them as regions. This change makes the chart more compact, as well as more useful.

***

There are many other charts, and I encourage you to visit and support this data journalism.


Visual cues affect how data are perceived

Here's a recent NYT graphic showing California's water situation at different time scales (link to article).

Nyt_california_drought

It's a small multiples display, showing the spatial distribution of the precipitation amounts in California. The two panels show, respectively, the short-term view (past month) and the longer-term view (past 3 years). Precipitation is measured in relative terms: what is plotted is the ratio of precipitation in the reference period to the 30-year average, with 100 representing that average.

Green is much wetter than average while brown is much drier than average.

The key to making this chart work is a common color scheme across the two panels.

Also, the placement of major cities provides anchor points for our eyes to move back and forth between the two panels.

***

The NYT graphic is technically well executed. I'm a bit unhappy with the headline: "Recent rains haven't erased California's long-term drought".

At the surface, the conclusion seems sensible. Look, there is a lot of green, even deep green, on the left panel, which means the state got lots more rain than usual in the past month. Now, on the right panel, we find patches of brown, and very little green.

But pay attention to the scale. The light brown color, which covers the largest area, has values of 70 to 90; thus, these regions received 10-30% less precipitation in the past three years than their 30-year average.

Here's the question: what does it mean by "erasing California's long-term drought"? Does the 3-year average have to equal or exceed the 30-year average? Why should that be the case?

If we took all 3-year windows within those 30 years, we're definitely not going to find that each such 3-year average falls at or above the 30-year average. To illustrate this, I pulled annual rainfall data for San Francisco. Here is a histogram of 3-year averages for the 30-year period 1991-2020.

Redo_nyt_californiadrought_sfrainfall

For example, the first value is the average rainfall for the years 1989, 1990 and 1991; the next value is the average of 1990, 1991, and 1992; and so on. Each value is expressed relative to the overall average of the 30-year window. There are two more values beyond 2020 that are not shown in the histogram. These are 57% and 61%, so against the 30-year average, those two 3-year windows were drier than usual.
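For anyone who wants to replicate this kind of check, here is a sketch with made-up annual rainfall numbers (the actual San Francisco data are not reproduced here):

    # Hypothetical annual rainfall for years 1989-2020 (inches)
    set.seed(42)
    rain  <- rnorm(32, mean = 23, sd = 6)
    avg30 <- mean(rain[3:32])                   # 30-year average, 1991-2020

    # Trailing 3-year averages (windows ending 1991-2020), relative to the 30-year average
    rel3 <- sapply(3:32, function(i) mean(rain[(i - 2):i]) / avg30)
    hist(rel3 * 100, main = "3-year averages as % of the 30-year average")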

The above shows the underlying variability of the 3-year averages inside the reference time window. We first have to define "normal", and that might be a range, say between 70% and 130%.

In the same way, we can establish the "normal" range for the entire state of California. If it's also 70% to 130%, then the last 3 years as shown in the map above should be considered normal.