Elevator shoes for column charts

Continuing my review of the charts spammed to me, I wasn't expecting to find anything of interest in the following:

Masterworks_chart4

It’s a column chart showing the number of years of data available for different asset classes. The color has little value other than to subtly draw the reader’s attention to the bar called “Art,” which is the focus of the marketing copy.

Do the column heights encode the data?

The answer is no.

***

Let’s take a little journey. First I notice there is a grid behind the column chart, hanging above the baseline.

Redo_masterworks4_grid
I marked out two columns with values 50 and 25, so the second column should be exactly half the height of the first. Each column consists of two parts: the first overlaps the grid while the second connects the bottom of the grid to the baseline. The second part is constant for every column; I label this distance Y.

Against the grid, the column "50" spans 9 cells while the column "25" spans 4 cells. I label the height of one grid cell X. Now, if the first column is to be twice the height of the second, the equation 9X + Y = 2*(4X + Y) should hold.

Simplifying, 9X + Y = 8X + 2Y, so the only solution is X = Y. In other words, the distance from the bottom of the grid to the baseline must be exactly the height of one grid cell if the column heights are to faithfully represent the data. Well – it's obvious from the chart that the former is larger than the latter.
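
For those who want to verify the algebra, here is a minimal sketch using sympy; the cell counts of 9 and 4 are the ones read off the grid above.

    from sympy import Eq, solve, symbols

    # X = height of one grid cell, Y = distance from the bottom of the grid to the baseline
    X, Y = symbols("X Y", positive=True)
    print(solve(Eq(9 * X + Y, 2 * (4 * X + Y)), Y))  # [X], i.e. Y must equal X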

In the revision, I have chopped off the excess height by moving the baseline upwards.

Redo_masterworks4_corrected

That’s the mechanics. Now, figuring out the motivation is another matter.


Chartjunk as marketing copy

I got a spam marketing message last week. How exciting. They even used a subject line that had absolutely nothing to do with the content, baiting me to open it. And open I did, to some data graphics horrors.

The marketer promises a whole series of charts to prove that art is a great asset class for investment returns.

The very first chart already caught my full attention. It's this one:

Masterworks_chart1

It's a simple bar chart, with four values. Looks innocuous.

I'm unable to appreciate the recent trend of aligning bars at their middles rather than at their bases. So I converted the chart to the canonical form:

Redo_masterworks_1_barchart

Do you see the problem?

The second value ($1.7 trillion) is exactly half the size of the first value ($3.4 trillion) and yet the second bar is two-thirds of the length of the first bar. So, the size of the second bar is exaggerated relative to its label – and that’s the bar displaying the market size for “art,” which is what the spammer is pitching.

The bottom pair of values share the same relationship: $0.8 trillion is exactly half of $1.6 trillion. Again, the relative lengths of those two bars are not 50% but slightly over 60%.

Redo_masterworks_1_barchart_excess
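
If you want to replicate the check, here is a sketch; the dollar values come from the bar labels, while the bar lengths are hypothetical measurements (say, in pixels) you would take off the image yourself.

    # dollar values from the labels; lengths are hypothetical pixel measurements
    values = {"first_bar": 3.4, "art": 1.7}      # $ trillions
    lengths = {"first_bar": 300, "art": 200}     # measured off the image

    value_ratio = values["art"] / values["first_bar"]      # 0.50
    length_ratio = lengths["art"] / lengths["first_bar"]   # about 0.67 in the original
    print(f"value ratio {value_ratio:.2f} vs length ratio {length_ratio:.2f}")
    # in a faithful bar chart, the two ratios would match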

Did the designer think that the bar lengths could be customized to whatever s/he desires? This one is hard to crack.

***

The sixth chart in the series is a different kind of puzzle:

Masterworks_chart6

All three lines have the exact same labels but show different values over time.

***

And they have pie charts, of course. Take a look:

Masterworks_chart

Something went wrong here too. I'll leave it to my readers who can certainly figure it out :)

***

These charts were probably spammed to at least thousands of people.

 


Two metrics in-fighting

The Wall Street Journal shows the following chart which pits two metrics against each other:

Wsj_salaries25to29

The primary metric is the change in median yearly salary between the two periods of time. We presume it's the primary one because it appears in the chart title, and because the blue bars are more readable than the green bubbles. The secondary metric is the median yearly salary in the later period.

That, I believe, was the intended design. When I saw this chart, my eyes went to the numbers inside the green bubbles. Perhaps it's because I didn't read the chart title first, and the horizontal axis wasn't labelled, so it wasn't obvious what the blue bars encoded.

As with most bubble charts, the data labels exist to cover up the inadequacy of circular areas. The self-sufficiency test - removing the data labels - shows this well:

Redo_wsj_salaries25to29

It's simply impossible to know what values should be in each bubble, or to perceive the relative sizes of those bubbles.
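
A toy calculation shows why. The salary figures below are made up, not taken from the chart; the point is only how little bubble radii vary when the underlying values are close.

    import math

    salaries = [52_000, 48_000, 45_000, 43_000]   # hypothetical medians
    radii = [math.sqrt(s) for s in salaries]      # area encoding: radius scales with sqrt(value)
    print([round(r / radii[0], 2) for r in radii])
    # [1.0, 0.96, 0.93, 0.91] -- radius differences of under 10% are hard to judge by eye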

***

Reversing the order of the blue bars also helps:

Redo_wsjsalaries25to29_2

The original order is one of the more annoying defaults in most visualization packages. Because internally the categories are numbered 1, 2, 3, ..., and because the convention is to have values run higher as they run up the vertical axis, these packages place the top-ranked item at the bottom of the chart.

Most people read top to bottom, which means that they read the least important item first, and the most important item last!

In most visualization packages, it takes only 1 click or 1 action to reverse the order of the items. Please do it!
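
In matplotlib, for example, the fix is a single call; the categories and values below are invented.

    import matplotlib.pyplot as plt

    categories = ["Metro A", "Metro B", "Metro C", "Metro D"]   # sorted largest to smallest
    values = [12.0, 9.5, 7.2, 4.8]

    fig, ax = plt.subplots()
    ax.barh(categories, values)
    ax.invert_yaxis()   # without this, the largest item sits at the bottom
    plt.show()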

***

For change over time, I like using a Bumps chart, also called a slope graph:

Redo_wsjsalaries25to29_3
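
For readers who want to try it, here is a bare-bones sketch of a slope graph; the names and salary figures are invented, not the WSJ data.

    import matplotlib.pyplot as plt

    data = {"Metro A": (48_000, 54_000), "Metro B": (45_000, 49_000), "Metro C": (42_000, 43_500)}

    fig, ax = plt.subplots()
    for name, (earlier, later) in data.items():
        ax.plot([0, 1], [earlier, later], marker="o", color="gray")
        ax.text(-0.05, earlier, f"{name}  {earlier:,}", ha="right", va="center")
        ax.text(1.05, later, f"{later:,}", ha="left", va="center")
    ax.set_xticks([0, 1])
    ax.set_xticklabels(["Earlier period", "Later period"])
    ax.set_xlim(-0.6, 1.6)
    ax.set_yticks([])
    plt.show()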


An elaborate data vessel

Visualcapitalist_globaloilproduction

I recently came across the following dataviz showing global oil production (link).

This is an ambitious graphic that addresses several questions of composition.

The raw data show the amount of production by country adding up to the global total. The countries are then grouped by region. Further, the graph presents an oil-and-gas-specific grouping, as shown in the legend just below the chart title. This grouping is indicated by the color of the circumference of the circle containing the flag of each country.

This chart form is popular in modern online graphics programs. It is like an elaborate data vessel. Because the countries are lined up around the barrel, a space has been created on three sides to admit labels and text annotations. This is a strength of this chart form.

***

The chart conveys little information about the underlying data. Each country is given its own oddly shaped polygon, making it impossible to compare sizes. It's certainly possible to pick out the U.S., Russia, and Saudi Arabia as the top producers. But in presenting the ranks of the data, this chart form pales in comparison to a straightforward data table, or a bar chart. The less said about presenting values, the better.

Indeed, our self-sufficiency test exposes the inability of these polygons to convey the data. This is precisely why almost all values of the dataset are present on the chart.

***

The dataviz subtly presumes some knowledge on the part of the readers.

The regions are not directly labeled. The readers must know that Saudi Arabia is in the Middle East, the U.S. is part of North America, etc. Admittedly this is not a big ask, but it is an ask.

It is also assumed that readers know their flags, especially those of smaller countries. Some of the small polygons have no space left for country names and they are labeled with just flags.

Visualcapitalist_globaloilproduction_nocountrylabels

In addition, knowing country acronyms is required for smaller countries as well. For example, in Africa, we find AGO, COG and GAB.

Visualcapitalist_globaloilproduction_countryacronyms

In this chart form, the designer treats each country according to the space it occupies on the chart (except for those countries that found themselves on the edges of the barrel). Font sizes, icons, labels, acronyms, data labels, etc. all vary.

The readers are assumed to know the significance of OPEC and OPEC+. This grouping plays second fiddle, and can be found via the color of the circumference of the flag icons.

Visualcapitalist_globaloilproduction_opeclegend

I'd not have assigned a color to the non-OPEC countries, and just used yellow and blue for OPEC and OPEC+. It's a small edit, but it makes the search for the colored edges more efficient.

Visualcapitalist_globaloilproduction_twoopeclabels

***

Let’s now return to the perception of composition.

In exactly the same manner as individual countries, the larger regions are represented by polygons that have arbitrary shapes. One can strain to compile the rank order of regions but it’s impossible to compare the relative values of production across regions. Perhaps this explains the presence of another chart at the bottom that addresses this regional comparison.

The situation is worse for the OPEC/OPEC+ grouping. Now, the readers must find all flag icons with edges of a specific color, mentally piece together these arbitrarily shaped polygons, realize that they won't fit together nicely, and then mentally morph the shapes in an area-preserving manner in order to complete the puzzle.

This is why I said earlier this is an elaborate data vessel. It’s nice to look at but it doesn’t convey information about composition as readers might expect it to.

Visualcapitalist_globaloilproduction_excerpt


Losing the plot while stacking up the bars

I came across this chart from an infographic that claims to show which zip codes in the U.S. are the "dirtiest" (link). I won't go into the data analysis in this post - it's the usual "open data" style analysis that takes whatever data it can find (in this case, 311 calls) and makes some hay out of it.

03_Dirtiest-Zip-Codes-in-New-York

It's amazing how such analyses frequently land on the Top N / Bottom N table. Top/Bottom N is euphemistically called "insights". But "insights" should answer at least one of the following questions: Where are these zip codes? Why does 11216 have the highest rate of complaints while 11040 has the lowest? What measures can be taken to make the city cleaner?

***

The basic form chosen for this graphic is the bar chart. The data concern the number of sanitation complaints per 100,000 people (they didn't disclose how they decided whether a complaint was about sanitation).

To mitigate the "boredom" of bar charts, the designer made the edges of the bars squiggly, and added icons of items found in trash inside the bars. These are thankfully not too intrusive.

Why are all the data printed on the chart? Try mentally wiping the data labels, and you'll understand why the designer did it.

If readers look at the data labels rather than the bars, then the data visualization surely has failed. I'd prefer to use an axis.

If you spend a few more minutes on the chart, you may notice the gray parts. This is not a simple bar chart but a stacked bar chart. In effect, every bar is referenced to the first bar, which shows the maximum number of complaints per 100K people. For example, zip code 10474 has about 90% of the complaints experienced in zip code 11216, the "dirtiest" place in New York.
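
To make the construction concrete, here is a sketch of how such a stacked bar is assembled; the rates are invented, keeping only the stated relationship that 10474 sits at roughly 90% of 11216.

    import matplotlib.pyplot as plt

    zips = ["11216", "10474"]
    rates = [30_000, 27_000]            # hypothetical complaints per 100K
    max_rate = max(rates)

    fig, ax = plt.subplots()
    ax.barh(zips, rates, color="darkorange")                                      # the actual rate
    ax.barh(zips, [max_rate - r for r in rates], left=rates, color="lightgray")   # filler up to the max
    plt.show()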

***

The infographic then moves on to Los Angeles, and repeats the Top N/Bottom N presentation:

04_Dirtiest-Zip-Codes-in-Los-Angeles

With this, the plot is lost.

For an inexplicable reason, the dirtiest zip code in LA does not occupy the entire length of its bar. The worst zip code here fills only 87% of the bar length, implying that the full bar represents a value of 34,978 complaints per 100K people. How did the designer decide on this number?

As a result, every other value is referenced to 34,978 and not to the rate of complaints in the dirtiest zip code!

***

The infographic eventually covers Houston. Here are the dirtiest two zip codes in Houston:

Housefresh_houston_dirtiest2

How does one interpret the orange section of the second bar? The original intention is for us to see that this zip code is about 80% as dirty as the dirtiest zip code. However, the full length of the bar here does not represent the dirtiest zip code.

***

We also get a hint as to why this entire analysis is problematic. The values in LA are way bigger than those in NY, about four times higher at the top of the table. Is LA really that much dirtier than NY? Or perhaps the data have not been properly aligned between cities?

 

P.S. [8-26-2023] Added link to the infographic.

 


Partition of Europe

A long-time reader sent me the following map via Twitter:

Europeelects_map

This map tells how the major political groups divide up the European Parliament. I’ll spare you the counting. There are 27 countries, and nine political groups (including the "unaffiliated").

The key chart type is a box of dots. Each country gets its own box. Each box has its own width. What determines the width? If you ask me, it’s the relative span of the countries on the map. For example, the narrow countries like Ireland and Portugal have three dots across while the wider countries like Spain, Germany and Italy have 7, 10 and 8 dots across respectively.

Each dot represents one seat in the Parliament. Each dot has one of 9 possible colors. Each color shows a political lean e.g. the green dots represent Green parties while the maroon dots display “Left” parties.

The end result is a counting game. If we are interested in counts of seats, we have to literally count each dot. If we are interested in the proportion of seats, pick your poison: either eyeball it or count each color as well as the total.

Who does the underlying map serve? Only readers who know the map of Europe. If you don't know where Hungary or Latvia is, good luck. The physical constraints of the map work against the small-multiples setup of the data. In a small-multiples display, you want each chart to be identical, except for the country-specific data. The small-multiples structure requires a panel of equal-sized cells. The map does not offer this feature, as many small countries are crammed into Eastern Europe. Also, Europe has a few tiny states, e.g. Luxembourg (population 660K) and Malta (population 520K). To overcome the map, the designer produces boxes of different sizes, substantially loading up the cognitive burden on readers.

The map also dictates where the boxes are situated. The centroids of each country form the scaffolding, with adjustments required when the charts overlap. This restriction ensures a disorderly appearance. By contrast, the regular panel layout of a small multiples facilitates comparisons.

***

Here is something I sketched using a tile map.

Eu parties print sm

First, I have to create a tile map of European countries. Some parts, e.g. the western part, are straightforward. The eastern side becomes very congested.

The tile map encodes location in an imprecise sense. Think about the scaffolding of centroids of countries referred to earlier. The tile map imposes order on the madness - we're shifting these centroids so that they line up in a tidier pattern. What we gain in comparability we concede in location precision.

For the EU tile map, I decided to show the Baltic countries in a row rather than a column; the latter would have been more faithful to the true geography. Malta is shown next to Italy even though it could have been placed below. Similarly, Cyprus in relation to Greece. I also included several key countries that are not part of the EU for context.

Instead of raw seat counts, I'm showing the proportion of seats within each country claimed by each political group. I think this metric is more useful to readers.
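
The metric is a simple within-country share. Here is a sketch of the calculation, assuming a tidy table of seats by country and group; the counts are invented.

    import pandas as pd

    seats = pd.DataFrame({
        "country": ["Germany", "Germany", "Ireland", "Ireland"],
        "group":   ["EPP", "Greens", "EPP", "Greens"],
        "seats":   [30, 21, 5, 2],
    })
    seats["share"] = seats["seats"] / seats.groupby("country")["seats"].transform("sum")
    print(seats)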

The legend is itself a chart that shows the aggregate statistics for all 27 countries.


When words speak louder than pictures

I've been staring at this chart from the Wall Street Journal (link) about U.S. workers working remotely:

Wsj_remotework_byyear

It's one of those offerings on which, I think, the designer spent a lot of effort, but ultimately didn't realize that the reader would spend equal if not more effort deciphering it.

However, the following paragraph lifted straight from the article says exactly what needs to be said:

Workers overall spent an average of 5 hours and 25 minutes a day working from home in 2022. That is about two hours more than in 2019, the year before Covid-19 sent millions of workers scrambling to set up home offices, and down just 12 minutes from 2021, according to the Labor Department’s American Time Use Survey.

***

Why is the chart so hard to read?

_trifectacheckup_image

It's mostly because the visual is fighting the message. In the Trifecta Checkup (link), this is represented by a disconnect between the Q(uestion) and the V(isual) corners - note the green arrow between these two corners.

The message concentrates on two comparisons: first, the increase in amount of remote work after the pandemic; and second, the mild decrease in 2022 relative to 2021.

On the chart, the elements that grab my attention are (a) the green and orange columns, (b) the shading in the bottom part of those green and orange columns, (c) the thick black line that runs across the chart, and (d) the indication on the left side that tells me one unit is an hour.

None of those visual elements directly addresses the comparisons. The first comparison - before and after the pandemic - is found in how much the green column spikes above the thick black line. Our comprehension is slowed by the decision to forgo the typical axis labels in favor of chopping the columns into one-hour blocks.

The second comparison - between 2022 and 2021 - is found in the white space above the top of the orange column.

So, in reality, the text labels that say exactly what needs to be said are carrying a lot of weight. A slight edit to the pointers helps connect those descriptions to the visual depiction, like this:

Redo_junkcharts_wsj_remotework

I've essentially flipped the tactics used in the various pointers. For the average level of remote work pre-pandemic, I dispense with any pointers, while using double-headed arrows to indicate differences across time.

Nevertheless, this modified chart is still too complex.

***

Here is a version that aligns the visual to the message:

Redo_junkcharts_wsj_remotework_2

It's a bit awkward because the 2-hour-48-minute figure is the 2021 number minus the average of 2015-19, skipping the year 2020.

 


Why some dataviz fail

Maxim Lisnic's recent post should delight my readers (link). Thanks to Alek for the tip. Maxim argues that charts "deceive" not merely by using visual tricks but by a variety of other non-visual means.

This is also the reasoning behind my Trifecta Checkup framework which looks at a data visualization project holistically. There are lots of charts that are well designed and constructed but fail for other reasons. So I am in agreement with Maxim.

He analyzed "10,000 Twitter posts with data visualizations about COVID-19", and found that 84% are "misleading" while only 11% of the 84% "violate common design guidelines". I presume he created some kind of computer program to evaluate these 10,000 charts, and he compiled some fixed set of guidelines that are regarded as "common" practice.

***

Let's review Maxim's examples in the context of the Trifecta Checkup.

_trifectacheckup_image

The first chart shows Covid cases in the U.S. in July and August of 2021 (presumably the time when the chart was published) compared to a year ago (prior to the vaccination campaign).

Maxim_section1

Maxim calls this cherry-picking. He's right - and this is a pet peeve of mine, even with all the peer-reviewed scientific research. In my paper on problems with observational studies (link), my coauthors and I call for a new way forward: researchers should put their model calculations up on a website which is updated as new data arrive, so that we can be sure that the conclusions they published apply generally to all periods of time, not just the time window chosen for the publication.

Looking at the pair of line charts, readers can quickly discover its purpose, so it does well on the Q(uestion) corner of the Trifecta. The cherry-picking relates to the link between the Question and the Data, showing that this chart suffers from subpar analysis.

In addition, I find that the chart also misleads visually - the two vertical scales are completely different: the scale on the left chart spans about 60,000 cases while on the right, it's double the amount.

Thus, I'd call this a Type DV chart, offering opportunities to improve in two of the three corners.

***

The second chart cited by Maxim plots a time series of all-cause mortality rates (per 100,000 people) from 1999 to 2020 as columns.

The designer does a good job drawing our attention to one part of the data - that the average increase in all-cause mortality rate in 2020 over the previous five years was 15%. I also like the use of a different color for the pandemic year.

Then, the designer lost the plot. Instead of drawing a conclusion based on the highlighted part of the data, s/he pushed a story that the 2020 rate was about the same as the 2003 rate. If that was the main message, then instead of computing a 15% increase relative to the past five years, s/he should have shown how the 2003 and 2020 levels are the same!

On a closer look, there is a dashed teal line on the chart but the red line and text completely dominate our attention.

This chart is also Type DV. The intention of the designer is clear: the question is to put the jump in all-cause mortality rate in a historical context. The problem lies again with subpar analysis. In fact, if we take the two insights from the data, they both show how serious a problem Covid was at the time.

When the rate returned to the level of 2003, we effectively gave up, in a few months, all the gains made over 17 years.

Besides, a jump of 15% from year to year is highly significant if we look at all the other year-to-year changes shown on the chart.

***

The next section concerns a common misuse of charts to suggest causality when the data could only indicate correlation (and where the causal interpretation appears to be dubious). I may write a separate post about this vast topic in the future. Today, I just want to point out that this problem is acute with any Covid-19 research, including official ones.

***

I find the fourth section of Maxim's post to be less convincing. In the following example, the tweet includes two charts, one showing proportion of people vaccinated, and the other showing the case rate, in Iceland and Nigeria.

Maxim_section4

This data visualization is poor even on the V(isual) corner. The first chart includes lots of countries that are irrelevant to the comparison. It includes the unnecessary detail of fully versus partially vaccinated - unnecessary because the two countries selected are at opposite ends of the scale. The color coding is out of sync between the two charts.

Maxim's critique is:

The user fails to account, however, for the fact that Iceland had a much higher testing rate—roughly 200 times as high at the time of posting—making it unreasonable to compare the two countries.

And the section is titled "Issues with Data Validity". It's really not that simple.

First, while the differential testing rate is one factor that should be considered, this factor alone does not account for the entire gap. Second, this issue alone does not disqualify the data. Third, if testing rate differences should be used to invalidate this set of data, then all of the analyses put out by official sources lauding the success of vaccination should also be thrown out since there are vast differences in testing rates across all countries (and also across different time periods for the same country).

One typical workaround for differential testing rates is to look at deaths rather than cases. For the period of time plotted on the case curve, Nigeria's cumulative deaths per million are about one-eighth of Iceland's. The real problem is again in the Data analysis, and it is about how to interpret these data causally.

This example is yet another Type DV chart. I'd classify it under problems with "Causal Inference". "Data Validity" is definitely a real concern; I just don't find this example convincing.

***

The next section, titled "Failure to account for statistical nuance," is a strange one. The example is a chart that the CDC puts out showing the emergence of cases in a specific county, with cases classified by vaccination status. The chart shows that the vast majority of cases were found in people who were fully vaccinated. The person who tweeted concluded that vaccinated people are the "superspreaders". Maxim's objection to this interpretation is that most cases are in the fully vaccinated because most people are fully vaccinated.

I don't think it's right to criticize the original tweeter in this case. If by superspreader, we mean people who are infected and out there spreading the virus to others through contacts, then what the data say is exactly that most such people are fully vaccinated. In fact, one should be very surprised if the opposite were true.

Indeed, this insight has major public health implications. If the vaccine is indeed 90% effective at stopping cases, we should not be seeing that level of cases. And if the vaccine is only moderately effective, then we may not be able to achieve "herd immunity" status, as was the plan originally.

I'd be reluctant to complain about this specific data visualization. It seems that the data allow different interpretations - some of which are contradictory but all of which are needed to draw a measured conclusion.

***
The last section on "misrepresentation of scientific results" could use a better example. I certainly agree with the message: that people have confirmation bias. I have been calling this "story-first thinking": people with a set story visualize only the data that support their preconception.

However, the example given is not that. The example shows a tweet that contains a chart from a scientific paper that apparently concludes that hydroxychloroquine helps treat Covid-19. Maxim adds that this study was subsequently retracted. If the tweet was sent prior to the retraction, then I don't think we can grumble about someone citing a peer-reviewed study published in the Lancet.

***

Overall, I like Maxim's message. In some cases, I think there are better examples.

 

 


More on equal-area histograms

Today, I'm returning to those "equal-area histograms" that Andrew wrote about last month. I have two previous posts about this. The first post introduces the concept: in a traditional histogram, the columns have the same bin width while the column heights can represent a variety of metrics, such as counts, relative frequencies (i.e. proportion of the data) and densities; in the equal-area histogram, the columns have varying widths while the area of each column is constant, and determined by the number of bins (columns).

Junkcharts_histogram_percentogram_priorpost

Here is a comparison of the two types of histograms.

In a second post, I explained the differences between using counts, frequencies and densities on the vertical axis. The underlying issue is that the histogram is not merely a column chart, in which the width of the columns is arbitrary and data-free - in the histogram, both the heights and widths of columns carry meaning. One feature of the histogram that almost everyone expects is that the areas of the columns sum to 1. This aligns with a desired interpretation of probabilities of data falling into specified ranges, as we'd like the amount of data in the entire range to add up to 100%. Unfortunately, these two expectations - heights as probabilities and areas summing to 1 - are usually incompatible with each other.

If the height of the columns represents the probability of data falling into the range as indicated by its width, then the sum of the column heights is 1, which implies that the sum of the column areas cannot be 1. On the other hand, if the column areas add up to 1, then the column heights will not add up to 1, and thus, in this scenario, we cannot interpret the column heights to be probabilities. As explained in the second post, the column heights in this situation are densities, which can be defined as the proportion of data divided by the bin width. Intuitively, it gives information on how dense or sparse the data are within the specified range.
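
Here is a quick check of that property on a toy normal sample, using numpy's density option; the heights do not sum to 1, but the column areas do.

    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(loc=4, scale=1, size=10_000)

    heights, edges = np.histogram(data, bins=100, density=True)
    widths = np.diff(edges)
    print(heights.sum())             # not 1: the heights are densities, not probabilities
    print((heights * widths).sum())  # 1.0: the column areas sum to 1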

***

Today's post starts with a toy dataset, containing randomly generated values from a normal distribution (bell curve) centered at 4 and with standard deviation 1.

Here is the traditional histogram of the dataset, using 100 equal-width bins. (I generated 10,000 values.)

Histogram_normals

Four_precentograms_normals

Next, I created a panel of four equal-area histograms, with increasing numbers of bins. Each is built from the same underlying dataset.

The first histogram divides the data into 4 bins; then 10 bins, 20 bins and 100 bins.

In the 4-bin case, each column contains 1/4 = 25% of the data. The middle two columns contain 50% of the data, and they have high densities, as these columns are narrow. It's a crude approximation of the familiar bell curve.

As we increase the number of bins, the columns in the middle of the distribution, where most of the data are concentrated, become narrower. In the sparse regions, the column width doesn't necessarily grow because each column must contain 1/n of the data, where n is the number of columns. As the number of columns increases, each column contains less of the data.

The bottom chart is the "percentogram", which is what Andrew's correspondent proposed. The number of bins is set to 100, so each column contains exactly 1 percent of the data. For a normal distribution, the columns in the middle are very tall and thin.
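
Here is a sketch of how one might construct the percentogram: the bin edges are the percentiles of the data, so each column holds exactly 1% of the observations, and the heights are densities.

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    data = rng.normal(loc=4, scale=1, size=10_000)

    edges = np.percentile(data, np.linspace(0, 100, 101))   # 101 edges -> 100 equal-count bins
    widths = np.diff(edges)
    heights = 0.01 / widths                                 # density = (share of data) / (bin width)

    fig, ax = plt.subplots()
    ax.bar(edges[:-1], heights, width=widths, align="edge", color="darkorange")
    plt.show()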

The reason why the middle of the percentogram looks faded is that I asked for a white border around each column. But when the columns are so thin, even if one sets the border width very small, what readers see is a mixture of orange and white.

With a high number of bins, we notice a few things: (a) the outline of the histogram becomes more "ragged" the more bins there are, (b) the middle columns become razor-thin, and (c) the width conceded by the middle columns is absorbed not by the columns at the edges but by those between the peak and the edges.

I'm struggling a bit to justify this percentogram versus the typical, equal-width histogram.

Let me go down a different path.

***

In "principled" histograms, the column heights represent data densities, while the total area of the columns adds up to 1. This leads us to a new understanding of the relationship between the equal-width histogram and the equal-area histogram.

We start with data density defined by (proportion of data) / (bin width). Those two values are not independent - one is fully determined by the other, given the underlying dataset. In a traditional equal-width histogram, the question is: how much of the data is found in a column of fixed width? In the new equal-area histogram, the question is: how wide is the bin that contains a fixed amount of data? In the former, the denominator is fixed while the numerator varies; the opposite occurs in the latter.

***

We also recognize that given the range of the data, there is a relationship between the sets of bin widths in the two types of histograms. In the traditional histogram, all bin widths have the same value, equal to the range of the data divided by the number of bins. Think of this as the average bin width. In an equal-area histogram, the bin widths vary; however, the sum of the bin widths must still add up to the range of the data. For two comparable histograms with the same number of bins, the average bin width must therefore be the same for both. (I'm ignoring any rounding situations in which the range of the histogram is larger than the range of the data.)

Now, consider the middle of the normal distribution where the data are dense. In the traditional histogram, the column in the middle still has width equal to the average bin width. In the equal-area histogram, the middle column has width much smaller than the average bin width. In other words, we can think of the column in the traditional histogram being broken up into many thin and slim columns in the equal-area histogram, each containing 1% of the data in the case of the percentogram.

The height of the column is the data density. In the traditional histogram, the middle column is the pooled sample of larger size; in the equal-area histogram, each of those thin and slim columns is a partition of the sample. This explains observation (a) above in which the outline of the equal-area histogram is more ragged - it's because each column contains fewer data from which to estimate the data density.

But this raggedness is artificial, sampling noise.

***

The sparse areas are more complicated still. It's also the reverse of the above. On the edges of the normal distribution, the columns of the new histogram are wider than those of the traditional histogram. So, we can think of breaking up the edge column of the new histogram into multiple columns of the traditional histogram.

The interpretation is more complicated because the data are sparse in this region. Obviously, the estimates of density on the traditional histogram in sparse regions are poor because not enough data reside in there. The density estimate on the new histogram is based on a larger sample size.

However.

Yes, however, whether the new histogram's density estimate is better depends on the shape of the tail of the distribution. A normal distribution has exponential tails, which means that the data density declines quite drastically the further we go into the tail. Therefore, the new histogram averages the data densities across a large part of the tail, wiping out the exponential shape while the traditional histogram preserves that shape - at the expense of greater sampling variability due to smaller sample sizes.

***

For what it's worth, let's look at some histograms for an exponential random variable.

Here is the traditional histogram:

Histogram_expos

The data are extremely dense on the left side, with a long tail on the right side.

Four_percentograms_expos

Here are the four equal-area histograms for 4, 10, 20 and 100 bins.

The four-bin version gives a nice summary of the shape. As the number of bins goes up, as before, the denser regions now have tall, thin spikes. Again, because of the white borders, the last histogram with 100 bins is faded where the data are densest. (So obviously, don't follow my lead, and eliminate borders if you want to use it.)

The 100-bin version looks almost the same as the traditional histogram.

***

At this stage of the exploration, I still haven't found a compelling reason to switch to equal-area histograms. In the denser regions, they add sampling noise. If I don't care about the sparser areas - specifically, the shape of the tails - maybe they provide a cleaner presentation.

 


Graph workflow and defaults wreak havoc

For the past week or 10 days, every time I visited one news site, it insisted on showing me an article about precipitation in North Platte. It's baiting me to write a post about this lamentable bar chart (link):

Northplatte_rainfall

***

This chart has problems, and the problems start with the tooling, which dictates a workflow.

I imagine what the chart designer had to deal with.

For a bar chart, the tool requires one data series to be numeric, and the other to be categorical. A four-digit year is a number, which can be treated either as numeric or categorical. In most cases, and by default, numbers are considered numeric. To make this chart, the user asked the tool to treat years as categorical.

Junkcharts_northplattedry_datatypes

Many tools treat categories as distinct entities ("nominal"), mapping each category to a distinct color. So they have 11 colors for 11 years, which is surely excessive.

This happens because the year data are not truly categorical. These eleven years were picked based on the amount of rainfall. There isn't a single year with two values; it's not even possible. The years are just irregularly spaced indices. Nevertheless, the tool misbehaves if the year data are regarded as numeric. (It automatically selects a time-series line chart, because someone's data visualization flowchart says so.) Mis-specifying the data in order to trick the tool has consequences.

The designer's intention is to compare the current year 2023 to the driest years in history. This is obvious from the subtitle in which 2023 is isolated and its purple color is foregrounded.

Junkcharts_northplattedry_titles

How unfortunate then that among the 11 colors, this tool grabbed 4 variations of purple! I like to think that the designer wanted to keep 2023 purple, and turn the other bars gray -- but the tool thwarted this effort.

Junkcharts_northplattedry_purples
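
Here is a sketch of what overriding the defaults might look like in matplotlib; the years and rainfall figures are hypothetical. The years are treated as plain labels, every bar is gray except the highlighted one, no legend is drawn because the colors encode nothing, and the bars are sorted driest first.

    import matplotlib.pyplot as plt

    years = ["2023", "1893", "1934", "1936", "2002", "2012"]   # hypothetical, driest first
    rainfall = [6.5, 8.9, 9.1, 9.4, 9.8, 10.2]                 # hypothetical inches of rain

    colors = ["purple" if y == "2023" else "lightgray" for y in years]

    fig, ax = plt.subplots()
    ax.barh(years, rainfall, color=colors)
    ax.invert_yaxis()                      # driest year at the top
    ax.set_xlabel("Rainfall in inches")    # name the variable on the chart itself
    plt.show()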

The tool does other offensive things. By default, it makes a legend for categorical data. I like the placement of the legend right beneath the title, a recognition that on most charts, the reader must look at the legend first to comprehend what's on the chart.

Not so in this case. The legend is entirely redundant. Removing the legend does not affect our cognition one bit. That's because the colors encode nothing.

Worse, the legend sows confusion because it presents the same set of years in chronological order while the bars below are sorted by amount of precipitation: thus, the order of colors in the legend differs from that in the bar chart.

Junkcharts_northplattedry_legend

I can imagine the frustration of the designer who finds out that the tool offers no option to delete the legend. (I don't know this particular tool but I have encountered tools that are rigid in this manner.)

***

Something else went wrong. What's the variable being plotted on the numeric (horizontal) axis?

The answer is inches of rainfall, but it is actually not found anywhere on the chart. How is it possible that a graphing tool does not indicate the variable being plotted?

I imagine the workflow like this: the tool by default puts an axis label which uses the name of the column that holds the data. That column may have a name that is not reader-friendly, e.g. PRECIP. The designer edits the name to "Rainfall in inches". Being a fan of the Economist graphics style, they move the axis label to the chart title area.

The designer now works the chart title. The title is made to spell out the story, which is that North Platte is experiencing a historically dry year. Instead of mentioning rainfall, the new title emphasizes the lack thereof.

The individual steps of this workflow make a lot of sense. It's great that the title is informative, and tells the story. It's great that the axis label was fixed to describe rainfall in words not database-speak. But the end result is a confusing mess.

The reader must now infer that the values being plotted are inches of rainfall.

Further, the tool imposes a default sorting of the bars. The bars run from longest to shortest; in this case, the longest bar has the most rainfall. After reading the title, we expect to find data on the Top 11 driest years, from the driest of the driest to the least dry of the driest. But what we encounter is the opposite order.

Junkcharts_northplattedry_sorting

Most graphics software behaves like this because it plots the ranks of the categories, with the driest being rank 1, counting up. Because the vertical axis moves upward from zero, the top-ranked item ends up at the bottom of the chart.

***

_trifectacheckup_image

Moving now from the V corner to the D corner of the Trifecta Checkup (link), I can't end this post without pointing out that the comparisons shown on the chart don't work. It's the first few months of 2023 versus the full years of the others.

The fix is to plot the same number of months for all years. This can be done in two ways: find the partial year data for the historical years, or project the 2023 data for the full year.

(If the rainy season is already over, then the chart will look exactly the same at the end of 2023 as it is now. Then, I'd just add a note to explain this.)

***

Here is a version of the chart after doing away with unhelpful default settings:


Redo_junkcharts_northplattedry