Simple presentations

In the previous post, I looked at this chart that shows the distributions of four subgroups found in a dataset:

Davidcurran_originenglishwords

This chart takes quite some effort to decipher, as does another version I featured.

The key messages appear to be: (i) most English words are of Germanic origin, (ii) the most popular English words are even more skewed towards Germanic origin, (iii) words of French origin started showing up around rank 50, those of Latin origin around rank 250.

***

If we are making a graphic for a presentation, we can cut the visual clutter tremendously with - hmmm - a set of pie charts.

Junkcharts_redo_originenglishwords_pies

For those allergic to pies, here's a stacked column chart:

Junkcharts_redo_originenglishwords_columns

Both of these can be thought of as "samples" from the original chart, selected to highlight shifts in the relative proportions.

Davidcurran_originenglishwords_sampled

I also reversed the direction of the horizontal axis as I think the story is better told starting from the whole dataset and homing in on subsets.


P.S. [1/10/2025] A reader who has expertise in this subject also suggested a stacked column chart with reversed axis in a comment, so my recommendation here is confirmed.


Gaining precision by deleting data

The title is a bit of a paradox, isn't it? When we want more precise knowledge about something, we want to gather more data, at greater granularity. But it's not that simple.

Here is the famous "wind map" by the New York Times (link) showing vote margin shifts in the U.S. Presidential elections from 2020 to 2024, at the county level. A red arrow pointing rightward indicates a county in which the voters shifted toward the Republican candidate (Trump). It paints the red wave story loud and clear.

Nyt_votemarginshiftmap

Even though every county is on the map, this map alone doesn't answer all possible questions about vote margin shift. For example, someone might be interested in the vote margin shift in counties with high Hispanic populations. It's impossible to learn this from the above map, even if one has a list of the names of these counties.

The answer is found in the following map, published by NBC News here:

Nbcnews_votemarginshiftmap_hispanics

The story is also very clear. This map can be thought of as the NYT map minus the counties that have negligible Hispanic populations. By deleting all unrelated data, the designer highlights the story about Hispanic voters.

The reader can use the tab up top to see partial shift maps that emphasize different demographic groups. Well done!


Fantastic auto show from the Bloomberg crew

I really enjoyed the charts in this Bloomberg feature on the state of Japanese car manufacturers in the Southeast Asian and Chinese markets (link). This article contains five charts, each of which is both engaging and well-produced.

***

Each chart has a clear message, and the visual display is clearly adapted for purpose.

The simplest chart is the following side-by-side stacked bar chart, showing the trend in share of production of cars:

Bloomberg_japancars_production

Back in 1998, Japan was the top producer, making about 22% of all passenger cars in the world. China did not have much of a car industry. By 2023, China dominated global car production with almost 40% of the share, while Japan had slipped to second place, its share halved.

The designer is thoughtful about each label that is placed on the chart. If something is not required to tell the story, it's not there. Consistently across all five charts, they code Japan in red, and China in a medium gray color. (The coloring for the rest of the world is a bit inconsistent; we'll get to that later.)

Readers might misinterpret the cause of this share shift if this were the only chart presented to them. By itself, the chart suggests that China simply "stole" share from Japan (and other countries). What is true is that China has invested in building a car manufacturing industry. A more subtle factor is that global demand for cars has grown, with most of the growth coming from the Chinese domestic market and other emerging markets - and many of those consumers favor local brands. Said differently, the total market size in 2023 is much larger than that in 1998.

***

Bloomberg also made a chart that shows market share based on demand:

Bloomberg_japancars_marketshares

This is a small-multiples chart consisting of line charts. Each line chart shows market share trends in one of five markets (China and four Southeast Asian nations) from 2019 to 2024. Take the Chinese market for example. The darker gray line says Chinese brands have gained about 20 percentage points of market share since 2019; note that each data series is cumulative over the entire window. Meanwhile, brands from all other countries lost market share, with the Japanese brands (in red) losing the most.

The numbers are relative, which means that the other brands have not necessarily suffered declines in sales. This chart by itself doesn't tell us what happened to sales; all we know is the market share of brands from each country relative to its baseline share in 2019. (A strange period to pick, as it includes the entire pandemic.)
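To make the "cumulative since 2019" reading concrete, here is a minimal sketch in Python of how such a series would be derived from raw market shares. The share values below are invented for illustration; they are not Bloomberg's data.

```python
# Sketch: change in market share relative to a 2019 baseline.
# Share values are invented for illustration only.
shares = {                       # market share (%) of each brand group in one market
    "Chinese brands":  {2019: 38, 2024: 58},
    "Japanese brands": {2019: 24, 2024: 14},
    "Other brands":    {2019: 38, 2024: 28},
}

for group, s in shares.items():
    # The Bloomberg lines plot this cumulative difference for every year,
    # so every series starts at zero in 2019 by construction.
    change = s[2024] - s[2019]
    print(f"{group}: {change:+d} percentage points since 2019")
```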

The designer demonstrates complete awareness of the intended message of the chart. The lines for Chinese and Japanese brands are bolded to highlight their diverging fortunes, not just in China but also in Southeast Asia, to varying extents.

On this chart, the designer splits out US and German brands from the rest of the world. This is an odd decision because the categorization is not replicated in the other four charts. Thus, the light gray color on this chart excludes U.S. and Germany while the same color on the other charts includes them. I think they could have given U.S. and Germany their own colors throughout.

***

The primacy of local brands is hinted at in the following chart showing how individual brands fared in each Southeast Asian market:

Bloomberg_japancars_seasiamarkets


This chart takes the final numbers from the line charts above, that is to say, the change in market share from 2019 to 2024, but now breaks them down by individual brand names. As before, the red bubbles represent Japanese brands, and the gray bubbles Chinese brands. The American and German brands are lumped in with the rest of the world and show up as light gray bubbles.

I'll discuss this chart form in a future post. For now, I want to draw your attention to the Malaysia market, which is the last row of this chart.

What we see there are two dominant brands (Perodua, Proton), both classified under "rest of the world" but both Malaysian. These two brands are the biggest in Malaysia, and they account for two of the three fastest-growing brands there. The other high-growth brand is Chery, a Chinese brand; even though it is growing faster, its market share is still much smaller than the Malaysian brands', and smaller than Toyota's and Honda's. Honda has suffered a lot in this market while Toyota eked out a small gain.

The impression given by this bubble chart is that Chinese brands have not made much of a dent in Malaysia. But that would not be correct if we believe the line chart above. According to the line chart, Chinese brands earned roughly the same increase in market share (about 3 percentage points) as "other" brands.

What about the bubble chart might be throwing us off?

It seems that the Chinese brands started from essentially zero, so their growth is the whole bubble. For the Malaysian brands, the growth is the outer ring of the bubble, and the larger the bubble, the thinner the ring. Our attention is dominated by bubble size, which represents a snapshot of market share in the ending year and provides no information about the growth (which is shown on the horizontal axis).
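A quick way to see why those rings get thin: if a bubble's area encodes market share, a fixed gain in share adds a ring whose thickness shrinks as the bubble grows. Here is a minimal sketch with assumed numbers:

```python
import math

def radius(share):
    # Radius of a bubble whose AREA is proportional to market share.
    return math.sqrt(share / math.pi)

gain = 3                              # assume a brand gains 3 points of market share
for start in (0, 10, 30):             # a newcomer vs. established brands
    ring = radius(start + gain) - radius(start)
    print(f"starting at {start}%: ring thickness {ring:.2f}")
# The newcomer's entire bubble is the gain; for a big incumbent, the same gain
# shows up as a thin outer ring that is easy to overlook.
```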

***

For more discussion of Bloomberg graphics, see here.


the wtf moment

You're reading some article that contains a standard chart. You're busy looking for the author's message on the chart. And then, the wtf moment strikes.

It's the moment when you discover that the chart designer has done something unexpected, something that changes how you should read the chart. It's when you learn that time is running right to left, for example. It's when you realize that negative numbers are displayed up top. It's when you notice that the columns are ordered by descending y-value despite time being on the x-axis.

Tell me about your best wtf moments!

***

My latest wtf moment came while I was reading Rajiv Sethi's blog post on his theory that Kennedy voters crowded out Cheney voters in the 2024 Presidential election (link). Was the strategy to cosy up to Cheney and push out Kennedy wise?

In the post, Rajiv has included this chart from Pew:

Pew_science_confidence

The chart is actually about the public's confidence in scientists. Rajiv summarizes the message as: 'Public confidence in scientists has fallen sharply since the early days of the pandemic, especially among Republicans. There has also been a shift among Democrats, but of a slightly different kind—the proportion with “a great deal” of trust in scientists to act in our best interests rose during the first few months of the pandemic but has since fallen back.'

Pew produced a stacked column chart, with three levels for each demographic segment and month of the survey. The question about confidence in scientists admits three answers: a great deal, a fair amount, and not too much/None at all. [It's also possible that they offered 4 responses, with the bottom two collapsed as one level in the visual display.]

As I scanned the chart to understand the data, I suddenly realized that the three responses were not stacked in the expected order. The top (light blue) section is the middling response of "a fair amount", while the middle (dark blue) section is the "a great deal" answer.

wtf?

***

Looking more closely, this stacked column chart has bells and whistles, indicating that the person who made it expended quite a bit of effort. Whether that effort was worthwhile is for us readers to decide.

By placing "a great deal" right above the horizontal axis, the designer made it easier to see the trend in the proportion responding "a great deal". It's also easy to read the trend of those picking the "negative" response because of how the columns are anchored. In effect, the designer is expressing the opinion that the middle group (which is also the most popular answer) is just background, and readers should not pay much attention to it.

The designer expects readers to care about one other trend, that of the "top 2 box" proportion. This is why the data labels sitting atop the columns are called "NET": each is the sum of those responding "a great deal" or "a fair amount".

***

For me, it's interesting to know whether the prior believers in science who lost faith went down one notch or two. Looking at the Republicans, the proportion saying "a great deal" went down by roughly 10 percentage points while the proportion saying "Not too much/None at all" went up by about 13 percentage points. Thus, the shift in the middle segment wasn't enough to explain all of the jump in negative sentiment; a good portion went from believer to skeptic during the pandemic.

As for Democrats, the proportion of believers also dropped by about 10 percentage points while the proportion saying "a fair amount" went up by almost 10 points, accounting for most of the shift. The proportion of skeptics increased by about 2 points.
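To make that decomposition explicit, here is a small sketch that closes the accounting, using my eyeball estimates of the percentage-point shifts above (not Pew's published figures):

```python
# Eyeball estimates (percentage points) read off the Pew chart, per the text above.
shifts = {
    "Republicans": {"a great deal": -10, "not too much / none": +13},
    "Democrats":   {"a great deal": -10, "not too much / none": +2},
}

for party, s in shifts.items():
    # The three responses must sum to 100%, so "a fair amount" absorbs the rest.
    fair_amount = -(s["a great deal"] + s["not too much / none"])
    print(f"{party}: 'a fair amount' changed by {fair_amount:+d} points")
# Republicans: "a fair amount" falls only ~3 points, so much of the 13-point rise
# in skeptics must have come straight from the believers.
```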

So, for Democrats, I'm imagining a gentle slide in confidence that applies to the whole distribution while for Republicans, if someone loses confidence, it's likely straight to the bottom.

If I'm interested in the trends of all three responses, it's more effective to show the data in a panel like this:

Junkcharts_redo_pew_scientists

***

Remember to leave a comment when you hit your wtf moment next time!


Election coverage prompts good graphics

The election broadcasts in the U.S. are full-day affairs, and they make a great showcase for interactive graphics.

The election setting is optimal as it demands clear graphics that are instantly digestible. Anything else would have left viewers confused or frustrated.

The analytical concepts conveyed during these broadcasts are quite sophisticated, and the talking heads did a wonderful job of explaining them.

***

One such concept is the value of comparing statistics against a benchmark (or even multiple benchmarks). This analytics tactic came in especially handy in the 2024 election because both leading candidates were in some sense incumbents: Kamala was part of the Biden ticket in 2020, while Trump competed in both the 2016 and 2020 elections.

Msnbc_2024_ga_douglas

In the above screenshot, taken around 11 pm on election night, the MSNBC host (who looks like Steve K.) was searching for Kamala votes because it appeared that she was losing the state of Georgia. The question of the moment: were there enough votes left for her to close the gap?

In the graphic (first numeric column), Kamala was winning 65% of the votes, against Trump's 34%, in Douglas county, Georgia. At first sight, one would conclude that Kamala did spectacularly well here.

But, is 65% good enough? One can't answer this question without knowing past results. How did Biden-Harris do in the 2020 election when they won the presidency?

The host touched the interactive screen to reveal the second column of numbers, which allows viewers to directly compare the results. At the time of the screenshot, with 94% of the votes counted, Kamala was performing better in this county than the Biden-Harris ticket did in 2020 (65% vs. 62%). This should help her narrow the gap.

If they had also won 65% of the Douglas county votes in 2020, then we should not expect the vote margin to shrink after counting the remaining 6% of votes. This is why the benchmark from 2020 is crucial. (Of course, there is still the possibility that the remaining votes were severely biased in Kamala's favor, but that would not be enough, as I'll explain further below.)

All stations used this benchmark, but some did not show the two columns side by side, making the comparison harder.

Interesting side note: Douglas county has been rapidly shifting blue over the last two decades. The proportion of whites in the county has dropped from 76% to 35% since 2000 (link).

***

Though Douglas county was encouraging for Kamala supporters, the vote gap in the state of Georgia at the time was over 130,000 in favor of Trump. The 6% in Douglas represented only about 4,500 votes (= 70,000*0.06/0.94). Even if she won all of them (extremely unlikely), it would be far from enough.
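For the record, here is that back-of-envelope calculation as a few lines of Python; the 70,000 counted votes are the approximate figure quoted above:

```python
# Back-of-envelope: with 94% of Douglas county counted (about 70,000 votes),
# how many votes were still outstanding there?
counted = 70_000                       # approximate counted votes, per the text
pct_counted = 0.94

total_expected = counted / pct_counted
outstanding = total_expected * (1 - pct_counted)   # = 70,000 * 0.06 / 0.94
print(round(outstanding))                          # ~4,500 votes
```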

So, the host flipped to Fulton county, the most populous county in Georgia, and also a Democratic stronghold. This is where the battle should be decided.

Msnbc_2024_ga_fulton

Using the same format - an interactive version of a small-multiples arrangement - the host looked at the situation in Fulton. The encouraging sign was that 22% of the votes there had not yet been counted. Moreover, she had captured 73% of the votes already tallied, 8 percentage points better than her performance in Douglas. So we knew that many more votes were coming in from Fulton, with the vast majority being Democratic.

But that wasn't the full story. We have to compare these statistics to the 2020 benchmark, and that comparison revealed a tough road ahead: Biden-Harris had also won 73% of the Fulton votes in 2020. Merely matching the benchmark would not yield the extra votes she needed to close the state-wide gap.

If the 73% share held to the end of the count, she would win about 90,000 additional votes in Fulton while Trump would win about 33,000, so the state-wide gap should narrow by roughly 57,000 votes. Let's round that up and say Fulton halved Trump's lead in Georgia. But where else could she claw back the other half?
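Again as a sketch, with the outstanding Fulton votes assumed to be about 123,000 (a figure consistent with the 22% uncounted and the gains quoted above):

```python
# Projecting the rest of Fulton county, assuming the 73% share holds.
outstanding = 123_000                 # assumed outstanding Fulton votes (~22% uncounted)
harris_share = 0.73

harris_gain = outstanding * harris_share          # ~90,000
trump_gain = outstanding * (1 - harris_share)     # ~33,000
print(round(harris_gain - trump_gain))            # state-wide gap narrows by ~57,000
```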

***

From this point, the analytics can follow one of two paths, which should lead to the same conclusion. The first path runs down the list of Georgia counties. The second path goes up a level to a state-wide analysis, similar to what was done in my post on the book blog (link).

Cnn_2024_ga

Around this time, Georgia had counted 4.8 million votes, with another 12% outstanding. So about 650,000 votes had not yet been assigned to any candidate. The margin was about 135,000 in Trump's favor, which amounted to roughly 20% of the outstanding votes. But that was 20% on top of her base share of 48%, meaning she had to claim about 68% of all remaining votes. (If she got the same 48% share of the outstanding votes as of those already counted, she would lose the state by the same percentage margin, and by an even larger number of votes.)
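Here is the same back-of-envelope expressed as code, using the figures quoted above:

```python
# The state-wide back-of-envelope, with the figures quoted above.
counted = 4_800_000
outstanding = counted * 0.12 / 0.88          # "another 12% outstanding" -> ~650,000
trump_margin = 135_000
harris_base_share = 0.48

extra_share_needed = trump_margin / outstanding      # ~0.21 of the remaining votes
required_share = harris_base_share + extra_share_needed
print(round(outstanding), round(required_share, 2))  # ~650,000 and ~0.68-0.69
```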

The situation was even more hopeless than it sounded because the 48% base value came from the 2024 votes already counted; it therefore already included her better-than-benchmark performance in Douglas county. She would have to do even better to close the gap! In Fulton, which had the biggest potential, she was unable to push her vote share above the 2020 level.

That's why in my book blog (link), I suggested that the networks could have called Georgia (and several other swing states) earlier, if they used "numbersense" rather than mathematical impossibility as the criterion.

***

Before ending, let's praise the unsung heroes - the data analysts who worked behind the scenes to make these interactive graphics possible.

The graphics require data feeds covering a broad scope, from real-time vote tallies to total votes cast, at both the county and state levels. While the focus is on the two leading candidates, any votes going to other candidates have to be tabulated, even if not displayed. The talking heads don't just want raw vote counts; to tell the story of the election, they need some understanding of how many votes are still to be counted, where they are coming from, what the partisan lean of those votes is, how likely the result is to deviate from past elections, and so on.

All those computations must be automated, but manually checked. The graphics software has to be reliable; the hosts can touch any part of the map to reveal details, and it's not possible to predict all of the user interactions in advance.

Most importantly, things go wrong unexpectedly on election night, so many data analysts were on standby, scrambling to fix issues such as a broken data feed from some county in some state.


Using disaggregation in dataviz

This chart appears in a journal article on the use of AI (artificial intelligence) in healthcare (link).

Ai_healthcare_barchart

It's a stacked bar chart in which each bar is subdivided into four segments. The authors are interested in the relative frequency of research using AI by disease type. The chart only shows the top 10 disease types.

What is unusual is that the subdivisions are years. The authors reviewed four years of journal articles; the disease types are ranked by their aggregate four-year counts, while each total is disaggregated by color so readers can also see the annual counts.
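As a sketch of the data manipulation behind such a chart (the dataset and column names are hypothetical), the disaggregation is just a pivot of article counts by disease type and year, with rows ordered by the four-year totals:

```python
import pandas as pd

# Hypothetical article-level data: one row per journal article using AI.
articles = pd.DataFrame({
    "disease": ["cancer", "cancer", "stroke", "cancer", "stroke", "diabetes"],
    "year":    [2019,     2020,     2019,     2021,     2021,     2020],
})

# Count articles by disease and year, then order diseases by their four-year totals.
counts = articles.groupby(["disease", "year"]).size().unstack(fill_value=0)
counts = counts.loc[counts.sum(axis=1).sort_values(ascending=False).index]

# counts.head(10) supplies the top-10 stacked bars: each row's cells are the
# yearly (disaggregated) segments, and the row total is the aggregate bar length.
print(counts)
```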

***

A slight rearrangement yields the following:

Junkcharts_redo_aihealthcare

Most readers will only care about the left chart showing the total counts. More invested readers may consider the colored charts that show annual totals. These are arranged so that the annual counts are easily read and compared.

***

One annoying aspect of this type of presentation is that the top 10 types in aggregate will almost never be the top 10 types in each individual year. In some years, I expect the 10 types shown do not include all of that year's top 10.


Pie charts and self-sufficiency

This graphic, a series of pie charts, shows up in a recent issue of the Princeton alumni magazine.

Pu_aid sm

The story being depicted is clear: the school has been generously increasing the amount of financial aid given to students since 1998. The proportion receiving any aid went from 43% to 67%, so about two out of three students who enrolled in 2023 are getting aid.

The key components of the story are the values in 1998 and 2023, and the growth trend over this period.

***

Here is an exercise worth doing. Think about how you figured out the story components.

Is it this?

Junkcharts_redo_pu_aid_1

Or is it this?

Junkcharts_redo_pu_aid_2

***

This is what I've been calling a "self-sufficiency test" (link). How much work are the visual elements doing in conveying the graph's message to you? If the visual elements aren't doing much, then the designer hasn't taken advantage of the visual medium.


Expert handling of multiple dimensions of data

I enjoyed reading this Washington Post article about immigration in America. It features a number of graphics. Here's one graphic I particularly like:

Wpost_smallmultiplesmap

This is a small multiples of six maps, showing the spatial distribution of immigrants from different countries. The maps reveal some interesting patterns: Los Angeles is a big favorite of Guatemalans while Houston is preferred by Hondurans. Venezuelans like Salt Lake City and Denver (where there are also some Colombians and Mexicans). The breadth of the spatial distribution surprises me.

The dataset behind this graphic is complex. It's got country of origin, place of settlement, and time of arrival. The maps above collapsed the time dimension, while drawing attention to the other two dimensions.

***

They have another set of charts that highlight the time dimension while collapsing the place of settlement dimension. Here's one view of it:

Wpost_inkblot_overall

There are various names for this chart form; stream graph is one. I like to call it an "inkblot", where the two sides are symmetric around the middle vertical line. The chart shows that the number of migrants in the U.S. immigration court system has grown substantially since the end of the Covid-19 pandemic, during which they stopped coming.

I'm not a fan of the inkblot. One reason is visible in the following view, which showcases three Central American countries.

Wpost_inkblot_centralamerica

The main message is clear enough. The volume of immigrants from these three countries has been relatively stable over the last decade, with a bulge in the late 2000s. The recent spurt in migrants has come from other places.

But try figuring out what proportion of total immigration is accounted for by these three countries in, say, 2024. It's a tougher task than it should be, and the culprit is that the "other countries" category has been split in half, with the two halves separated.


The radial is still broken

It's puzzling to me why people like radial charts. Here is a recent set of radial charts that appears in an article in Significance magazine (link to paywall, currently), analyzing NBA basketball data.

Significance radial nba

This example is not as bad as usual (the color scheme notwithstanding) because the story is quite simple.

The analysts divided the data into three time periods: 1980-94, 1995-2015, and 2016-23. The NBA seasons were summarized using a battery of 15 metrics arranged in a circle. In the first period, all but three of the metrics sat well above the average level (indicated by the inner circle). In the second period, all 15 metrics dropped below the average, and the third period is roughly a mirror image of the first - which is the main message.
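For readers curious about the data prep implied by the chart, here is a minimal sketch: each metric is standardized across all seasons (so zero marks the long-run average indicated by the inner circle), then averaged within each period. The metric names and values below are invented:

```python
import numpy as np

# Invented data: one value per season for each metric, 1980-2023.
rng = np.random.default_rng(0)
seasons = np.arange(1980, 2024)
metrics = {"P3": rng.normal(size=seasons.size), "PTS": rng.normal(size=seasons.size)}

periods = {"1980-94": (1980, 1994), "1995-2015": (1995, 2015), "2016-23": (2016, 2023)}
for name, values in metrics.items():
    z = (values - values.mean()) / values.std()        # zero = all-time average
    for label, (lo, hi) in periods.items():
        in_period = (seasons >= lo) & (seasons <= hi)
        print(name, label, round(z[in_period].mean(), 2))
```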

***

The puzzle: why prefer this circular arrangement to a rectangular arrangement?

Here is what the same graph looks like in a rectangular arrangement:

Junkcharts_redo_significanceslamdunkstats

One plausible justification for the circular arrangement is that the metrics can be clustered so that nearby metrics are semantically related.

Nevertheless, the same semantics survive a rectangular arrangement. For example, P3-P3A are three-point scores and attempts while P2-P2A are two-pointers - the shift between them is a key trend. These metrics are neighbors in the rectangular arrangement just as they are in the circular one.

So the real advantage comes when the metrics have some kind of periodicity and the wraparound point matters, or when the data are indexed to directions, so that north, east, south, and west are meaningful concepts.

If you've found other use cases, feel free to comment below.

***


I can't end this post without returning to the colors. If one can take a negative image of the original chart, one should. Notice that the colors that dominate our attention - the yellow background and the black lines - carry no data: yellow is the canvas, and black is the gridlines. The data are found in the white polygons.

The other informative element, as one learns from the caption, is the "blue dashed line" that represents the value zero (i.e. the average) on the standardized scale. Because the image was printed small in the magazine I was reading, and the designers selected a dark blue verging on black, I had to squint hard to find the blue line.


Aligning V and Q by way of D

In the Trifecta Checkup (link), there is a green arrow between the Q (question) and V (visual) corners, indicating that they should align. This post illustrates what I mean by that.

I saw the following chart in a Washington Post article comparing dairy milk and plant-based "milks".

Vitamins

The article contains a whole series of charts. The one shown here focuses on vitamins.

The red color screams at the reader. At first, it appears to suggest that dairy milk is a standout in all four vitamin categories. But that's not what the data say.

Let's take a look at the chart form: it's a grid of four plots, each containing one square for each of four types of "milk". The data are encoded in the areas of the squares. The red and green colors represent category labels and do not reflect data values.

Whenever we make bubble plots (the closest relative of these square plots), we have to solve a scale problem. What is the relationship between the scales of the four plots?

I noticed the largest square is the same size across all four plots. So, the size of each square is made relative to the maximum value in each plot, which is assigned a fixed size. In effect, the data encoding scheme is that the areas of the squares show the index values relative to the group maximum of each vitamin category. So, soy milk has 72% as much potassium as dairy milk while oat and almond milks have roughly 45% as much as dairy.

The same encoding scheme is also applied to riboflavin. Oat milk has the most riboflavin, so its square is the largest. Soy milk has 80% as much as oat, while dairy has 60% as much.
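Here is a minimal sketch of that encoding; the nutrient amounts are invented to match the ratios quoted above, and the area (not the side length) of each square carries the index:

```python
import math

# Invented nutrient amounts, chosen to match the ratios quoted above.
potassium = {"dairy": 366, "soy": 264, "oat": 165, "almond": 160}   # assumed mg per serving

group_max = max(potassium.values())
for milk, amount in potassium.items():
    index = amount / group_max            # 1.0 for the largest square (dairy here)
    side = math.sqrt(index)               # area carries the index, so side = sqrt(index)
    print(f"{milk}: index {index:.2f}, square side {side:.2f}")
```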

***

_trifectacheckup_image

Let's step back to the Trifecta Checkup (link). What's the question being asked in this chart? We're interested in the amount of vitamins found in plant-based milk relative to dairy milk. We're less interested in which type of "milk" has the highest amount of a particular vitamin.

Thus, I'd prefer the indexing to be tied to the amount found in dairy milk, rather than the maximum value in each category. The following set of column charts shows this encoding:

Junkcharts_redo_msn_dairyplantmilks_2

I changed the color coding so that blue columns represent amounts higher than dairy while yellow columns represent lower amounts.

From the column chart, we find that plant-based "milks" contain significantly less potassium and phosphorus than dairy milk, while oat and soy "milks" contain more riboflavin than dairy. Almond "milk" has negligible amounts of riboflavin and phosphorus. There is virtually no difference among the four "milk" types in providing vitamin D.
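For comparison, here is the same kind of sketch with the index tied to dairy instead of the group maximum, again with invented amounts consistent with the ratios described earlier:

```python
# Same idea, but indexed to dairy (the benchmark in the question).
# Invented amounts, consistent with the ratios described earlier.
riboflavin = {"dairy": 0.30, "soy": 0.40, "oat": 0.50, "almond": 0.02}   # assumed mg per serving

dairy = riboflavin["dairy"]
for milk, amount in riboflavin.items():
    index = amount / dairy                # >1 means more than dairy, <1 means less
    print(f"{milk}: {index:.2f} x dairy")
```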

***

In the above redo, I strengthened the alignment of the Q and V corners. This is accomplished by making a stop at the D corner: I changed how the raw data are transformed into index values.

Just for comparison, if I only change the indexing strategy but retain the square plot chart form, the revised chart looks like this:

Junkcharts_redo_msn_dairyplantmilks_1

The four squares showing dairy on this version have the same size. Readers can evaluate the relative sizes of the other "milk" types.