Making colors and groups come alive

In the May 2024 issue of Significance, there is an enlightening article (link, paywall) about a new measure of inflation being adopted by the U.K. government, known as HCI (Household Costs Indices). It is expected to replace the CPI, which is the de facto standard measure used around the world. In Chapter 7 of Numbersense (link), I discuss the construction of the CPI, which critics have alleged is manipulated by public officials to be over-optimistic.

The HCI looks promising as it addresses several weaknesses of the CPI. First, it accounts for household spending on housing - always a tricky subject, especially for those who own their homes rather than rent. Second, it recognizes that the average inflation number, which represents the average price changes on the average basket of goods purchased by the average person, does not reflect the experience of many. The HCI measures are broken down into demographic subgroups, so it's possible to compare the HCI of retirees vs. non-retirees, for example.

Then comes this multi-colored bar chart:

Sig_hci sm

***

The chart is serviceable: the reader can find the story. For almost all the subgroups listed, the HCI measure comes in higher than the CPI measure (black). For the income deciles, the reader senses that the relationship is not linear, that is to say, inflation does not increase (or decrease) steadily with income. It appears that inflation is highest at both ends of the spectrum, and lowest for those in deciles 6 to 8. The only subgroup for which the CPI overestimates inflation is "private renter," which makes sense since the CPI previously did not account for "owner-occupier housing" costs.

This is a chart with 19 bars, and 19 colors. The colors do not encode any data at all, which is a bit wasteful. We can make the colors come alive by encoding subgroup identity. This is what the grouped bar chart looks like:

Junkcharts_redo_sig_hci_grouped_bars

While this is still messy, this version makes it a bit easier to compare across subgroups. The chart simultaneously plots four different grouping methods: by retired/not, by income decile, by housing situation, and by having children/not. Within each grouping, the segments are mutually exclusive, but between groupings, the segments overlap. For example, the same person can be counted in both Retired and Children, since some retirees have children while others don't.

***

To better display the interactions between groups and subgroups, I prefer using a dot plot.

Junkcharts_redo_sig_hci_dots

This is not a simple dot plot either. It's a grouped dot plot with four levels, one for each grouping method. One can see the distribution of HCI values across the subgroups within each grouping, and also compare the range of values from one group to another.

One side benefit of using the dot plot is that it gets rid of the non-informative space between the values 0 and 20. When using a bar chart, we have to start the bars at zero to avoid distorting the encoding. Not so for a dot plot.

P.S. In the next iteration, I'd consider flipping the axes as that might simplify labeling the subgroups.

Pie charts and self-sufficiency

This graphic, featuring a series of pie charts, shows up in a recent issue of the Princeton alumni magazine.

Pu_aid sm

The story being depicted is clear: the school has been generously increasing the amount of financial aid given to students since 1998. The proportion receiving any aid went from 43% to 67%, so about two out of three students who enrolled in 2023 are getting aid.

The key components of the story are the values in 1998 and 2023, and the growth trend over this period.

***

Here is an exercise worth doing. Think about how you figured out the story components.

Is it this?

Junkcharts_redo_pu_aid_1

Or is it this?

Junkcharts_redo_pu_aid_2

***

This is what I've been calling a "self-sufficiency test" (link). How much work are the visual elements doing in conveying the graph's message to you? If the visual elements aren't doing much, then the designer hasn't taken advantage of the visual medium.


Approaching the Paris Olympics

If you're looking for dataviz about the upcoming Paris Olympics, I recommend this one by the great SCMP team.

Scmp_parisianolympics100years

The impact of this piece starts with picking an engaging topic: how have the disciplines changed over the last 100 years? It capitalizes on the fact that the Games are returning to Paris after a century.

Most of the infographics contain illustrations, with the interactive device of a slider that makes it easier to compare two graphics, one for each year. Without the slider, the graphics would have to be placed top and bottom, or side by side, both of which require a lot of eye movement.

Here are some bits that I particularly enjoyed:

Scmp_olympics_medaldesign

Not surprisingly, the 2024 medal is much larger and heavier than the 1924 one. The old one emphasizes sportsmanship while the new medal frontlines victory.

Scmp_olympics_polevault

Having only seen pole vaulting on modern equipment, I find it fascinating to imagine athletes using rigid wooden poles, and then having to land on their feet in a sawdust pit. Moving the slider to the left reveals the current setup, with fiberglass poles that bend, and landing mattresses. Cheekily, they also tell us where the cameras are placed. Quite a bit of the performance gain (from 3.95 to 6.22 m) can be attributed to equipment improvements.

These illustrations convince me that a lot of the performance gains over time can be attributed to better technologies, better equipment, and rule changes (that accommodate these modern innovations). For example, swimmers now start off a jumping block rather than from the side of the pool.

Scmp_olympics_roadrace

Yes, they also have some statistical graphics. This one about the cycling road race is really nice. It shows that the total distance of the 2024 race is about a third longer than that of the 1924 race. It also shows that the new route features a lot more ups and downs than the original route. The highest point of the 1924 route is higher than that of the new route, though. This is a great example of the conciseness of visual language.

Scmp_olympics_womenfencing

I chuckled at this one. This was the gear worn by women fencers back at the 1924 Olympics.

***

There's a lot more at SCMP. Go take a look!


Expert handling of multiple dimensions of data

I enjoyed reading this Washington Post article about immigration in America. It features a number of graphics. Here's one graphic I particularly like:

Wpost_smallmultiplesmap

This is a set of small multiples - six maps showing the spatial distribution of immigrants from different countries. The maps reveal some interesting patterns: Los Angeles is a big favorite of Guatemalans while Houston is preferred by Hondurans. Venezuelans like Salt Lake City and Denver (where there are also some Colombians and Mexicans). The breadth of the spatial distribution surprises me.

The dataset behind this graphic is complex. It's got country of origin, place of settlement, and time of arrival. The maps above collapsed the time dimension, while drawing attention to the other two dimensions.

***

They have another set of charts that highlight the time dimension while collapsing the place of settlement dimension. Here's one view of it:

Wpost_inkblot_overall

There are various names for this chart form. Streamgraph is one. I like to call it an "inkblot", where the two sides are symmetric around the middle vertical line. The chart shows that migrants in the U.S. immigration court system have grown substantially in number since the end of the Covid-19 pandemic, during which they stopped coming.
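
For readers who haven't met this form: here's a minimal sketch of how an "inkblot" is constructed, using matplotlib and an invented series (the Post's actual data and styling are not reproduced here). Each value is split in half and drawn symmetrically about a central vertical axis.

```python
# Sketch of the "inkblot" (symmetric streamgraph) form, with invented data.
import matplotlib.pyplot as plt
import numpy as np

years = np.arange(2010, 2025)
cases = np.array([3, 4, 5, 6, 7, 8, 10, 12, 15, 18, 9, 14, 30, 45, 60])  # made up

fig, ax = plt.subplots(figsize=(3, 6))
ax.fill_betweenx(years, -cases / 2, cases / 2)  # each value straddles the axis
ax.invert_yaxis()   # earliest year at the top, as in the original
ax.set_xticks([])   # horizontal position carries no absolute meaning
ax.set_ylabel("Year")
plt.show()
```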

I'm not a fan of the inkblot. One reason is visible in the following view, which showcases three Central American countries.

Wpost_inkblot_centralamerica

The main message is clear enough. The volume of immigrants from these three countries has been relatively stable over the last decade, with a bulge in the late 2000s. The recent spurt in migrants has come from other places.

But try figuring out what proportion of total immigration is accounted for by these three countries in, say, 2024. It's a task that is tougher than it should be, and the culprit is that the "other countries" category has been split in half, with the two halves separated.

When should we use bar charts?

Significance_13thfl sm

Two innocent-looking column charts.

These came from an article in Significance magazine (link to paywall) that applies the "difference-in-differences" technique to analyze whether the superstitious act of skipping the number 13 when numbering the floors of tall buildings causes an inflation of condo prices.

The study authors are quite careful in their analysis, recognizing that building managers who decide to relabel the 13th floor as the 14th may differ in other systematic ways from those who don't relabel. They use a matching technique to construct comparison groups. The left-side chart shows one effect of matching buildings, which narrowed the gap in average square footage between the relabeled and non-relabeled groups. (Any such gap suggests potential confounding; in a hypothetical randomized experiment, the average square footage of both groups should be statistically identical.)
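
For readers unfamiliar with the technique, the generic difference-in-differences computation is tiny. Here's a sketch with invented numbers - the study's actual estimates and model are behind the paywall, and its exact setup may differ:

```python
# Difference-in-differences in miniature, with invented $/sqft prices.
price = {
    ("relabeled", "pre"): 300, ("relabeled", "post"): 360,
    ("control",   "pre"): 310, ("control",   "post"): 350,
}

did = (
    (price[("relabeled", "post")] - price[("relabeled", "pre")])  # change in treated group
    - (price[("control", "post")] - price[("control", "pre")])    # minus change in controls
)
print(did)  # 20: the premium attributed to relabeling, under the DiD assumptions
```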

The left-side chart features columns that don't start at zero, thus the visualization exaggerates the differences. The degree of exaggeration here is tame: about 150 got chopped off at the bottom, which is about 10% of the total height. But why do it at all?

***

The right-side chart is even more problematic.

This chart shows the effect of matching on the average age of the buildings (measured using the average construction year). Again, the columns don't start at zero. But for this dataset, zero is a meaningless value. Never make a column chart when the zero level has no meaning!

The story is simple: matching brought the average construction year in the relabeled group closer to that in the non-relabeled group. The construction year is an ordinal variable with integer values. I think a comparison of two histograms would show the message more clearly, and also provide more information than just the two average values.
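
Here's a sketch of the histogram comparison I have in mind, with simulated construction years standing in for the real data:

```python
# Two histograms of construction years (simulated), one per group.
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(13)
relabeled = rng.normal(1985, 10, 200).astype(int)      # invented years
non_relabeled = rng.normal(1992, 10, 200).astype(int)

bins = range(1950, 2021, 5)
fig, axes = plt.subplots(2, 1, sharex=True)
axes[0].hist(relabeled, bins=bins)
axes[0].set_title("Relabeled (no 13th floor)")
axes[1].hist(non_relabeled, bins=bins)
axes[1].set_title("Non-relabeled")
axes[1].set_xlabel("Construction year")
plt.show()
```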


Is this dataviz?

The message in this Visual Capitalist chart is simple - that big tech firms are spending a lot of cash buying back their own stock (which reduces the number of shares in the market, which pushes up their stock price - all without actually having improved their business results).

Visualcapitalist_Magnificent_Seven_Stock-Buybacks_MAIN

But is this data visualization? How does the visual design reflect the data?

The chart form is a half-pie chart, composed of five sectors of increasing radii. In a pie chart, the data are encoded in the sector areas. But when the sectors have different radii, it's possible that the data are found in the angles.

The text along the perimeter, coupled with the bracketing, suggests that the angles convey information - specifically, the amount of shares repurchased as a proportion of outstanding share value (market cap). On inspection, the angles are the same for all five sectors, and each one is 180 degrees divided by five, the number of companies depicted on the chart, so they convey no information, unless the company tally is deemed informative.

Each slice of the pie represents a proportion but these proportions don't add up. So the chart isn't even a half-pie chart. (Speaking of which, should the proportions in a half-pie add up to 100% or 50%?)

What about the sector areas? Since the angles are fixed, the sector areas are directly proportional to the squares of the radii. It took me a bit of time to figure this one out. The radius actually encodes the amount spent by each company on the buyback transaction. Take the ratio of Microsoft to Meta: 20 over 25 is 80%. To obtain a ratio of areas of 80%, the ratio of radii must be roughly 90%; and the radius of Microsoft's sector is indeed about 90% of that of Meta. The ratio between Alphabet and Apple is similar.
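
As a quick check on this reading (the dollar figures below are simply the ones eyeballed from the chart):

```python
# With equal angles, sector area = (angle / 2) * r**2, so the area ratio is the
# squared radius ratio, and the radius ratio is the square root of the area ratio.
import math

buyback_msft, buyback_meta = 20, 25   # eyeballed $bn figures from the chart
area_ratio = buyback_msft / buyback_meta      # 0.80
radius_ratio = math.sqrt(area_ratio)          # ~0.894, i.e. roughly 90%
print(f"area ratio {area_ratio:.2f} -> radius ratio {radius_ratio:.2f}")
```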

The sector areas represent the dollar value of these share buybacks, although these transactions range from 0.6% to 2.9% as a proportion of outstanding share value.

Here is a more straightforward presentation of the data:

Junkcharts_redo_vc_buybacks

I'm not suggesting using this display. The sector areas in the original chart depict the data in the red bars. It's not clear to me how the story is affected by the inclusion of the market value data (gray bars).


The radial is still broken

It's puzzling to me why people like radial charts. Here is a recent set of radial charts that appeared in an article in Significance magazine (link to paywall, currently), analyzing NBA basketball data.

Significance radial nba

This example is not as bad as usual (the color scheme notwithstanding) because the story is quite simple.

The analysts divided the data into three time periods: 1980-94, 1995-2015, 2016-23. The NBA seasons were summarized using a battery of 15 metrics arranged in a circle. In the first period, all but 3 of the metrics sat much above the average level (indicated by the inner circle). In the second period, all 15 metrics dropped below the average, and the third period is somewhat of a mirror image of the first, which is the main message.

***

The puzzle: why prefer this circular arrangement to a rectangular arrangement?

Here is what the same graph looks like in a rectangular arrangement:

Junkcharts_redo_significanceslamdunkstats

One plausible justification for the circular arrangement is that the metrics can be clustered so that nearby metrics are semantically related.

Nevertheless, the same semantics appear in a rectangular arrangement. For example, P3-P3A are three-point scores and attempts while P2-P2A are two-pointers. They are neighbors in this arrangement just as they are in the circular arrangement.

So the real advantage comes when the metrics have some kind of periodicity, and the wraparound point matters. Or when the data are indexed to directions, so that north, east, south, west are meaningful concepts.

If you've found other use cases, feel free to comment below.

***


I can't end this post without returning to the colors. If one could take a negative image of the original chart, one should. Notice that the colors that dominate our attention - the yellow background, and the black lines - contain no data: yellow is the canvas, and black is the gridlines. The data are found in the white polygons.

The other informative element, as one learns from the caption, is the "blue dashed line" that represents the value zero (i.e. the average) on the standardized scale. Because the image was printed small in the magazine I was reading, and the dark blue they selected encroaches on black, I had to squint hard to find the blue line.

Adjust, and adjust some more

This Financial Times report illustrates the reason why we should adjust data.

The story explores trends in economic statistics during 14 years of Conservative government. One of those metrics is so-called council funding (funding for local governments). The graphic is interactive: as the reader scrolls the page, the chart transforms.

The first chart shows the "raw" data.

Ft_councilfunding1

The vertical axis shows the change in funding, expressed as an index relative to the level in 2010. From this line chart, one concludes that council funding decreased from 2010 to around 2016, then grew; by 2020, funding had recovered to the 2010 level, and in recent years it expanded rapidly.

When the reader scrolls down, this chart is replaced by another one:

Ft_councilfunding2

This chart paints a completely different picture. The line dropped from 2010 to 2016 as before. Then, it went flat, and after 2021, it started rising, even though by 2024, the value was still 10 percent below the level in 2010.

What happened? The data journalist has taken the data from the first chart, and adjusted the values for inflation. Inflation was rampant in recent years, thus some of the raw growth has been dampened. In economics, adjusting for inflation is also called expressing values in "real terms". The adjustment is necessary because the same dollar (hmm, pound) is worth less when there is inflation. Therefore, even though on paper, council funding in 2024 is more than 25 percent higher than in 2010, inflation has gobbled up all of that and more, to the point where, in real terms, council funding has fallen by about 10 percent.

This is one material adjustment!

Wait, they have a third chart:

Ft_councilfunding3

It's unfortunate they didn't stabilize the vertical scale. Relative to the middle chart, the lowest point in this third chart is about 5 percent lower, while the value in 2024 is about 10 percent lower.

This means they performed a second adjustment - for population change. It is a simple adjustment: dividing by the population. The numbers look worse presumably because the population has grown during these years. Thus, even if the amount of funding had stayed the same, the money would have to be split amongst more people. The per-capita adjustment makes this point clear.
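
Here is the adjustment arithmetic in miniature, using made-up index values (the FT's actual numbers are not reproduced here):

```python
# Chain the two adjustments: nominal -> real (inflation) -> real per-capita.
nominal = {2010: 100.0, 2024: 126.0}   # funding index, "cash" terms (invented)
prices  = {2010: 100.0, 2024: 140.0}   # price index (invented)
pop     = {2010: 100.0, 2024: 111.0}   # population index (invented)

real = {yr: nominal[yr] / (prices[yr] / 100) for yr in nominal}   # adjust for inflation
real_pc = {yr: real[yr] / (pop[yr] / 100) for yr in nominal}      # then for population

for yr in (2010, 2024):
    print(yr, round(real[yr], 1), round(real_pc[yr], 1))
# 2010: 100.0 and 100.0; 2024: 90.0 (10% below 2010) and 81.1 (about 19% below)
```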

***

The final story is much different from the initial one. Not only was the magnitude of change different but the direction of change reversed.

When it comes to adjustments, remember that all adjustments are subjective. In fact, choosing not to adjust is also subjective. Not adjusting is usually much worse.

Excess delay

The hot topic in New York at the moment is congestion pricing for vehicles entering Manhattan, which is set to debut in June. I found this chart (link) that purports to prove the effectiveness of the similar scheme London introduced a while back.

Transportxtra_2

This is a case of the visual fighting against the data. The visual feels very busy and yet the story lying beneath the data isn't that complex.

This chart was probably designed to accompany some text, which isn't available free from that link, so I haven't seen it. The reader's expectation is to compare the periods before and after the introduction of congestion charges. But even the task of figuring out the pre- and post-periods takes more time than necessary. In particular, "WEZ" is not defined. (I looked this up: it's "Western Extension Zone", so presumably they expanded the area in which charges were applied when travel rates went back to pre-charging levels.)

The one element of the graphic that raises eyebrows is the legend, which screams to be read.

Transportxtra_londoncongestioncharge_legend

Why are there four colors for two items? The legend is not self-sufficient. The reader has to look at the chart itself and realize that purple is the pre-charging period while green (and blue) is the post-charging period (ignoring the distinction between CCZ and WEZ).

While we are solving this puzzle, we also notice that the bottom two colors represent an unchanging quantity - the definition of "no congestion". This no-congestion travel rate is a constant throughout the chart, and yet a lot of ink in two colors has been spilled on it. The real story is in the excess delay, which the congestion charging scheme was supposed to reduce.

The excess on the chart isn't harmless. The excess delay on the roads has been transferred to the chart reader. It actually distracts from the story the analyst wants to tell. Presumably, the story is that excess delays dropped quite a bit after congestion charging was introduced. About four years later, travel rates had crept back to pre-charging levels, whereupon the authorities responded by extending the charging zone to the WEZ (which, as of the time of the chart, apparently wasn't bringing the travel rate down).

Instead of that story, the excess on the chart makes me wonder... the roads are still highly congested, with travel rates far above the level required to achieve no congestion, even after the charging scheme was introduced.

***

I started removing some of the excess from the chart. Here's the first cut:

Junkcharts_redo_transportxtra_londoncongestioncharge

This is better but it is still very busy. One problem is the choice of columns, even though the data are found strictly at the top of each column. (Besides, by chopping off the unchanging sections of the columns, I created a start-not-from-zero problem.) Also, the labeling of the months leaves much to be desired, there are too many grid lines, and so on.

***

Here is the version I landed on. Instead of columns, I use lines. When lines are used, there is no need for month labels since we can assume a reader knows the structure of months within a year.

Junkcharts_redo_transportxtra_londoncongestioncharge-2

A principle I hold dear is not to have legends unless absolutely required. In this case, there is no need for one. I also brought back the notion of an uncongested travel speed, with a single line (and annotation).

***

The chart raises several questions about the underlying analysis. I'd be interested in learning more about "moving car observer surveys". What are those? Are they reliable?

Further, for evidence of efficacy, I think the pre-charging period must be expanded to multiple years. Was 2002 a particularly bad year?

Thirdly, assuming WEZ indicates the expansion of the program to a new geographical area, I'm not sure whether the data prior to its introduction represents the travel rate that includes the WEZ (despite no charging) or excludes it. Arguments can be made for each case so the key from a dataviz perspective is to clarify what was actually done.

P.S. [6-6-24] On the day I posted this, the New York State Governor decided to cancel the congestion pricing scheme that was set to start at the end of June.


Prime visual story-telling

A story from the New York Times about New York City neighborhoods has been making the rounds on my Linkedin feed. The Linkedin post sends me to this interactive data visualization page (link).

Here, you will find a multi-colored map.

Nyt_newyorkneighborhoodsmap

The colors show the extent of named neighborhoods in the city. If you look closely, the boundaries between neighborhoods are blurred, since it's often not clear where one neighborhood ends and another begins. I was expecting this effect when I recognized the names of the authors, who have previously published other maps that obsess over spatial uncertainty.

I clicked on an area for which I knew there might be differing opinions:

Nyt_newyorkneighborhoods_example

There was less controversy than I expected.

***

What was the dataset behind this dataviz project? How did they get such detailed data on every block of the city? Wouldn't they have to interview a lot of residents to compile the data?

I'm quite impressed with what they did. They put up a very simple survey (emphasis on: very simple). This survey is only possible with modern browser technology. It asks the respondent to pinpoint where they live, and name their neighborhood. Then it asks the respondent to draw a polygon around their residence to cover the extent of the named neighborhood. This takes a few simple mouse clicks on a map showing the road network. Finally, the survey collects optional information on alternative names for the neighborhood, etc.

When they process the data, they assign the respondent's neighborhood name to all blocks encircled by the polygon. This creates a lot of data in a few brush strokes, so to speak - a small (and worthwhile) tradeoff, even though the respondent didn't really give an answer for every block.
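
The processing step can be sketched in a few lines. This is only my guess at the mechanics - using the shapely library, and hypothetically keying blocks by their centroids; the Times' actual pipeline isn't described:

```python
# Assign a respondent's neighborhood name to every block whose centroid falls
# inside the polygon they drew. Coordinates and names below are made up.
from shapely.geometry import Point, Polygon

blocks = {"block_A": Point(0.2, 0.3), "block_B": Point(0.9, 0.9)}
response_polygon = Polygon([(0, 0), (0.5, 0), (0.5, 0.5), (0, 0.5)])
neighborhood = "Greenpoint"   # name supplied by this respondent

votes = {}   # block id -> list of neighborhood names voted for it
for block_id, centroid in blocks.items():
    if response_polygon.contains(centroid):
        votes.setdefault(block_id, []).append(neighborhood)

print(votes)   # {'block_A': ['Greenpoint']}
```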

***

Bear with me, I'm getting to the gist of this blog post. The major achievement isn't the page that was linked to above. The best thing the dataviz team did here is the visual story that walks the reader through insights drawn from the dataviz. You can find the visual story here.

What are the components of a hugely impressive visual story?

  • It combines data visualization with old-fashioned archival research. The historical tidbits add a lot of depth to the story.
  • It combines data visualization with old-fashioned reporting. The quotations add context to how people think about neighborhoods - something that cannot be obtained from the arms-length process of conducting an online survey.
  • It highlights curated insights from the underlying data - even walking the reader step by step through the relevant sections of the dataviz that illustrate these insights.

At the end of this story, some fraction of users may be tempted to go back to the interactive dataviz to search for other insights, or obtain answers to their personalized questions. They are much better prepared to do so, having just seen how to use the interactive tool!

***

The part of the visual story I like best is toward the end. Instead of plotting all the data on the map, they practice some restraint, and filter the data. They show the boundaries that have reached at least a certain level of consensus among the respondents.

The following screenshot shows those areas for which at least 90% agree.

Nyt_newyorkneighborhoods_90pc

Pardon the white text box, I wasn't able to remove it.
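
Continuing the hypothetical sketch from earlier in this post, the consensus filter might look like this (the 90% threshold is the one shown in the screenshot; everything else is my guess):

```python
# Keep only blocks where the most popular name wins at least 90% of the votes.
from collections import Counter

def consensus_blocks(votes, threshold=0.9):
    keep = {}
    for block_id, names in votes.items():
        top_name, top_count = Counter(names).most_common(1)[0]
        if top_count / len(names) >= threshold:
            keep[block_id] = top_name
    return keep

# e.g. consensus_blocks({"block_A": ["Greenpoint"] * 9 + ["Williamsburg"]})
#      -> {'block_A': 'Greenpoint'}
```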

***

One last thing...

Every time an analyst touches data, or does something with data, s/he imposes assumptions, and sometimes, these assumptions are so subtle that even the analyst may not have noticed. Frequently, these assumptions are baked into the analytical "models," which is why they may fall through the cracks.

One such assumption in making this map is that every block in the city belongs to at least one named neighborhood. An alternative assumption is that neighborhoods are named only because certain blocks have things in common, and because these naming events occur spontaneously, it's perfectly ok to have blocks that aren't part of any named neighborhood.