Visualizing extremes

The New York Times published this chart to illustrate the extreme ocean heat driving the formation of Hurricane Milton (link):

Nyt_oceanheatmilton

The chart expertly builds up layers of detail.

The red line tracks the current year's data on ocean heat content up to yesterday.

Meaning is added through the gray line that shows the average trajectory of the past decade. With this average line, we can readily see how different the current year's data are from the past. In particular, the current season divides roughly into three parts: from May to mid-June, and again from August to October, the ocean heat this year was quite a bit higher than the 10-year average, whereas from mid-June to August, it was just above average.

Even more meaning is added when all the individual trajectories from the last decade are shown (in light gray lines). With this addition, we can readily see how extreme this year's data are. For the post-August period, it's clear that the Gulf of Mexico is the hottest it's been in the past decade, though this extreme is not too far from the previous extreme. The extreme in late May-early June, on the other hand, is rather more scary.



Aligning the visual and the message to hot things up

The headline of this NBC News chart (link) tells readers that Phoenix (Arizona) has been very, very hot this year: there have been over 120 days on which the average temperature exceeded 100F (38 C).

Nbcnews_phoenix_tmax

It's not obvious how extreme this situation is. To help readers, it would be useful to add some reference points.

A couple of possibilities come to mind:

First, how many days are depicted in the chart? Since there is one cell for each day of the year, and the day of week is plotted down the vertical axis, we just need to count the number of columns. There are 38 columns, but the first column has one missing cell while the last column has only 3 cells. Thus, the number of days depicted is (36*7)+6+3 = 261. So, the average temperature in Phoenix exceeded 100F on about 46% of the days of the year thus far.
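As a sanity check, here's that arithmetic in a few lines of Python, with the column and cell counts read off the chart:

```python
# Counting the days shown in the heatmap (counts read off the chart)
full_columns = 36                # 38 columns, minus a partial first and last column
days = full_columns * 7 + 6 + 3  # first column has 6 cells, last column has 3
hot_days = 120                   # days above 100F, per the headline

print(days)                      # 261
print(round(hot_days / days, 2)) # 0.46, i.e. about 46% of days so far
```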

That sounds like a high number. For a better reference point, we'd also like to know the historical average. Is Phoenix just a very hot place? Is 2024 hotter than usual?

***

Let's walk through how one reads the Phoenix "heatmap".

We already figured out that each column represents a week of the year, and each row shows a cross-section of a given day of week throughout the year.

The first column starts on a Monday because the first day of 2024 falls on a Monday. The last column ends on a Tuesday, which corresponds to Sept 17, 2024, the last day of data when this chart was created.

The columns are grouped into months, although the division is complicated by the fact that the number of days in a month (except February in a non-leap year) is never divisible by seven. The designer subtly inserted a thicker border between months. This feature allows readers to comment on the average temperature in a given month. It also lets readers see quickly that we are two weeks and three days into September.

The color legend explains that temperature readings range from yellow (lower) to red (higher). The range of average daily temperatures during 2024 was 54-118F (12-48C). The color scale is sequential.

Nbcnews_phoenix_colorlegend

Given that 100F is used as a threshold to define "hot days," it makes sense to accentuate this in the visual presentation. For example:

Junkcharts_redo_nbcnewsphoenixmaxtemp

Here, all days with maximum temperature at 100F or above have a red hue.
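For those who want to try this at home, here's a minimal matplotlib sketch of the idea, using made-up temperatures. The key is the BoundaryNorm, which assigns yellow shades below the 100F threshold and red shades at or above it:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap, BoundaryNorm

# Hypothetical daily max temperatures (F), for illustration only
rng = np.random.default_rng(0)
temps = rng.normal(95, 12, size=(7, 38))   # 7 days of week x 38 weeks

# Yellow shades below 100F, red shades at or above it
colors = ["#ffffcc", "#ffeda0", "#fed976", "#fc4e2a", "#e31a1c", "#b10026"]
bounds = [54, 70, 85, 100, 106, 112, 118]  # 100F is an explicit break point
cmap = ListedColormap(colors)
norm = BoundaryNorm(bounds, cmap.N)

plt.pcolormesh(temps, cmap=cmap, norm=norm)
plt.colorbar(label="Max temperature (F)")
plt.show()
```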


Approaching the Paris Olympics

If you're looking for dataviz about the upcoming Paris Olympics, I recommend this one by the great SCMP team.

Scmp_parisianolympics100years

The impact of this piece starts with picking an engaging topic: how have the disciplines changed over the last 100 years? It capitalizes on the fact that the Games are returning to Paris after a century.

Most of the infographics contain illustrations, with the interactive device of a slider that makes it easier to compare two graphics, one for each year. Without the slider, the graphics have to be placed top and bottom, or side by side, both of which require a lot of eye movements.

Here are some bits that I particularly enjoyed:

Scmp_olympics_medaldesign

Not surprisingly, the 2024 medal is much larger and heavier than the 1924 one. The old one emphasizes sportsmanship while the new medal frontlines victory.

Scmp_olympics_polevault

Having only seen pole vaulting on modern equipment, I find it fascinating to imagine athletes using rigid wooden poles, and then having to land on their feet in the sawdust pit. Moving the slider to the left reveals the current setup, with fiberglass poles that bend, and landing mattresses. Cheekily, they also tell us where the cameras are placed. Quite a bit of the performance gain (from 3.95 to 6.22 m) can be attributed to equipment improvements.

These illustrations convince me that a lot of the performance gains over time can be attributed to better technologies, better equipment, and rule changes (that accommodate these modern innovations). For example, swimmers starting off a jumping block versus from the side of the pool.

Scmp_olympics_roadrace

Yes, and they have some statistical graphics. This one about the cycling road race is really nice. It shows that the total distance of the 2024 race is about 1/3 longer than the 1924 race. It also shows that the new route features a lot more ups and downs than the original route. The highest point of the 1924 route is higher than the new route, though. This is a great example of the conciseness of visual language.

Scmp_olympics_womenfencing

I chuckled at this one. This was the gear worn by women fencers back at the 1924 Olympics.

***

There's a lot more at SCMP (link). Go take a look!


Expert handling of multiple dimensions of data

I enjoyed reading this Washington Post article about immigration in America. It features a number of graphics. Here's one graphic I particularly like:

Wpost_smallmultiplesmap

This is a small multiples of six maps, showing the spatial distribution of immigrants from different countries. The maps reveal some interesting patterns: Los Angeles is a big favorite of Guatemalans while Houston is preferred by Hondurans. Venezuelans like Salt Lake City and Denver (where there are also some Colombians and Mexicans). The breadth of the spatial distribution surprises me.

The dataset behind this graphic is complex. It's got country of origin, place of settlement, and time of arrival. The maps above collapsed the time dimension, while drawing attention to the other two dimensions.

***

They have another set of charts that highlight the time dimension while collapsing the place of settlement dimension. Here's one view of it:

Wpost_inkblot_overall

There are various names for this chart form. Streamgraph is one. I like to call it "inkblot", since the two sides are symmetric around the middle vertical line. The chart shows that "migrants in the U.S. immigration court" system have grown substantially since the end of the Covid-19 pandemic, during which they stopped coming.

I'm not a fan of the inkblot. One reason is visible in the following view, which showcases three Central American countries.

Wpost_inkblot_centralamerica

The main message is clear enough. The volume of immigrants from these three countries has been relatively stable over the last decade, with a bulge in the late 2000s. The recent spurt in migrants has come from other places.

But try figuring out what proportion of total immigration is accounted for by these three countries, say, in 2024. It's a task that is tougher than it should be, and the culprit is that the "other countries" category has been split in half, with the two halves separated.



Adjust, and adjust some more

This Financial Times report illustrates the reason why we should adjust data.

The story explores trends in economic statistics during 14 years of Conservative government. One of those metrics is funding for councils (local governments). The graphic is interactive: as the reader scrolls the page, the chart transforms.

The first chart shows the "raw" data.

Ft_councilfunding1

The vertical axis shows the change in funding, expressed as an index relative to the level in 2010. From this line chart, one concludes that council funding decreased from 2010 to around 2016, then grew; by 2020, funding had recovered to the 2010 level, and it expanded rapidly in recent years.

When the reader scrolls down, this chart is replaced by another one:

Ft_councilfunding2

This chart paints a completely different picture. The line dropped from 2010 to 2016 as before. Then it went flat, and after 2021, it started rising, even though by 2024, the value was still 10 percent below the level in 2010.

What happened? The data journalist has taken the data from the first chart and adjusted the values for inflation. Inflation was rampant in recent years, so some of the raw growth has been dampened. In economics, adjusting for inflation is also called expressing values in "real terms". The adjustment is necessary because the same dollar (hmm, pound) is worth less when there is inflation. Therefore, even though on paper, council funding in 2024 is more than 25 percent higher than in 2010, inflation has gobbled up all of that and more, to the point where, in real terms, council funding has fallen by about 10 percent.
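Here's a minimal sketch of the adjustment, with made-up index numbers standing in for the FT's data:

```python
import pandas as pd

# Made-up numbers: `nominal` is the funding index from the first chart
# (2010 = 100); `cpi` is a price index over the same years (2010 = 100)
years = [2010, 2016, 2020, 2024]
nominal = pd.Series([100, 80, 100, 126], index=years)
cpi = pd.Series([100, 110, 118, 140], index=years)

# Express funding in "real terms": deflate by prices, re-base to 2010 = 100
real = nominal / cpi * 100
print(real.round(1))  # 2024 lands around 90, despite 26% nominal growth
```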

This is one material adjustment!

Wait, they have a third chart:

Ft_councilfunding3

It's unfortunate they didn't stabilize the vertical scale. Relative to the middle chart, the lowest point in this third chart is about 5 percent lower, while the value in 2024 is about 10 percent lower.

This means they performed a second adjustment - for population change. It is a simple adjustment of dividing by the population. The numbers look worse probably because the population has grown during these years. Thus, even if the amount of funding stayed the same, the money would have to be split amongst more people. The per-capita adjustment makes this point clear.
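Continuing the sketch from above, the per-capita adjustment is one more division (population figures are again invented):

```python
import pandas as pd

# `real` carries over the inflation-adjusted index from the previous sketch;
# `population` is indexed to 2010 = 100 (growth figures are invented)
years = [2010, 2016, 2020, 2024]
real = pd.Series([100.0, 72.7, 84.7, 90.0], index=years)
population = pd.Series([100, 104, 107, 111], index=years)

real_per_capita = real / population * 100
print(real_per_capita.round(1))  # lower still: the same pot is split among more people
```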

***

The final story is very different from the initial one. Not only was the magnitude of change different, but the direction of change was reversed.

When it comes to adjustments, remember that all adjustments are subjective. In fact, choosing not to adjust is also subjective. Not adjusting is usually much worse.



Do you want a taste of the new hurricane cone?

The National Hurricane Center (NHC) put out a press release (link to PDF) to announce upcoming changes (in August 2024) to their "hurricane cone" map. This news was picked up by the Miami Herald (link).

New_hurricane_map_2024

The above example is what the map looks like. (The data are probably fake since the new map is not yet implemented.)

The cone map has been a focus of research because experts like Alberto Cairo have been highly critical of its potential to mislead. Unfortunately, the more attention paid to it, the more complicated the map has become.

The latest version of this map comprises three layers.

The bottom layer is the so-called "cone". This is the white patch labeled below as the "potential track area (day 1-5)". Researchers dislike this element because they say readers tend to misinterpret the cone as predicting which areas would be damaged by hurricane winds, when the cone is intended to depict the uncertainty about the path of the hurricane. Prior criticism has led the NHC to add the text at the top of the chart, saying "The cone contains the probable path of the storm center but does not show the size of the storm. Hazardous conditions can occur outside of the cone."

The middle layer consists of the multi-colored bits. Two of these show the areas for which the NHC has issued "watches" and "warnings". All of these color categories represent wind speeds at different times: watches and warnings are forecasts, while the other colors indicate "current" wind speeds.

The top layer consists of black dots. These provide a single forecast of the most likely position of the storm, with the S, H, M labels indicating the most likely range of wind speeds at forecast times.

***

Let's compare the new cone map to a real hurricane map from 2020. (This older map came from a prior piece also by NHC.)

Old_hurricane_map_2020

Can we spot the differences?

To my surprise, the differences were minor, in spite of the pre-announced changes.

The first difference is a simplification. Instead of dividing the white cone (the bottom layer) into two patches -- a white patch for days 1-3, and a dotted transparent patch for days 4-5 -- the new map aggregates the two periods. Visually, simplifying makes the map less busy, but it loses the implicit acknowledgment found in the old map that forecasts further out are less reliable.

The second point of departure is the addition of "inland" warnings and watches. Notice how the red and blue areas on the old map hugged the coastline while the red and blue areas on the new map reach inland.

Both changes push the bottom layer, i.e. the cone, deeper into the background. It's like a shrinkflation ice cream cone with a tiny bit of ice cream stuffed deep in its base.

***

How might one improve the cone map? I'd start by dismantling the layers. The three layers present answers to different problems, albeit connected ones.

Let's begin with the hurricane forecasting problem. We have the current location of the storm, and current measurements of wind speeds around its center. As a first requirement, a forecasting model predicts the path of the storm in the near future. At any time, the storm isn't a point in space but a "cloud" around a center. The path of the storm traces how that cloud will move, including any expansion or contraction of its radius.

That's saying a lot. To start with, a forecasting model issues the predicted average path -- the expected path of the storm's center. This path is (incompletely) indicated by the black dots in the top layer of the cone map. These dots offer only a sampled view of the average path.

Not surprisingly, there is quite a bit of uncertainty about the future path of any storm. Many models simulate future worlds, generating many predictions of the average paths. The envelope of the most probable set of paths is the "cone". The expanding width of the cone over time reflects the higher uncertainty of our predictions further into the future. Confusingly, this cone expansion does not depict spatial expansion of either the storm's size or the potential areas that may suffer the greatest damage. Both of those tend to shrink as hurricanes move inland.
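To make the idea concrete, here's a toy simulation (all numbers invented) in which the "cone" is simply the envelope containing two-thirds of the simulated storm tracks at each forecast hour:

```python
import numpy as np
import matplotlib.pyplot as plt

# Simulate many possible storm tracks as drifting random walks, then take
# quantiles of the simulated positions at each forecast hour
rng = np.random.default_rng(1)
hours = np.arange(0, 121, 12)        # forecast times, days 1-5
n_sims = 1000
drift = 0.5                          # average movement per step (schematic units)
tracks = drift * np.arange(len(hours)) + \
         np.cumsum(rng.normal(0, 0.4, size=(n_sims, len(hours))), axis=1)

center = tracks.mean(axis=0)                        # the "average path"
lo, hi = np.quantile(tracks, [0.17, 0.83], axis=0)  # ~2/3 probability envelope

plt.fill_between(hours, lo, hi, alpha=0.3, label="cone (2/3 of simulated paths)")
plt.plot(hours, center, "k.-", label="expected center position")
plt.xlabel("Forecast hour"); plt.ylabel("Position (schematic)")
plt.legend(); plt.show()
```

Note the cone widens with the forecast horizon purely because the uncertainty accumulates, not because the storm itself grows.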

Nevertheless, the cone and the black dots are connected. The path drawn out by the black dots should be the average path of the center of the storm.

The forecasting model also generates estimates of wind speeds. Those are given as labels inside the black dots. The cone itself offers no information about wind speeds. The map portrays the uncertainty of the position of the storm's center but omits the uncertainty of the projected wind speeds.

The middle layer of colored patches also informs readers about model projections - but in an interpreted manner. The colors portray hurricane warnings and watches for specific areas, which are based on projected wind speeds from the same forecasting models described above. The colors represent the NHC's interpretation of these model outputs. Each warning or watch simultaneously uses information on location, wind speed and time. The uncertainty of the projected values is suppressed.

I think it's better to use two focused maps instead of having one that captures a bit of this and a bit of that.

One map can present the interpreted data, and show the areas that have current warnings and watches. This map is about projected wind strength in the next 1-3 days. It isn't about the center of the storm, or its projected path. Uncertainty can be added by varying the tint of the colors, reflecting the confidence of the model's prediction.

Another map can show the projected path of the center of the storm, plus the cone of uncertainty around that expected path. I'd like to bring more attention to the times of forecasting, perhaps shading the cone day by day, if the underlying model has this level of precision.

***

Back in 2019, I wrote a pretty long post about these cone maps. Well worth revisiting today!


Neither the forest nor the trees

On the NYT's Twitter feed, they featured an article titled "These Seven Tech Stocks are Driving the Market". The first sentence of the article reads: "The S&P 500 is at an all-time high, and investors have just a handful of stocks to thank for it."

Without having seen any data, I'd surmise from that line that (a) the S&P 500 index has gone up recently, and (b) most if not all of the gain in the index can be attributed to gains in the tech stocks mentioned in the headline. (For purists, a handful is five, not seven.)

The chart accompanying the tweet is a treemap:

Nyt_magnificentseven

The treemap is possibly the most overhyped chart type of the modern era. Its use here is tangential to the story of surging market value. That's because the treemap presents a snapshot of the composition of the index, but contains nothing about the trend (change over time) of the average index value or of its components.

***

Even in representing composition, the treemap is inferior to, gasp, a pie chart. Of course, we can only use a pie chart for small numbers of components. The following illustration takes the data from the NYT chart on the Magnificent Seven tech stocks, and compares a treemap versus a pie chart side by side:

Junkcharts_redo_nyt_magnificent7

The reason why the treemap is worse is that both the width and the height of the boxes are changing, while only the angle of the pie slices is varying. (I'm not saying use a pie chart, just that the treemap is worse.)

There is a reason why the designer appended data labels to each of the seven boxes. The effect of not having those labels is readily felt when our eyes reach the next set of stocks – which carry company names but not their market values. What is the market value of Berkshire Hathaway?

Even more so, what proportion of the total is the market value of Berkshire Hathaway? Indeed, if the designer did not write down 29%, it would take a bit of work to figure out the aggregate value of yellow boxes relative to the entire box!

This design successfully draws our attention to the structural importance of various components of the whole. There are three layers - the yellow boxes (Magnificent Seven), the gray boxes with company names, and the other gray boxes. I also like how they positioned the text on the right column.

***

Going inside the NYT article itself, we find two line charts that convey the story as told.

Here's the first one:

Nyt_magnificent7_linechart1

They are comparing the most recent stock prices with those from October 12, 2022, which is identified as the previous "low". (I'm actually confused by how the most recent "low" is defined, but that's a different subject.)

This chart carries a lot of good information, even though it does not plot "all the data", as in each of the 500 S&P components individually. Over the period under analysis, the overall index value has gone up about 35% while the Magnificent Seven's aggregate value has skyrocketed by 65%. The latter accounted for 30% of the total value at the most recent time point.

If we set the S&P 500 index value in 2024 as 100, then the M7 value in 2024 is 30. After unwinding the 65% growth, the M7 value in October 2022 was 18; the S&P 500 in October 2022 was 74. Thus, the weight of M7 was 24% (18/74) in October 2022, compared to 30% now. Consequently, the weight of the other 493 stocks declined from 76% to 70%.
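Here's that arithmetic as a few lines of Python, with the growth rates and index values read off the NYT charts:

```python
# Index values and growth rates read off the NYT charts
sp500_now, m7_now = 100, 30            # 2024 values, S&P 500 set to 100
sp500_growth, m7_growth = 0.35, 0.65   # growth since the October 2022 low

m7_then = m7_now / (1 + m7_growth)           # ~18.2
sp500_then = sp500_now / (1 + sp500_growth)  # ~74.1
print(round(m7_then / sp500_then, 2))        # 0.25: M7 was ~24-25% of the index then
print(m7_now / sp500_now)                    # 0.30: versus 30% now
```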

This isn't even the full story because most of the action within the M7 is in Nvidia, the stock most tightly associated with the current AI hype, as shown in the other line chart.

Nyt_magnificent7_linechart2

Nvidia's value jumped by 430% in that time window. From the treemap, the total current value of M7 is $12.3 trillion while Nvidia's value is $1.4 trillion, thus Nvidia is 11.4% of M7 currently. Since M7 is 29% of the total S&P 500, Nvidia is 11.4% * 29% = 3.3% of the S&P. Thus, in 2024, against 100 for the S&P, Nvidia's share is 3.3. After unwinding the 430% growth, Nvidia's share in October 2022 was 0.6, about 0.8% of 74. Its weight roughly quadrupled during this period.
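And the Nvidia arithmetic, with the same caveats about reading values off the charts:

```python
# Values read off the treemap and the second line chart
m7_total, nvda = 12.3, 1.4   # market values, $ trillions
m7_share = 0.29              # M7 as a share of the S&P 500

nvda_share_now = nvda / m7_total * m7_share         # ~0.033 of the index
nvda_index_then = nvda_share_now * 100 / (1 + 4.3)  # unwind 430% growth -> ~0.6
sp500_then = 100 / 1.35                             # ~74

print(round(nvda_share_now, 3))                     # 0.033, i.e. ~3.3% now
print(round(nvda_index_then / sp500_then, 3))       # 0.008, i.e. ~0.8% back then
```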


The choice to encode data using colors

NBC News published the following heatmap that shows inflation by product category in the last year or so:

Nbcnews_inflationtracker

The general story might be that inflation was rampant in airfare and electricity prices about a year ago but these prices have moderated recently, especially in airfare. Gas prices appear to have inflated far less than overall inflation during these months.

***

Now, if you're someone who cares about the magnitude of differences, not just the direction, then revisit the above statements, and you'll feel a sense of inadequacy.

When we choose to encode data in colors, we're giving up on showing magnitudes with any precision. The color scale shown up top suggests that the continuous number line is being displayed, but it really isn't.

The largest value of the chart is found on the left side of the airfare row:

Nbcnews_inflationtracker_highest

The value is about 36%, which strangely enough is far larger than the maximum value shown in the legend above. Even if those values aligned, it would still be impossible to guess from the legend what values the different colors and shades in the cells map to.

***

The following small-multiples chart shows the underlying values more precisely:

Redo_junkcharts_nbcnewsinflation

I have transformed the data differently. In these line charts, the data are indexed to the first month (100) so each chart shows the cumulative change in prices from that month to the current month, for each category, compared to the overall.
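For those who want to replicate this, the indexing is a one-liner in pandas; the price levels below are invented placeholders:

```python
import pandas as pd

# Invented price levels for three categories, September 2022 onward
prices = pd.DataFrame({
    "airfare": [310, 300, 285, 272, 264, 258],
    "gas":     [3.80, 3.55, 3.40, 3.52, 3.65, 3.78],
    "overall": [296, 298, 300, 301, 303, 305],
}, index=pd.period_range("2022-09", periods=6, freq="M"))

# Index every series to the first month = 100; each value is then the
# cumulative price change since September 2022
indexed = prices / prices.iloc[0] * 100
print(indexed.round(1))
```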

The two most interesting categories are airfare and gas. Airfare has recently decreased quite drastically relative to September 2022, and thus the line is far below the overall inflation trend. Gas prices moved in reverse: they dropped in the last quarter of 2022 but have steadily risen over 2023, and in the most recent month, they are tracking overall inflation.



Lay off bubbles

The Wall Street Journal says that the recent scale of layoffs in the tech industry is worse than that caused by the pandemic lockdown. Here is the chart:

Redo_wsj_tech_layoffs_sufficiency

It's the dreaded bubble chart, complete with overlapping circles. Each bubble represents the total number of employees laid off in the U.S. in a given month.

The above isn't really the chart you find in the Journal. I have removed the two data labels from the chart. Look at the highlighted months of April 2020 and November 2022. Can you guess how much larger the number of laid-off employees was in November 2022 relative to April 2020?

***

If you guessed 100% - that the larger bubble is twice the size of the smaller one - then you're much better than I am at reading bubble charts. Here is the published chart with the data labels:

Wsj tech layoffs

I like to run this exercise - removing data labels - in order to reveal whether the graphical elements on the page are sufficient to convey the underlying data. Bubbles are typically not great at this. (This is what I call the self-sufficiency test.)

***

Another problem with bubble charts is that the sizes of the bubbles are arbitrary. This allows the designer to convey different messages with the same data.

Take a look at these two bubble charts:

Redo_wsj_layoff_bubbles

The first one has huge bubbles, and lots of overlapping while the second one is roughly the same as the WSJ chart (I pulled a different dataset so the numbers may not be exactly the same).

Both charts are made from exactly the same data! In the second chart, the smallest bubbles are made very small while in the first chart, the smallest bubbles are still quite large.
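This arbitrariness is easy to demonstrate in code. In matplotlib's scatter, the s argument sets bubble area, and nothing stops the designer from multiplying it by any constant (the layoff counts below are made up):

```python
import matplotlib.pyplot as plt

# Two bubble charts from the same (made-up) monthly layoff counts
months = range(12)
layoffs = [5, 8, 60, 20, 10, 6, 4, 9, 15, 30, 55, 80]   # thousands, invented

fig, axes = plt.subplots(1, 2, figsize=(10, 3), sharey=True)
for ax, scale in zip(axes, [40, 3]):   # big bubbles vs small bubbles
    # `s` is bubble area in points^2; the scale factor is the designer's choice
    ax.scatter(months, [1] * 12, s=[v * scale for v in layoffs], alpha=0.5)
    ax.set_title(f"area scale = {scale}")
plt.show()
```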

Think twice before you make a bubble chart.



Getting simple charts right

Ian K. submitted this chart on Twitter:

Iankos_chicagocops

The chart comes from a video embedded in this report (link) about Chicago cops leaving their jobs.

Let's start with the basics. This is an example of a simple line chart illustrating a time series of five observations. The vertical axis starts at 10,000 instead of 0. With this choice, the designer focuses attention on the point-to-point changes in values, rather than their relation to zero.

Every graph has add-ons that assist cognition. On this chart, we have axis labels, gridlines and data labels. Every add-on increases reading time, so we should use them sparingly.

First consider the gridlines. In the following chart, I conduct a self-sufficiency test by removing the data labels from the chart:

Redo_wgn9chicagocops_junkcharts_selfsufficiency

You can see that the last three values present no problems. The first two, especially the first value, are hard to read - because the top gridline is missing! The next chart restores the bounding gridline, so you can see the difference that one small detail can make:

Redo_wgn9chicagocops_junkcharts_addedgridline

***

Next, let's compare the following versions of the chart. The left one contains data labels without gridlines and axis labels. The right one has the gridlines and axis labels but no data labels.

Redo_wgn9chicagocops_gridlinesdatalabels

The left chart prints the entire dataset onto the chart. The reader is, in essence, reading the raw data. That appears to be the intention of the chart designer, as the data labels are set in a large size and placed inside shiny white boxes. The placement of the boxes determines the reader's perception, as they catch more of our attention than the dots that actually represent the data.

The right chart highlights the dots and the lines between them. The gridlines are so thick and heavy that they distract rather than assist. This chart presumes that the reader isn't as interested in the precise numbers as she is in the trend.

***

As Ian pointed out, one of the biggest problems with this chart is the appearance of even time intervals when all except one of the date values are Januarys. This seemingly innocent detail destroys the chart. The line segments of the chart encode the pre-post change in the staffing numbers. For most of the line segments, the metric is year-on-year change, but the last two line segments on the right show something else: a 19-month change, followed by a 5-month change.

I did the following analysis to understand how big of a staffing problem CPD faces.

Redo_wgn9chicagocops_trendanalysis

First I restored the January 2022 time value, while shifting the Aug 2022 value to its rightful place on the time axis. Next, I added the dashed brown line, which represents a linear extension of the trend seen between January 2020 and January 2021, before the sudden dip. We don't know what the true January 2022 value is, but the projected value based on the past trend is around 12,200. By August, the projected value is around 11,923, about 300 above the actual value of 11,611. By January 2023, the projected value is almost exactly the same as the actual value.
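Here's a sketch of that projection, with the staffing values being my approximate readings off the chart:

```python
import numpy as np

# Approximate staffing levels read off the chart (assumed, not exact)
x = np.array([2020.0, 2021.0])      # January 2020, January 2021
y = np.array([13100.0, 12650.0])    # officers on staff

slope, intercept = np.polyfit(x, y, 1)  # the pre-dip linear trend
for t, label in [(2022.0, "Jan 2022"), (2022 + 7/12, "Aug 2022"), (2023.0, "Jan 2023")]:
    print(label, round(slope * t + intercept))  # ~12200, ~11938, ~11750
```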

This linear trend analysis is likely too simplistic, but it offers a baseline for thinking about what the story is. The long-term trend is still down, but the apparent dip in 2022 may not be meaningful.