The what of visualization, beyond the how

A long-time reader sent me the following chart from a Nature article, pointing out that it is rather worthless.

Nautre_scihub

The simple bar chart plots the number of downloads, organized by country, from the website Sci-Hub, which I've just learned is where one can download scientific articles for free - circumventing the exorbitant paywalls of scientific journals.

The bar chart is a good example of a Type D chart (Trifecta Checkup). There is nothing wrong with the purpose or visual design of the chart. Nevertheless, the chart paints a misleading picture. The Nature article addresses several shortcomings of the data.

The first - and perhaps most significant - problem is that many Sci-Hub users are expected to access the site via VPN servers that hide their true countries of origin. If the proportion of VPN users is high, the entire dataset is called into doubt. The data would contain both false positives (in countries with VPN servers) and false negatives (in countries with high numbers of VPN users). 

The second problem is seasonality. The dataset covered only one month. Many users are expected to be academics, and in the southern hemisphere, schools are on summer vacation in January and February. Thus, the data from those regions may convey the wrong picture.

Another problem, according to the Nature article, is that Sci-Hub has many competitors. "The figures include only downloads from original Sci-Hub websites, not any replica or ‘mirror’ site, which can have high traffic in places where the original domain is banned."

This mirror-site problem may be worse than it appears. Yes, downloads from Sci-Hub underestimate the entire market for "free" scientific articles. But these mirror sites also inflate Sci-Hub statistics. Presumably, these mirror sites obtain their inventory from Sci-Hub by setting up accounts, thus contributing lots of downloads.

***

Even if VPN and seasonality problems are resolved, the total number of downloads should be adjusted for population. The most appropriate adjustment factor is the population of scientists, but that statistic may be difficult to obtain. A useful proxy might be the number of STEM degrees by country - obtained from a UNESCO survey (link).

A metric like "number of Sci-Hub downloads per STEM degree" may sound odd and useless, but I'd argue it's better than the unadjusted total number of Sci-Hub downloads. Just don't focus on the absolute values; focus on the relative comparisons between countries. Even better, we can convert the absolute values into an index to direct attention to those comparisons.
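To make the adjustment concrete, here is a minimal Python sketch that computes downloads per STEM degree and converts the result into an index (top country = 100). All numbers are invented for illustration; the real inputs would be the Sci-Hub download counts and the UNESCO degree figures.

```python
# Hypothetical download and STEM degree counts - not the actual data
downloads = {"US": 1_000_000, "India": 3_400_000, "China": 4_400_000}
stem_degrees = {"US": 800_000, "India": 2_600_000, "China": 4_700_000}

# Adjust raw downloads by the population proxy
per_degree = {c: downloads[c] / stem_degrees[c] for c in downloads}

# Convert absolute values into an index (top country = 100) so readers
# focus on relative comparisons, not the odd-sounding raw metric
top = max(per_degree.values())
index = {c: round(100 * v / top) for c, v in per_degree.items()}
print(index)  # → {'US': 96, 'India': 100, 'China': 72}
```

The index strips away the awkward units entirely, which is the point: only the between-country comparison survives.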


Improving simple bar charts

Here's another bar chart I came across recently. The chart - apparently published by Kaggle - appeared to present challenges data scientists face in industry:

Kaggle

This chart is pretty standard, and inoffensive. But we can still make it better.

Version 1

Redo_kaggle_nodecimals

I removed the decimals from the data labels.

Version 2

Redo_kaggle_noaxislabels

Since every bar is labelled, is anyone looking at the axis labels?

Version 3

Redo_kaggle_nodatalabels

You love axis labels? Then let's drop the data labels.

Version 4

Redo_kaggle_categories

Ahh, so data scientists struggle with data problems and people issues. They don't need better tools.


There's more to the composite rating chart

In my previous post, I sketched a set of charts to illustrate composite ratings of maps platforms (e.g. Google Maps, TomTom). Here is the sketch again:

Redo_mapsplatformsratings.002

For those readers who are interested in understanding these ratings beyond the obvious, this set of charts has more to offer.

Take a look first at the two charts on the left hand side.

Redo_junkcharts_autoevolution_ratings_left

Compare the patterns of dots between the two charts. You should note that the Maps Data ratings (blue dots) are less variable than the Platform ratings (green dots).

For Maps Data, the range is from 30 to 85 (out of 110) but the majority of the dots line up around 50.

For Platform, the range is 20 to 70 (out of 90) and the dots are quite spread out within this range.

This means the Platform rating differentiates these brands more than the Maps Data rating does.

In the previous post, I already noted that the other key insight is that the Maps Data values hang quite closely to the overall average ratings while the Platform values are much less correlated.

***

Another informative observation can be found in the bottom row of charts.

The yellow dots (Developer Ecosystem) are mostly to the right of the overall ratings, meaning most of these brands were given scores on Developer Ecosystem that are higher than their average scores.

That is not the case with the green dots (Platform). For this sub-rating, most of the brands score lower than they do in the overall rating.

Redo_junkcharts_autoevolution_ratings_bottom

***

None of these insights is readily learned from the stacked column chart. A key skill in data visualization is piling on insights without overloading the chart.



Visualizing composite ratings

A twitter reader submitted the following chart from Autoevolution (link):

Google-maps-is-no-longer-the-top-app-for-navigation-and-offline-maps-179196_1

This is not a successful chart for the simple reason that readers want to look away from it. It's too busy. There is so much going on that one doesn't know where to look.

The underlying dataset is quite common in the marketing world. Through surveys, people are asked to rate some product along a number of dimensions (here, seven). Each dimension has a weight, and the weighted sum of the dimension ratings becomes a composite rating (shown here in gray).
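As a sketch of the arithmetic, a composite rating is just a weighted sum of the dimension scores. The weights and scores below are hypothetical; Autoevolution's actual figures are not reproduced here.

```python
# Hypothetical weights (in percent, summing to 100) and dimension scores
weights = {"Customers": 20, "Platform": 30, "Maps Data": 30,
           "Developer Ecosystem": 20}
scores = {"Customers": 70, "Platform": 55, "Maps Data": 60,
          "Developer Ecosystem": 80}

# Composite rating = weighted sum of the dimension scores
composite = sum(weights[d] * scores[d] for d in weights) / 100
print(composite)  # → 64.5
```

Every product gets scored this way, and the chart then stacks all seven weighted components on top of one another - which is where the business gets busy.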

Nothing in the chart stands out as particularly offensive even though the overall effect is repelling. Adding the overall rating on top of each column is not the best idea as it distorts the perception of the column heights. But with all these ingredients, the food comes out bland.

***

The key is editing. Find the stories you want to tell, and then deconstruct the chart to showcase them.

I start with a simple way to show the composite ranking, without any fuss:

Redo_junkcharts_autoevolution_top

[Since these are mockups, I have not copied all of the data, just the top 11 items.]

Then, I want to know if individual products have particular strengths or weaknesses along specific dimensions. In a ranking like this, one should expect that some component ratings correlate highly with the overall rating while other components deviate from the overall average.

An example of correlated ratings is the Customers dimension.

Redo_junkcharts_autoevolution_customer

The general pattern of the red dots clings closely to that of the gray bars. The gray bars are the overall composite ratings (re-scaled to the rating range for the Customers dimension). This dimension does not tell us more than what we know from the composite rating.
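The re-scaling mentioned above is a simple proportional conversion. Here is a sketch, with made-up rating ranges:

```python
def rescale(value, src_max, dst_max):
    """Linearly re-scale a rating from the range [0, src_max] onto [0, dst_max]."""
    return value * dst_max / src_max

# Hypothetical ranges: composite ratings out of 500, Customers dimension out of 100
composite = 420
print(rescale(composite, 500, 100))  # → 84.0
```

After this conversion, the gray bars and the red dots live on the same scale, so any gap between them is a real deviation rather than an artifact of different ranges.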

By contrast, the Developer Ecosystem dimension provides additional information.

Redo_junkcharts_autoevolution_developer

Esri, AzureMaps and Mapbox performed much better on this dimension than they did overall.

***

The following construction puts everything together in one package:

Redo_mapsplatformsratings.002


Speaking to the choir

A friend found the following chart about the "carbon cycle", and sent me an exasperated note, having given up on figuring it out. The chart came from a report, and was reprinted in Ars Technica (link).

Gcp_s09_2021_global_perturbation-800x371

The problem with the chart is that the designer is speaking to the choir. One must know a lot about the carbon cycle already to make sense of everything that's going on.

We see big and small arrows pointing up or down. Each arrow has a number attached to it, plus a range inside brackets. These numbers have no units, and it's not obvious what they are measuring.

The arrows come in a variety of colors. The colors are explained by labels, but the labels describe apparently unrelated concepts (e.g. fossil CO2 and land-use change).

Interspersed with the arrows is a singular dot. The dot also has a number attached to it. The number wears a plus sign, which signals that it's treated differently from the quantities with up arrows.

The singular dot is an outcast, ostracized from the community of dots in the bottom part of the chart. These dots have labels but no numbers. They come in different sizes but no scale is provided.

The background is divided into three parts, showing the atmosphere, the land mass, and the ocean. The placement of the arrows and dots suggests each measured quantity concerns one of these three parts. Well... except the dot labeled "surface sediments", which sits on the boundary between the land mass and the ocean.

The three-way classification is only one layer of the chart. A different classification is embedded in the color scheme. The gray, light green, and aquamarine arrows in the sky find their counterparts in the dots of the land mass, and the ocean.

What's more, the boundaries between land and sky, and between land and ocean, are also painted with those colors. The boundary segments are given different colors, so their lengths seem to encode data, but we aren't sure what.

At this point, I noticed thin arrows which appear to depict back and forth flows. There may be two types of such exchanges, one indicated by a cycle, the other by two straight arrows in opposite directions. The cycles have no numbers while each pair of straight thin arrows gets two numbers, always identical.

At the bottom of the chart is an annotation in red: "Budget imbalance = -1.0". Presumably some formula ties the numbers shown above to this -1.0 result. We still don't know the units, and it's unclear whether -1.0 is a bad number. A negative number shown in red typically signals a bad number, but how bad is it?

Finally, on the top right corner, I found a legend. It's not obvious at first because the legend symbols (arrows and dots) are shown in gray, a color not used elsewhere on the chart. It appears as if it represents another color category. The legend labels do little for me. What is an "anthropogenic flux"? What does the unit of "GtCO2" stand for? Other jargon includes "carbon cycling" and "stocks". The entire diagram is titled "carbon cycle" while the "carbon cycling" thin arrows are only a small part of the diagram.

The bottom line is I have no idea what this chart is saying to me, other than that the earth is a complex system, and that the designer has tried valiantly to pack the diagram with lots of information. If I were well read in environmental science, my experience would likely be different.


And you thought that pie chart was bad...

Vying for some of the worst charts of the year, Adobe came up with a few gems in its Digital Trends Survey. This was a tip from Nolan H. on Twitter.

There are many charts that should be featured; I'll focus on this one.

Digitaltrendssurvey2

This is one of those survey questions that allow each respondent to select multiple responses, so the percentages add up to more than 100%. The survey asks people which of these futuristic products they think are most important. There were two separate groups of respondents: consumers (lighter red) and businesses (darker red).

If, like me, you are a left-to-right, top-to-bottom reader, you'd have consumed this graphic in the following way:

Junkcharts_adobedigitaltrends_left2right

The most important item is found in the bottom right corner while the least important is placed first.

Here is a more sensible order of these objects:

Junkcharts_adobedigitaltrends_big2small

To follow this order, our eyes must do this:

Junkcharts_adobedigitaltrends_big2small_2

Now, let me say I like what they did with the top of the chart:

Junkcharts_adobedigitaltrends_subtitle

Put the legend above the chart because no one can understand it without first reading the legend.

***

Junkcharts_adobedigitaltrends_datadistortion

Data are embedded into part-circles (i.e. sectors)... but where do we find the data? The most obvious place to look is the areas of the sectors. But that's the wrong place. As I show in the explainer, the designer placed the data in the "height" - the distance from the peak point of the object to the horizontal baseline.

As a result of this choice, the areas of the sectors distort the data - they are proportional to the square of the data.
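The distortion is easy to verify with the sector-area formula. Here is a sketch, assuming (as this chart design implies) that every sector has the same fixed central angle and the data value sets the radius:

```python
import math

def sector_area(radius, angle_rad):
    # Area of a circular sector: (1/2) * r^2 * angle
    return 0.5 * radius**2 * angle_rad

# Fixed central angle for every sector - the assumption behind this design
angle = math.pi / 3
small, large = sector_area(30, angle), sector_area(60, angle)
print(large / small)  # → 4.0: doubling the data value quadruples the area
```

A reader judging areas would conclude one response is four times as popular when the data say only twice.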

One simple way to figure out that your graphical objects have obscured the data is the self-sufficiency test. Remove all data labels from the chart, and ask if you still have something understandable.

Junkcharts_adobedigitaltrends_sufficiency

With these unusual shapes, it's not easy to judge how much larger one object is than the next. That's why the data labels were included - readers are looking at the data values rather than the graphical objects. That's sad, if you are the designer.

***

One last mystery. What decides the layering of the light vs dark red sectors?

Junkcharts_adobedigitaltrends_frontback

This design always places the smaller object in front of the larger object. Recall that the light red is for consumers and dark red for businesses. The comparison between these disjoint segments is not as interesting as the comparison of the different technologies within each segment. So it's unfortunate that this aspect may get more attention than it deserves. It's also a consequence of the chart form: if the light red were always placed in front, then in some panels (such as the middle one shown above), the light red would completely block the dark red.



Re-engineering #onelesspie

Marco tweeted the following pie chart to me (tip from Danilo), which is perfect since today is Pi Day, and I have to do my #onelesspie duty. This started a few years ago with Xan Gregg.

Onelesspie2021

This chart supposedly was published in an engineering journal. I don't have a clue what question this chart is purportedly answering. Maybe the reasons for picking a cellphone?

The particular bits that make this chart hard to comprehend are these:

Junkcharts_onelesspie2021_problems

The chart also fails the ordering rule, as it spreads the largest pieces around.

It doesn't have to be so complicated.

Here is a primitive chart that doesn't even require a graphics software.

Junkcharts_redo_onelesspie2021_1color

Younger readers have not experienced the days (pre-2000) when color printing came at a premium and most graphics were grayscale. Nevertheless, restrained use of color is still recommended.

Junkcharts_redo_onelesspie2021_2colors

Happy Pi Day!


Avoid concentric circles

A twitter follower sent me this chart by way of Munich:

Msc_staggereddonut

The logo of the Munich Security Conference (MSC) is quite cute. It looks like an ear. Perhaps that inspired this, um, staggered donut chart.

I like to straighten curves out so the donut chart becomes a bar chart:

Redo_junkcharts_msc_germanallies_distortion

The blue and gray bars mimic the lengths of the arcs in the donut chart. The yellow bars show the relative size of the underlying data. You can see that three of the four arcs under-represent the size of the data.

Why is that so? It's due to the staggering. Inner circles have smaller circumferences than outer circles. The designer keeps the angles the same so the arc lengths have been artificially reduced.
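The arithmetic behind the distortion is simple: arc length equals central angle times radius. A sketch with a hypothetical 40% share and made-up radii:

```python
import math

# Arc length = central angle (radians) x radius. Keeping the angle fixed
# while moving a ring inward shrinks the arc, under-representing the data.
share = 0.40                  # hypothetical 40% share of the total
angle = share * 2 * math.pi   # this angle faithfully encodes the data
outer_arc = angle * 100       # arc drawn on the outer ring (radius 100)
inner_arc = angle * 70        # same angle on a staggered inner ring (radius 70)
print(round(inner_arc / outer_arc, 2))  # → 0.7: the inner arc is 30% too short
```

The shortfall depends only on the ratio of the radii, so every section pushed to an inner ring is shrunk by the same factor regardless of its data value.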

Junkcharts_redo_munichgermanallies_donuts

***

The donut chart is just a pie chart with a hole punched in the middle. For both pie charts and donut charts, the data are encoded in the angles at the center of the circle. Under normal circumstances, pie charts can also be read by comparing sector areas, and donut charts by comparing arc lengths, as both are proportional to the angles.

The area and arc interpretation fails when the designer alters the radii of the sections. Look at the following pair of pie charts, produced by filling the hole in the above donuts:

Junkcharts_redo_munichgermanallies_pies

The staggered pie chart distorts the data if the reader compares areas but not so if the reader compares angles at the center. The pie chart can be read both ways so long as the designer does not alter the radii.



Election visuals 4: the snake pit is the best election graphic ever

This is the final post in the series on the data visualizations deployed by FiveThirtyEight to explain their election forecasting model. The previous posts are here, here and here.

I'm saving the best for last.

538_snakepit

This snake-pit chart brings me great joy - I wish I came up with it!

This chart wins by focusing on a limited set of questions, and answering them excellently. Like many election observers, we understand that the U.S. presidential election will turn on so-called "swing states," and the candidates' strength in these swing states is variable, as the name suggests. Thus, we like to know which states are in play, and within these states, which ones are most unpredictable.

This chart lines up all the states from the reddest of red up top to the bluest of blue at the bottom. Each state is ranked by the voting margin predicted by 538's election forecasting model. The swing states are found in the middle.

Since each state confers a fixed number of electoral votes, and a candidate must amass 270 to win, there is a "tipping" state. In the diagram above, it's Pennsylvania. This pivotal state is neatly foregrounded as the one crossing the line in the middle.
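The tipping-state logic can be sketched in a few lines: sort the states by predicted margin, then accumulate electoral votes until one side crosses half the total. The states and margins below are a toy subset with invented numbers, not 538's data; the real computation uses all 50 states plus DC, where half the total is the familiar 270 threshold.

```python
# Toy subset: (state, electoral votes, hypothetical predicted Dem margin in points)
states = [
    ("Wyoming", 3, -45), ("Texas", 38, -2), ("Florida", 29, 1),
    ("Pennsylvania", 20, 4), ("Michigan", 16, 6), ("California", 55, 25),
]
total_ev = sum(ev for _, ev, _ in states)

# Sweep from the bluest state downward; the state that pushes the running
# total past half of all electoral votes is the "tipping" state.
tipping = None
running = 0
for name, ev, margin in sorted(states, key=lambda s: -s[2]):
    running += ev
    if running * 2 > total_ev:
        tipping = name
        break
print(tipping)  # → Pennsylvania (in this toy example)
```

Re-running this sweep on each day's forecast is what re-sequences the snake and moves the tipping state around.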

The lengths of the segments correspond to the number of electoral votes and so do not change with the data. What changes with the data are the sequence of the segments and the color shading.

This data visualization is a gem of visual story-telling. The form lends itself to a story.

***

The snake-pit chart succeeds by not doing too much. There are many items that the chart does not directly communicate.

The exact number of electoral votes by state is not explicit, nor is it easy to compare the lengths of the bending segments. The color scale for conveying the predicted voting margins is crude, and it's not clear what the difference is between a deep color and a light color. It's also challenging to learn the electoral vote split; the actual winning margin is not even stated.

The reality is the average reader doesn't care. I got everything I wanted from the chart, and I ain't got the time to explore every state.

There is a hover-over effect that reveals some of the additional information:

538_snakepitchart_detail

One can keep going. I have no idea how the 40,000 scenarios presented in the other graphics in this series were reduced to the forecast shown in the inset. But again, those omissions did not lessen my enjoyment. The point is: let your graphics breathe.

***

I'm thinking of potential variations even though I'm fully satisfied with this effort.

I wonder if the color shading should be reversed. The light shading encodes a smaller voting margin, which indicates a tighter race. But our attention is typically drawn first to the darker shades. If the shading scheme were reversed, the legend should describe the color as how tight the race is.

I also wonder if a third color (purple) should be introduced. Doing so would require the editors to make judgment calls on which set of states are swing states.

One strange thing about election day is the specific sequence of when TV stations (!) call the state results, which not only correlates with voting margin but also with time zones. I wonder if the time zone information can be worked into the sequencing of segments.

Let me know what you think of these ideas, or leave your own ideas, in the comments below.

***

I have already praised this graphic when it first came out in 2016. (link)

A key improvement is tilting the chart, which avoids vertical state labels.

The previous post was written around election day 2016. The snake pit further cements its status as a story-telling device. As states are called, they are taken out of the picture. So it works very well as a dynamic chart on election day.

I'm nominating this snake-pit chart as the best election graphic ever. Kudos to the FiveThirtyEight team.


Election visuals 2: informative and playful

In yesterday's post, I reviewed one section of 538's visualization of its election forecasting model, specifically the probability plot.

The visualization, technically called a pdf (probability density function), is a mainstay of statistical graphics. While every one of the 40,000 scenarios shows up on this chart, it doesn't offer a direct answer to our topline question: what is Nate's call at this point in time? Elsewhere in their post, we learn that the 538 model currently gives Biden a 75% chance of winning, three times Trump's.

538_pdf_pair

In graphical terms, the area to the right of the 270-line is three times the size of the area to the left (on the bottom chart). That's not apparent in the pdf representation. Addressing this, statisticians may convert the pdf into a cdf (cumulative distribution function), which depicts the cumulative area as we sweep from left to right along the horizontal axis.

The cdf visualization rarely leaves the pages of a scientific journal because it's not easy for a novice to understand. Not least because the relevant probability is 1 minus the cumulative probability. The cdf for the bottom chart will show 25% at the 270-line while the chance of Biden winning is 1 - 25% = 75%.

The cdf presentation is also wasteful for the election scenario. No one cares about any threshold other than the 270 votes needed to win, but the standard cdf shows every possible threshold.
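To see the relationship between the cdf and the win probability in code, here is a sketch using simulated electoral-vote draws from a normal distribution. This is a stand-in: 538's actual 40,000 scenarios come from their model, not from a normal, and the parameters below are invented.

```python
import random

# Simulate "electoral vote" scenarios for one candidate. The normal draw
# with mean 297 and sd 40 is a made-up stand-in for 538's simulations.
random.seed(1)
scenarios = [random.gauss(297, 40) for _ in range(40_000)]

# The cdf evaluated at the 270-line is the share of scenarios below 270;
# the win probability is 1 minus that cumulative probability.
cdf_at_270 = sum(ev < 270 for ev in scenarios) / len(scenarios)
p_win = 1 - cdf_at_270
print(round(p_win, 2))  # comes out near 0.75 with these made-up parameters
```

This is exactly the "1 minus" step that trips up novice readers of a cdf chart, and it's the only threshold anyone cares about.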

The second graphical concept in the 538 post (link) is an attempt to solve this problem.

538_dotplot

If you drop all the dots to an imaginary horizontal baseline, the above dotplot looks like this:

Redo_junkcharts_538electionforecast_dotplot_1

There is a recent trend toward centering dots to produce symmetry, which actually makes it harder to perceive the differences in the height of the band.

The secret sauce is to put down 100 dots, with a 75-25 blue-red split that conveys the 75% chance of a Biden win. Imposing the pdf line from the other visualization, I find that the density of dots roughly mimics the probability of outcomes.

Redo_junkcharts_538electionforecast_dotplot_2

It's easier to estimate the blue vs red areas using those dots than the lines.

The dots are toys, too. Clicking on each dot reveals a map showing one of the 40,000 scenarios: which candidate wins which state. For example, the most extreme example of a Trump win is:

538_dotplot_redextreme

Here is a scenario of a razor-tight election won by Trump:

538_dotplot_redmiddle

This presentation has a weakness as well. It gives the impression that each of the dots is equally important because they are the same size. In reality, the importance of each dot is proportional to the height of the band. Since the band is generally wider near the middle, the dots near the middle are more likely scenarios than the dots shown on the two edges.

On balance, I like this visualization that is both informative and playful.

As before, what strikes me about the simulation result is the flatness of the probability surface. This feature is obscured when we summarize the result as 75% chance of a Biden victory.