There's more to the composite rating chart

In my previous post, I sketched a set of charts to illustrate composite ratings of maps platforms (e.g. Google Maps, TomTom). Here is the sketch again:

Redo_mapsplatformsratings.002

For those readers who are interested in understanding these ratings beyond the obvious, this set of charts has more to offer.

Take a look first at the two charts on the left-hand side.

Redo_junkcharts_autoevolution_ratings_left

Compare the patterns of dots between the two charts. You should note that the Maps Data ratings (blue dots) are less variable than the Platform ratings (green dots).

For Maps Data, the range is from 30 to 85 (out of 110), but the majority of the dots line up around 50.

For Platform, the range is from 20 to 70 (out of 90), and the dots are quite spread out within this range.

This means the Platform rating differentiates these brands more than the Maps Data rating does.

As I noted in the previous post, the other key insight is that the Maps Data values hang quite closely to the overall average ratings while the Platform values are much less correlated with them.
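To make these observations concrete, here is a minimal sketch of how one might check both the spread and the correlation, assuming the ratings sit in a pandas DataFrame with one row per brand (all numbers below are invented):

```python
import pandas as pd

# Invented ratings, one row per brand; the real values come from the autoevolution table
ratings = pd.DataFrame({
    "brand":     ["Google", "TomTom", "Esri", "Mapbox", "HERE"],
    "maps_data": [85, 55, 50, 48, 52],
    "platform":  [70, 45, 25, 55, 35],
    "overall":   [340, 250, 205, 230, 215],
})

for dim in ["maps_data", "platform"]:
    spread = ratings[dim].std()                   # how differentiated the brands are
    corr = ratings[dim].corr(ratings["overall"])  # how closely the dimension tracks the composite
    print(f"{dim}: std = {spread:.1f}, corr with overall = {corr:.2f}")
```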

***

Another informative observation can be found in the bottom row of charts.

The yellow dots (Developer Ecosystem) are mostly to the right of the overall ratings, meaning most of these brands were given scores on Developer Ecosystem that are higher than their average scores.

That is not the case with the green dots (Platform). For this sub-rating, most of the brands score lower than they do in the overall rating.

Redo_junkcharts_autoevolution_ratings_bottom

***

None of these insights is readily learned from the stacked column chart. A key skill in data visualization is piling on insights without overloading the chart.


Visualizing composite ratings

A Twitter reader submitted the following chart from Autoevolution (link):

Google-maps-is-no-longer-the-top-app-for-navigation-and-offline-maps-179196_1

This is not a successful chart for the simple reason that readers want to look away from it. It's too busy. There is so much going on that one doesn't know where to look.

The underlying dataset is quite common in the marketing world. Through surveys, people are asked to rate some product along a number of dimensions (here, seven). Each dimension has a weight, and the weighted sum of the dimension scores becomes a composite rating (shown here in gray).
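For concreteness, the arithmetic behind such a composite is just a weighted sum. Here is a minimal sketch; the dimension names, weights, and scores are all invented:

```python
# Invented dimensions and weights -- the actual survey uses seven dimensions
weights = {"maps_data": 0.30, "platform": 0.25, "developer_ecosystem": 0.20, "customers": 0.25}
scores  = {"maps_data": 72, "platform": 55, "developer_ecosystem": 80, "customers": 60}

composite = sum(weights[d] * scores[d] for d in weights)
print(composite)  # 0.30*72 + 0.25*55 + 0.20*80 + 0.25*60 = 66.35
```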

Nothing in the chart stands out as particularly offensive even though the overall effect is repelling. Adding the overall rating on top of each column is not the best idea as it distorts the perception of the column heights. But with all these ingredients, the food comes out bland.

***

The key is editing. Find the stories you want to tell, and then deconstruct the chart to showcase them.

I start with a simple way to show the composite rating, without any fuss:

Redo_junkcharts_autoevolution_top

[Since these are mockups, I have not copied all of the data, just the top 11 items.]

Then, I want to know if individual products have particular strengths or weaknesses along specific dimensions. In a composite rating like this, one should expect some component ratings to correlate highly with the overall rating while other components deviate from the overall average.

An example of correlated ratings is the Customers dimension.

Redo_junkcharts_autoevolution_customer

The general pattern of the red dots clings closely to that of the gray bars. The gray bars are the overall composite ratings (re-scaled to the rating range for the Customers dimension). This dimension does not tell us more than what we know from the composite rating.
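That re-scaling is a simple linear map from one range to another. A small sketch, with made-up ranges:

```python
def rescale(x, old_min, old_max, new_min, new_max):
    """Linearly map x from [old_min, old_max] onto [new_min, new_max]."""
    return new_min + (x - old_min) * (new_max - new_min) / (old_max - old_min)

# e.g. map an overall rating on a (made-up) 0-500 scale onto a 0-60 Customers scale
print(rescale(420, 0, 500, 0, 60))  # 50.4
```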

By contrast, the Developer Ecosystem dimension provides additional information.

Redo_junkcharts_autoevolution_developer

Esri, AzureMaps and Mapbox performed much better on this dimension than they did on average.

***

The following construction puts everything together in one package:

Redo_mapsplatformsratings.002


Best chart I have seen this year

Marvelling at this chart:

[Animated population pyramid]

***

The credit ultimately goes to a Reddit user (account deleted). I first saw it in this nice piece of data journalism by my friends at System 2 (link). They linked to Visual Capitalist (link).

There are so many things on this one chart that make me smile.

The animation. The message of the story is the aging population. The average age is moving up. This uptrend is clear from the chart, as the bulge of the population pyramid migrates upward.

The trend happens to be slow, and that gives the movement a mesmerizing, soothing effect.

Other items on the chart are synced to the time evolution: the year label at the top, but also the year labels on the right side of the chart, plus the total population counts at the bottom.

OMG, it even gives me average age, and life expectancy, and how those statistics are moving up as well.

Even better, the designer adds useful context to the data: look at the names of the generations paired with the birth years.

This chart is also an example of dual axes that work. Age, birth year and current year are connected to each other, and given two of the three, the third is fixed. So even though there are two vertical axes, there is only one scale.

The only thing I'm not entirely convinced about is placing the scroll bar at the very top. It's a redundant piece that belongs in a less prominent part of the chart.


Think twice before you spiral

After Nathan at FlowingData sang the praises of the following chart, a debate ensued on Twitter as others disliked it.

Nyt_spiral_covidcases

The chart was printed in an opinion column in the New York Times (link).

I have found few uses for spiral charts, and this example has not changed my mind.

The canonical time-series chart is like this:

Junkcharts_redo_nyt_covidcasesspiral_1


***

The area chart takes no effort to understand. We can see when the peaks occurred. We notice that the current surge is already double the last peak seen a year ago.

It's instructive to trace how one gets from the simple area chart to the spiral chart.

Junkcharts_redo_nyt_covidcasesspiral_2

Step 1 is to center the area on the zero line, instead of using zero as the baseline. While this technique frequently makes for a more pleasant visual (because of our preference for symmetry), it actually makes it harder to see the trend over time. Effectively, any change is split in half, which is why the envelope of the area is less sharp.

Junkcharts_redo_nyt_covidcasesspiral_3

In Step 2, I massively compress the vertical scale. That's because when you plot a spiral, you are forced to fit each cycle of data into a much shorter range. Such compression causes the year-on-year doubling of cases to appear less dramatic. (Actually, the aspect ratio is devastated: while the vertical scale is hugely compressed, the horizontal scale is dramatically stretched out by the curled-up design.)

Junkcharts_redo_nyt_covidcasesspiral_4

Step 3 may elude your attention. If you simply curl up the compressed, centered area chart, you don't get the spiral chart. The key is to ask about the radius of the spiral. As best I can tell, the radius has no meaning; it is gradually increased so that each year of data has its own "orbit". What would the change in radius translate to on our non-circular chart? It should mean that the center of the area is gradually lifted away from the zero line. On the right chart, I mimic this effect. (I only measured the change in radius every three months, so the change is more angular than in the spiral chart.) The problem I have with this step is that it serves no purpose while complicating cognition.

In Step 4, we simply curl up the object, aligning the months of the year across cycles.

Junkcharts_redo_nyt_covidcasesspiral_5

This is the point when I realized I had missed a Step 2B. I carefully aligned the scales of both charts so that the 150K cases shown in the legend on the right have the same vertical representation as on the left. This exposes a severe horizontal rescaling. The length of the horizontal axis on the left chart is many times shorter than the circumference of the spiral! That's why I said earlier that one of the biggest features of this spiral chart is that it imposes a dubious aspect ratio: extremely wide and extremely short.
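For readers who want to see the mechanics end to end, below is a minimal sketch of the four steps, using synthetic data and matplotlib's polar projection as a stand-in for the spiral:

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic weekly case counts over two years -- invented numbers, for illustration only
weeks = np.arange(104)
cases = 60_000 * (1 + np.sin(weeks / 8)) + 1_500 * weeks

theta = 2 * np.pi * weeks / 52     # Step 4: one full turn per year, so months align by angle
radius = 1 + weeks / 52            # Step 3: radius drifts outward, giving each year its own orbit
half = cases / cases.max() * 0.35  # Step 2: compress the counts into a narrow band

ax = plt.subplot(projection="polar")
# Step 1: center the band on the spiral's centerline, half above and half below
ax.fill_between(theta, radius - half, radius + half, alpha=0.6)
ax.set_yticks([])
plt.show()
```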

As usual, think twice before you spiral.



Visual design is hard, brought to you by NYC subway

This poster showed up in a NY subway train recently.

Rootin-sm

Visual design is hard!

What is the message? The intention is, of course, to say Rootine is better than others. (That's the Q corner, if you're following the Trifecta Checkup.)

What is the visual telling us (V corner)? It says Rootine is yellow while Others are purple. What do these colors mean? There is no legend to help decipher them. And yellow-purple doesn't have a canonical interpretation (unlike, say, red-green). In theory, purple can be better than yellow.

The other mystery is the black dot on the fifth item. (This is the NYC subway, so the poster could have been vandalized.) It could mean that "diet + lifestyle analyzed" is a unique feature of Rootine, not available on any other platform. That implies purple means available but not as effective, which significantly lessens the impact of the chart.

***

Finally, let's imagine the data that may exist to support this chart.

The aggregation of all competitors into "Others" poses a major challenge. If yellow means yes, and purple means no, we'd expect few if any purple dots, because across all competitors, there is a good chance that at least one of them has any particular feature.
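A tiny sketch shows why: treating each feature as a yes/no per competitor, the "Others" column is a logical OR across competitors, and few cells survive as "no" (the feature matrix below is invented):

```python
import pandas as pd

# Invented feature matrix: True = feature available
features = pd.DataFrame({
    "competitor_A": [True, False, False],
    "competitor_B": [False, True, False],
    "competitor_C": [True, True, False],
}, index=["feature_1", "feature_2", "feature_3"])

others = features.any(axis=1)  # "Others" has a feature if ANY competitor does
print(others)                  # only feature_3 remains False
```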

Next, I'm dubious about the claim of "precision dosed, unique to you". I'm imagining they are selling some kind of medicine or health food, which can be "dosed". Predictive modelers like to market their models as "personalized," unique to each person, but such a thing is impractical. Before you start using their products, they have no data on you, or your response to those products. How could the recommendation be "precision dosed, unique to you"?

Even if you've used the product for a while, it will be tough to achieve a good level of optimality with so little data. In fact, your past data are used to generate actions intended to improve your health, that is to say, to cause the future data to diverge from the past data. So how do you know that any change you observe next period is caused by the actions you took? The pre-post difference is affected both by temporal shifts and by the actions you've taken. If the next period's metric improves, you may want to believe that the actions worked. If the next period's metric declines, are you willing to conclude that the actions you took backfired?

"Formulas improve with you". This makes me more worried than relieved.

***

Problems like these can be solved by showing our work to others. Sometimes, we're so immersed in our own world that we don't see we have left out key information.



Start at zero, or start at wherever

Andrew's post about start-at-zero helps me refine my own thinking on this evergreen topic.

The specific example he gave is this one:

Andrewgelman_invitezeroin

The dataset is a numeric variable (y) with values over time (x). The minimum value is around 3, and the values range from there to just above 20. His advice is "If zero is in the neighborhood, invite it in". (Link)

The rule, as usual, sounds simpler than it really is. In the discussion, Andrew highlights several considerations.

Is zero a meaningful reference value? In his example, we assume it is and so we invite zero in. But, as Andrew also says, if zero is meaningless, then recall the invitation. So context must be accounted for.

In Chapter 1 of Numbersense (link), I looked at some SAT score data of applicants to competitive colleges. Is zero a meaningful reference value for SAT scores? Someone might argue yes, since it is the theoretical minimum score that anyone could get from the test. Any statistician will likely say no, since a competitive college will never have seen an applicant submit a score of zero, or anywhere close to zero. Thus, starting such a chart at zero inserts a lot of whitespace and draws attention to a useless insight: how far someone's score sits above the theoretical worst performance.

***

What about the left panel of Andrew's chart makes us uncomfortable? I ask myself this question. My answer is that the horizontal axis highlights an arbitrary value that distracts from the key patterns of the data.

As shown below, the arbitrary value is ~2.5. This is utterly meaningless.

Redo_andrewgelman_invitezeroin

What if 0 is also a meaningless value for this dataset? I'd recommend "bench the axis". Like this:

Redo_andrewgelman_benchtheaxis

An axis is a tool to help readers understand a chart. If it isn't serving a function, an axis doesn't need to be there. When I choose a line chart for time-series data, I'm drawing attention to temporal change in the numeric values, or the range of values. I'm not saying something about the values relative to some reference number.

From this example, we also see that the horizontal axis should not be regarded as a hanger for time labels. Time labels can exist by themselves.



Getting to first before going to second

Happy holidays to all my readers! A special shoutout to those who've been around for over 15 years.

***

The following enhanced data table appeared in Significance magazine (August 2021) under an article titled "Winning an election, not a popularity contest" (link, paywalled).

Sig_electoralcollege-sm

It's surprisingly hard to read, and there are many reasons contributing to this.

First is the antiquated style guide of academic journals, which turns legends into text and inserts the text into a caption. This is one of the worst journalistic practices still being followed.

The table shows 50 states plus District of Columbia. The authors are interested in the extreme case in which a hypothetical U.S. presidential candidate wins the electoral college with the lowest possible popular vote margin. If you've been following U.S. presidential politics, you'd know that the electoral college effectively deflates the value of big-city votes so that the electoral vote margin can be a lot larger than the popular vote margin.

The two sub-tables show two different scenarios: Scenario A is a configuration computed by NPR in one of their reports. Scenario B is a configuration created by the authors (Leinwand et al.).

The table cells are given one of four colors: green = needed in the winning configuration; white = not needed; yellow = needed in Scenario B but not in Scenario A; gray = needed in Scenario A but not in Scenario B.

***

The second problem is that the above description of the color legend is not quite correct. Green, it turns out, is correctly explained only for Scenario A. In Scenario B, green encodes the states needed to win the electoral college in Scenario B, minus the states needed in Scenario B but not in Scenario A (shown in yellow). There is a similar problem with interpreting the white color in the table for Scenario B.

To fix this problem, start with the Q corner of the Trifecta Checkup.

_trifectacheckup_image

The designer wants to convey an interlocking pair of insights: the winning configuration of states for each of the two scenarios; and the difference between those two configurations.

The problem with the current design is that it elevates the second insight over the first. However, the second insight is a derivative of the first, so it's hard to get to the second spot without reaching the first.

The following revision addresses this problem:

Redo_sig_electoralcollege_corrected

[12/30/2021: Replaced chart and corrected the blue arrow for NJ.]



Graphing highly structured data

The following sankey diagram appeared in my LinkedIn feed the other day, and I agree with the poster that this is an excellent example.

Spotify_revenue_sankey

It's an unusual use of a flow chart to show the P&L (profit and loss) statement of a business. It makes sense since these are flows of money. The graph explains how Spotify makes money - or how little profit it claims to have earned on over 2.5 billion of revenues.

What makes this chart work so well?

The first thing to notice is how they handled negative flows (costs). They turned the negative numbers into positive numbers, and encoded the signs of the numbers as colors. This doesn't come as naturally as one might think. The raw data are financial tables with revenues shown as positive numbers and costs shown as negative numbers, perhaps in parentheses. Like this:

Profit_Loss_QlikView
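Concretely, the transformation might look like the following sketch. The line items and amounts are invented, not Spotify's actual figures:

```python
import pandas as pd

# Invented P&L rows: costs carry negative signs, as in a financial table
pnl = pd.DataFrame({
    "item":   ["Premium revenue", "Ad revenue", "Cost of revenue", "R&D"],
    "amount": [2_200, 300, -1_800, -400],
})

# Encode the sign as a color, and keep only the magnitudes for the flow widths
pnl["color"] = pnl["amount"].apply(lambda x: "green" if x >= 0 else "red")
pnl["flow"]  = pnl["amount"].abs()
print(pnl)
```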

Now, some readers are sure to have an issue with using the red-green color scheme. I suppose gray-red can be a substitute.

The second smart decision is to pare down the details. There are only four cost categories shown in the entire chart. The cost of revenue represents more than two-thirds of all revenues, and we know nothing about sub-categories of this cost.

The third feature is where the Spotify logo is placed. This directs our attention to the middle of the diagram. This is important because typically on a sankey diagram you read from left to right. Here, the starting point is really the column labeled "total Spotify revenue". The first column just splits the total revenue between subscription revenue and advertising revenue.

Putting the labels of the last column inside the flows improves readability as well.

On the whole, a job well done.

***

Sankey diagrams have limitations. The charts need to be simple enough to work their magic.

It's difficult to add a time element to the above chart, for example. The next question a business analyst might ask is how the revenue/cost/profit structure at Spotify has changed over time.

Another question a business analyst might ask concerns the revenue/cost/profit structure of premium vs. ad-supported users. We have a third of the answer: the revenue split. Depending on relative usage and content preferences, the mix of royalties is likely not to replicate the revenue split.

Yet another business analyst might be interested in comparing Spotify's business model to a competitor. It's also not simple to handle this on a sankey diagram.

***

I searched for alternative charts, and when you look at what's out there, you appreciate the sankey version more.

Here is a waterfall chart, which is quite popular:

Profit_loss_waterfall

Here is a stacked column chart, rooted at zero:

Profit_loss_hangingcolumns

Of course, someone has to make a pie chart - in this case, two pie charts:

Profit_loss_piechart



How does the U.K. vote in the U.N.?

Through my Twitter feed, I found my way to this chart, made by jamie_bio.

Jamie_bio_un_votes25032021

This is produced using R code even though it looks like a slide.

The underlying dataset concerns votes at the United Nations on various topics. Someone has already classified these topics. Jamie looked at voting blocs, specifically, countries whose votes agree most often or least often with the U.K.
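As a sketch of how such an alignment measure might be computed from long-format vote records (the records below are invented):

```python
import pandas as pd

# Invented vote records: one row per (country, resolution)
votes = pd.DataFrame({
    "country":    ["UK", "UK", "US", "US", "FR", "FR"],
    "resolution": [1, 2, 1, 2, 1, 2],
    "vote":       ["yes", "yes", "yes", "no", "yes", "yes"],
})

uk = votes[votes.country == "UK"].set_index("resolution")["vote"]
others = votes[votes.country != "UK"].copy()
others["agree"] = others.apply(lambda r: r["vote"] == uk.loc[r["resolution"]], axis=1)
alignment = others.groupby("country")["agree"].mean()  # share of votes matching the U.K.
print(alignment)  # FR 1.0, US 0.5
```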

If you look at his Github, this is one in a series of works he produced to hone his dataviz skills. Ultimately, I think this effort can benefit from some re-thinking. However, I also appreciate the work he has put into this.

Let's start with the things I enjoyed.

Given the dataset, I imagine the first visual one might come up with is a heatmap that shows countries in rows and topics in columns. That would work OK, as any standard chart form would, but it would be a data dump that doesn't tell a story. There are almost 200 countries in the entire dataset. The countries can be sorted in only one way, so if the rows are ordered by All Votes, they are out of order for every other column.

What Jamie attempts here is story-telling. The design leads the reader through a narrative. We start by reading the how-to-read-this box on the top left. This tells us that he's using a lunar eclipse metaphor. A full circle in blue indicates 0% agreement while a full circle in white indicates 100% agreement. The five circles signal that he's binning the agreement percentages into five discrete buckets, which helps simplify our understanding of the data.
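The binning itself is a one-liner. A sketch, with invented agreement rates:

```python
import pandas as pd

agreement = pd.Series([0.02, 0.37, 0.55, 0.75, 0.98])  # invented agreement rates with the U.K.
# Five equal-width buckets, mirroring the five lunar-eclipse phases
phases = pd.cut(agreement, bins=[0, 0.2, 0.4, 0.6, 0.8, 1.0],
                labels=["new", "crescent", "half", "gibbous", "full"],
                include_lowest=True)
print(phases.tolist())  # ['new', 'crescent', 'half', 'gibbous', 'full']
```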

Then, our eyes go to the circle of circles, labelled "All votes". This is roughly split in half, with the left side showing mostly blue and the right showing mostly white. That's because he's extracting the top 5 and bottom 5 countries, measured by their vote alignment with the U.K. The country names are clearly labelled.

Next, we see the votes broken up by topics. I'm assuming not all topics are covered but six key topics are highlighted on the right half of the page.

What I appreciate about this effort is the thought process behind how to deliver a message to the audience. Selecting a specific subset that addresses a specific question. Thinning the materials in a way that doesn't throw the kitchen sink at the reader. Concocting the circular layout that presents a pleasing way of consuming the data.

***

Now, let me talk about the things that need more work.

I'm not convinced that he got his message across. What is the visual telling us? Half of the circle is aligned with the U.K. while the other half isn't, so the U.K. sits on the fence on every issue? But this isn't the message. It's a bit of a mirage created by the designer picking out the top 5 and bottom 5 countries. The top 5 are surely going to be voting almost 100% with the U.K., while the bottom 5 are surely going to be disagreeing with the U.K. a lot.

I did a quick sketch to understand the whole distribution:

Redo_junkcharts_ukvotes_overview_2

This is not intended as a show-and-tell graphic, just a useful way of exploring the dataset. You can see that Arms Race/Disarmament and Economic Development are "average" issues that have the same form as the "All issues" line. A small number of countries are extremely aligned with the U.K., then about 50 countries are aligned over 50% of the time, and the other 150 countries are aligned between 30 and 50% of the time. On human rights, there is less alignment. On Palestine, there is more alignment.

What the above chart shows is that the top 5 and bottom 5 countries both represent thin slivers of this distribution, which is why there is little differentiation within each circular diagram. The two subgroups are very far apart, but within each subgroup, there is almost no variation.

Another issue is the lunar eclipse metaphor. It's hard to wrap my head around a full white circle indicating 100% agreement while a full blue circle shows 0% agreement.

In the diagrams for individual topics, the two-letter acronyms for countries are used instead of the country names. A decoder needs to be provided, or just print the full names.



To explain or to eliminate, that is the question

Today, I take a look at another project from Ray Vella's class at NYU.

Rich Get Richer Assigment 2 top

(The above image is a honeypot for "smart" algorithms that don't know how to handle image dimensions which don't fit their shadow "requirement". Human beings should proceed to the full image below.)

As explained in this post, the students visualized data about regional average incomes in a selection of countries. It turns out that remarkable differences persist in regional income disparity between countries, almost all of which are more advanced economies.

Rich Get Richer Assigment 2 Danielle Curran_1

The graphic is by Danielle Curran.

I noticed two smart decisions.

First, she came up with a different main metric for gauging regional disparity, landing on a metric that is simple to grasp.

Based on hints given on the chart, I surmised that Danielle computed the change in per-capita income in the richest and poorest regions separately for each country between 2000 and 2015. These regional income growth values are expressed in currency, not as indices. Then, she computed the ratio of these growth values for each country. The end result is a simple metric for each country that describes how much faster income has been growing in the richest region relative to the poorest region.
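A sketch of that computation, with invented numbers:

```python
# Ratio of income growth in the richest vs. poorest region of one country.
# All figures are invented; the real data come from the class assignment.
rich_2000, rich_2015 = 42_000, 58_000  # per-capita income, richest region
poor_2000, poor_2015 = 18_000, 21_000  # per-capita income, poorest region

rich_growth = rich_2015 - rich_2000    # growth in currency, not an index
poor_growth = poor_2015 - poor_2000

disparity_ratio = rich_growth / poor_growth  # > 1 means the richest region pulled further ahead
print(disparity_ratio)                       # 16000 / 3000 = 5.33...
```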

One of the challenges of this dataset is the complex indexing scheme (discussed here). Carlos' solution keeps the indices but uses design to facilitate comparisons. Danielle avoids the indices altogether.

The reader is relieved of the need to make comparisons, and so can focus on differences in magnitude. We see clearly that regional disparity is by far the highest in the U.K.

***

The second smart decision Danielle made is organizing the countries into clusters. She took advantage of the horizontal axis which does not encode any data. The branching structure places different clusters of countries along the axis, making it simple to navigate. The locations of these clusters are cleverly aligned to the map below.

***

Danielle's effort is stronger on communications while Carlos' effort provides more information. The key is to understand who your readers are. What proportion of your readers would want to know the values for each country, each region and each year?

***

A couple of suggestions

a) The reference line should be set at 1, not 0, for a ratio scale. The value of 1 occurs when per-capita income grows by the same amount in the richest region as in the poorest region.

b) The vertical scale should be fixed.