What doesn't help readers (on the chart) and what does help (off the chart)
Jun 16, 2016
Via Twitter, Bart S (@BartSchuijt) sent me to this TechCrunch article, which contains several uninspiring charts.
The most disturbing one is this:
There is a classic Tufte lesson here: only five numbers, and yet the chart is confusing. And yes, they reversed the axis: lower on the chart means higher "app abandonment," and higher means lower. The co-existence of data labels, gridlines, and axis labels increases processing time without adding information.
A simple column chart shows there is almost nothing going on:
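A plain column chart along these lines takes only a few lines of matplotlib. The years and rates below are placeholders, not the article's actual numbers; fixing the y-axis at 0–100% is one way to make the flatness of the series obvious:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# Hypothetical abandonment rates for illustration only.
years = ["2012", "2013", "2014", "2015", "2016"]
rates = [0.22, 0.20, 0.25, 0.23, 0.24]

fig, ax = plt.subplots()
ax.bar(years, rates)
ax.set_ylim(0, 1)  # full 0-100% scale, no reversed axis
ax.set_ylabel("share abandoning after one use")
fig.savefig("abandonment_columns.png")
```

With the baseline at zero and the axis running the usual way, the reader needs no data labels or gridlines to see that little is changing year to year.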
I suspect that if they were to break the data down by months and weeks, it would be clear that the fluctuations are meaningless.
***
The graphical scaffolding, or what Tufte calls the non-data ink, should provide context to help readers understand the data. This is not the case here.
Worse, the context needed to interpret "app abandonment" is sorely missing.
You might argue with me. Isn't it clear from the chart title? And doesn't the subtitle provide the details of how app abandonment is measured? It says "% of users who abandon an app after one use".
That definition is an emperor with no clothes.
The five numbers cannot really be percentages of users, because every user has many apps. One may abandon app A after a single use while also having used app B four times, app C 12 times, and so on.
It seems possible that they are counting user-app pairs. That measure is much harder to interpret, because every user appears in the data once for each app: the more apps a user has, the more heavily that user is represented.
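The gap between counting users and counting user-app pairs can be made concrete with a small sketch. The usage log below is invented purely for illustration:

```python
from collections import defaultdict

# Hypothetical usage log: (user, app, number of sessions).
# Names and counts are made up for illustration only.
usage = [
    ("alice", "app_a", 1),   # abandoned after one use
    ("alice", "app_b", 4),
    ("alice", "app_c", 12),
    ("bob",   "app_a", 1),   # bob's only app, abandoned
]

# Metric 1: share of user-app pairs with exactly one use.
pairs_abandoned = sum(1 for _, _, n in usage if n == 1)
pair_rate = pairs_abandoned / len(usage)

# Metric 2: share of users who abandoned at least one app.
by_user = defaultdict(list)
for user, app, n in usage:
    by_user[user].append(n)
user_rate = sum(
    1 for counts in by_user.values() if any(n == 1 for n in counts)
) / len(by_user)

print(f"user-app pair rate: {pair_rate:.0%}")   # 50%
print(f"user rate:          {user_rate:.0%}")   # 100%
```

The two metrics diverge because the pair-based rate weights alice three times (once per app) while the user-based rate counts her once, which is exactly why the chart's "% of users" label is ambiguous.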
And be careful: we are not counting all apps either. For the definition to make sense, we should count only apps downloaded in the given year. This means that lurking behind the time series is the proportion of "new" apps and how it evolved over time. It is also murky what "new" means. Many app developers keep pushing users to download upgraded versions of their apps, and I suspect these upgrades are sometimes counted when developers publish download statistics. Obviously, someone who upgrades an app is likely to be an active user, so whether upgrades or later versions of the same app are counted is yet another question.
Finally, what constitutes a "use"?
***
From a Trifecta perspective, this is a Type DV chart. There are obvious visual flaws but the real issue is the missing context related to how the metric is defined.
What they seem to have missed is that apps are relatively low-priced items, so people buy the cheaper ones on a whim; it would be useful to look at the figures at various price points. Especially since many apps are free with in-app purchases, it is quite reasonable that they are used only once.
Posted by: Ken | Jun 17, 2016 at 06:34 AM