Sometimes I wonder if I should just become a chart doctor. Andrew recently wrote that journals should have graphical editors. Businesses also need those, judging from this submission through Twitter (@francesdonald). Link is here.
You don't know whether to laugh or cry at this pie chart:
The author of the article complains that tall buildings around the world are cheats: vanity height is defined as the height above which the floors are unoccupied. The vanity proportions aren't that different between countries, ranging from 13% to 19% of the total heights. Why are they added together to make a whole?
The following boxplot illustrates both the average and the variation in vanity heights by region, and tells a more interesting story:
Recall that in a boxplot, the gray box contains the middle 50% of the data and the white line inside the box indicates the median value. The UAE tends to inflate its heights the most, while the other three regions are not much different from one another.
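For readers who want to try this at home, a boxplot like this takes only a few lines of Python with matplotlib. The regional values below are made up for illustration; only the structure of the chart matters here.

```python
import matplotlib.pyplot as plt

# Hypothetical vanity heights (% of total height) by region;
# these numbers are illustrative, not the article's data.
data = {
    "UAE": [19, 31, 24, 39, 16, 28],
    "China": [13, 16, 21, 12, 18, 14],
    "US": [14, 10, 19, 15, 13, 17],
    "Rest of World": [12, 17, 15, 20, 11, 16],
}

fig, ax = plt.subplots()
ax.boxplot(list(data.values()))
ax.set_xticklabels(data.keys())
ax.set_ylabel("Vanity height (% of total height)")
# The box spans the middle 50% of the data; the line inside is the median.
plt.show()
```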
The other graphic included in the same article is only marginally better, despite a much more attractive exterior:
This chart misrepresents the actual heights of the buildings. At first glance, I thought there must be a physical limit to the number of occupied floors, since the grayed-out sections are all of equal height. If the decision has been made to focus on vanity height, then just don't show the rest of the buildings.
Also, it's okay to assume a minimum of intelligence on the part of readers - I mean, is there a need to repeat the "non-occupiable height" label 10 times? Similarly, the use of 10 sets of double asterisks is rather extravagant.
I will be the luncheon speaker at INFORMS NYC this Wednesday. The talk will provide some context for my new book Numbersense (link), and discuss a few examples from the book. You can pre-register here.
INFORMS is the professional society for Operations Research and Management Science people. For some years, I have attended these regularly and learned a lot from other industry speakers.
If you decide at the last minute, you can pay the $5 extra fee on the day of the talk. Or register now.
Robert Kosara takes us back to the 1940s, and an incredible "infographics" project by the Lawrence Livermore Laboratory. (link) Here is one of the designs:
When did information graphics turn into ‘infographics,’ and when did we lose the meticulous, well-researched, information-rich graphics for the sad waste of pixels that calls itself infographic today?
I think one of the key missing pieces is analytics. Most of today's infographics seemingly are a result of treating data as flowers to be arranged. There is little analytical thinking behind what the data mean. Incidentally, that is why the new NYU certificate is not called Certificate in Data Visualization--we wanted to emphasize the importance of analytics next to datavis.
Also, we have an elective designed for people interested in content marketing. The Livermore Lab project would fall into this category. So do annual reports for corporations, fundraising prospectuses for non-profit organizations, magazines whether commercial or membership, content for web marketing, etc.
*** The other problem is a kind of perversion of measurement. Because so much of this stuff is online, so many pieces are judged by click rates or bounce rates or time on page. The problem with click rates is well known. Headlines of so many online articles are written solely to create clicks. It's gotten to the point that we feel duped by the headlines.
The design may have originated in print, but in all likelihood, it is also uploaded to the Web; the interaction of readers with the online version is much easier to track than the effect of print, leading to the lazy generalization that the Web response would be "similar to" the print response. This is one of my pet peeves: bad data is worse than no data.
On Twitter, Andy C. (@AnkoNako) asked me to look at this pretty creation at NFL.com (link).
There is a reason why you don't read much about spider charts (web charts, radar charts, etc.) here. While this chart is beautifully constructed, and fun to play with, it just doesn't work as a vehicle for communication.
This example above allows us to compare four players (here, quarterbacks) on eight metrics. Each white polygon represents one player, and the orange outline represents the league average quarterback.
What are some of the questions one might have about comparing quarterbacks?
Who is the best quarterback, and who is the worst?
Who is the better passer? (ignoring other skills, like rushing ability)
Is each quarterback better or worse than the average quarterback?
How will you figure these out from the spider chart?
Not sure. The relative value of the quarterbacks is definitely not encoded in the shape of the polygon, nor in its area. To really figure this out, you'd need to look at each of the eight spokes independently, and then aggregate the comparisons in your head. Unless... you are willing to ignore seven of the eight metrics, and just look at passer rating (below right).
Focusing on passing alone still means looking at five of the eight metrics, from pass attempts to interceptions. How you combine those five metrics into one evaluation is anyone's guess.
One can tell that Joe Flacco is basically the average quarterback, as his contour is almost exactly that of the average (orange outline). Are the others better or worse than average? Hard to tell at first glance.
First, the chart invites users to place equal emphasis on each of the eight dimensions. (There is a control to remove dimensions.) But the metrics are clearly not equally important. You certainly should value passing yards more than rushing yards, for example.
Second, the chart ignores the correlation between these eight metrics. The easiest way to see this is "Passer Rating", which is a formula comprising Passing Attempts, Passing Completions, Interceptions, Touchdown Passes, and Passing Yards. Yes, all five of those components have been separately plotted. Another easy way to see the problem is that Passing Yards is highly correlated with Passing Attempts and Passing Completions.
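To make the redundancy concrete, here is the league's passer rating formula as a small Python function. The formula and its clamping constant are the standard NFL definition; the sample stat line at the bottom is invented.

```python
def passer_rating(attempts, completions, yards, touchdowns, interceptions):
    """NFL passer rating: four per-attempt components, each clamped to [0, 2.375]."""
    def clamp(x):
        return max(0.0, min(x, 2.375))
    a = clamp((completions / attempts - 0.3) * 5)     # completion rate component
    b = clamp((yards / attempts - 3) * 0.25)          # yards per attempt component
    c = clamp(touchdowns / attempts * 20)             # touchdown rate component
    d = clamp(2.375 - interceptions / attempts * 25)  # interception rate component
    return (a + b + c + d) / 6 * 100

# A hypothetical season line: five of the chart's eight spokes go in,
# and the "Passer Rating" spoke comes out -- pure redundancy.
print(passer_rating(attempts=500, completions=300, yards=3800,
                    touchdowns=22, interceptions=10))  # ~90.1
```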
Third, the chart fails to account for different types of quarterbacks. I deliberately chose these four because Joe Flacco was a starter, Tyrod Taylor was a backup who almost never played, and, at San Francisco, Alex Smith and Colin Kaepernick shared the starting duties. For Passing Yards, their numbers were 3817, 179, 1737 and 1814 respectively. Those numbers should not be directly compared. Better statistics would be rates, such as yards per minute played, yards per offensive series, or yards per play executed. The way the data is used here, all the second- and third-string quarterbacks will be below average and most of the starters will be above average.
From a design perspective, there are a few misses.
Mysteriously, the legend always has only two colors no matter how many players are being compared. The orange is labeled "Average" while the white is labeled "Leader". I have no idea why any of the players should be considered the "Leader".
The only way to know which white polygon represents which player is to hover over the polygon itself. You'll notice that in my example, several of those polygons overlap substantially, so hovering is not always an easy task.
The last issue is scale. It turns out that some of the metrics, such as interceptions, touchdown passes, and rushing yards, can be zero. Take a look at this subset of the chart where I hovered on Tyrod Taylor.
Do you see the problem? The zero point is definitely not the center of the circle. This problem exists for any circular chart, such as bubble charts.
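A back-of-the-envelope calculation shows how much this distorts comparisons; the numbers below are made up:

```python
# Two hypothetical touchdown counts: 2 and 4 (a true 2x ratio).
values = [2.0, 4.0]
for center in (0.0, 1.5):  # 0 = honest axis; 1.5 = a hypothetical offset center
    lengths = [v - center for v in values]
    print(f"center at {center}: drawn lengths {lengths}, "
          f"visual ratio {lengths[1] / lengths[0]:.1f}x (true ratio 2.0x)")
```

With the center at zero, the two spokes are drawn in the honest 2:1 ratio; move the center up to 1.5 and the same data draws at 5:1.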
Now look at Interceptions. Because the scale is reversed (lower is better), the zero point of this metric lies on the outer edge of the circle. This is a vexing issue because the radius is open-ended on the outside but closed-ended on the inside.
In the next post, I will discuss some alternative presentations of this data.
I like many aspects of this exercise. This chart displays the results of an experiment conducted by a computer games company to show that the new build ("249") renders frames faster than the older build ("248"). The messages of the chart are clear: the 249 build (blue bars) is substantially faster, with over 80% of the frames rendering in 7 milliseconds or less under 249 compared to less than 40% under 248. Less obviously, the variance of frame times is also significantly smaller.
The slight problem is that readers probably have to read the text to grasp most of the above.
In the text, the author explains how to turn time per frame into frames per second, the more common way of measuring rendering speed. The formula is 1000 divided by the time per frame in milliseconds. Wouldn't it be better if the chart plotted fps directly?
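The conversion is trivial to script. A quick sketch, plugging in the approximate readings cited in this post:

```python
def fps(ms_per_frame):
    """Convert milliseconds per frame to frames per second."""
    return 1000.0 / ms_per_frame

print(f"{fps(7.0):.0f} fps")   # ~143 fps at 7 ms per frame (build 249)
print(f"{fps(10.5):.0f} fps")  # ~95 fps at 10.5 ms per frame (build 248)
```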
When it comes to presenting distributions (or variability), the cumulative chart is more useful, but it is also harder for readers to comprehend. For example:
The beauty of this chart is that one can take any point on the vertical axis, say the 80% level, and read off the comparative values of 7 milliseconds for the blue line (249) and 10.5 ms for the red (248). That means 80% of the 249 frames were rendered in less than 7 ms, versus 10.5 ms for the 248 frames.
Alternatively, taking a point on the horizontal axis, say 5 milliseconds, one can see that only about 8% of the 248 frames rendered within that threshold, compared with 30% of the 249 frames.
The steeper the ascent of the S-curve, the more efficient is the rendering.
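Such a cumulative chart is easy to produce. Here is a minimal sketch with numpy and matplotlib; the simulated frame times are stand-ins for the real measurements, chosen only so that 249 looks faster and tighter:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# Simulated frame times in milliseconds; stand-ins for the real measurements.
build_248 = rng.gamma(shape=9.0, scale=1.0, size=2000)   # slower, more spread out
build_249 = rng.gamma(shape=16.0, scale=0.4, size=2000)  # faster, tighter

def ecdf(x):
    """Return sorted values and the cumulative proportion at each value."""
    xs = np.sort(x)
    ys = np.arange(1, len(xs) + 1) / len(xs)
    return xs, ys

for name, times in [("248", build_248), ("249", build_249)]:
    plt.plot(*ecdf(times), label=f"build {name}")
plt.xlabel("Frame time (ms)")
plt.ylabel("Proportion of frames rendered")
plt.legend()
plt.show()
```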
Note: The winners of Book Quiz Round 2 were announced on my book blog. Congratulations to the winners. You can get your own copy of Numbersense here.
A common piece of advice for anyone living in the U.S. is "read the fine print." If you receive a notice or see an ad, and there is an asterisk or some copy in almost invisible font at the bottom of the page, you better pull out your magnifying glass.
If you are a data analyst, you better have a magnifying glass in your pocket at all times. One of the recurring themes in Numbersense is that details matter... a lot. This is particularly relevant to Chapters 6 and 7 on economic data.
Last week, on the first Friday of the month, the jobs report came out. For the best reporting on the data itself, with succinct commentary but no hand-waving, I go to Calculated Risk blog.
One of the charts highlighted (in this post) is the unemployment rate by educational attainment. This is the chart that leads to horribly misleading statements saying that the solution to the unemployment crisis is more education. I ranted about this before--see here and here.
Taking this chart at face value, you'd say that the unemployment rate is lower, the more education one has. One can also say that the unemployment rate is less volatile, the more education one has.
Bill makes two succinct comments, basically letting his readers know this chart is next to worthless.
1. Although education matters for the unemployment rate, it doesn't appear to matter as far as finding new employment - and the unemployment rate is moving sideways for those with a college degree!
The issue behind this is the "cohort effect". The chart above aggregates everyone 25 years old and over. This means it treats equally people who graduated from college last year and people who got their degrees thirty years ago. Why does this matter? A jobs recession hits certain types of people harder than others, and one important determinant is work experience (another would be the industry one works in). The low unemployment rate for all college graduates masks the challenging job market for recent college graduates. The misinterpretation of this chart leads to wrongheaded policies such as making more college graduates.
2. This says nothing about the quality of jobs - as an example, a college graduate working at minimum wage would be considered "employed".
This is where the magnifying glass is critical. You should not assume that your idea of "employed" is the same as the official definition of "employed". Bill raised the issue of minimum wage. Elsewhere, other commentators noted the issue of "part-timers". Part-time employment is not distinguished from full-time employment in the official aggregate statistics.
Taking this further, isn't it plausible that unemployment "trickles down"? As the college graduates grab whatever job they can find, including the minimum-wage ones, they push the high-school graduates out of jobs.
In data, there is often no fine print to be found. In Big Data, this problem is aggravated a thousand times over. Unfortunately, magnifying blank is still blank. So having the magnifying glass is not enough.
The solution then is to create your own fine print. Spend inordinate amounts of time understanding how data is collected. Dig deeply into how data is defined.
No, this work is not sexy. (PS. If you can't stand it, you really shouldn't be in data science.)
In Chapter 6 of Numbersense, I did this work for you as it relates to jobs data. What I show there is that there is no "right" way to measure employment--it's not as clear-cut as you'd like to think. If you were to put forth your own definition of "employed" for comment, it would absolutely get criticized, just the same way you're criticizing the government's definition.
PS. Larry at Good Stats, Bad Stats pulled out his magnifying glass and wrote a series of posts about education, employment and income. He mildly disagrees with me.
Luck is not easy to nail down in a number. For the fantasy football league (FFL), I have a way of looking at luck. One aspect of luck is which team you are matched up with in any given week. There is the matter of facing a stronger or a weaker opponent. There is the different matter of whether you face a given opponent on his or her hot or cold day. Sort of like whether a hitter faces a pitcher on his good or bad day.
As noted before, each FFL owner picks nine players out of 14 every week, and those nine earn points. There are typically 200-300 valid choices of nine players. So we can measure how well any FFL owner performs by comparing the points total of the activated squad against the whole distribution of 200-300 options. This was the topic of my earlier post.
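Here is a sketch of that computation in Python. The point values are hypothetical, and I ignore position constraints, which in a real league are what cut the 2002 raw combinations down to the 200-300 valid squads mentioned above:

```python
from itertools import combinations

# Hypothetical weekly point totals for a 14-player roster.
roster = [18.2, 14.5, 12.1, 11.8, 10.4, 9.9, 9.0,
          8.2, 7.7, 6.5, 5.1, 4.8, 3.9, 2.2]
all_totals = sorted(sum(squad) for squad in combinations(roster, 9))

activated = sum(roster[2:11])  # one hypothetical choice of nine players
beaten = sum(total < activated for total in all_totals)
print(f"{len(all_totals)} possible squads; the activated squad "
      f"beats {beaten / len(all_totals):.0%} of them")
```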
Now, if I am lucky, I tend to face opponents in the weeks in which they perform poorly. The following chart shows this measure from week to week:
In Week 1, this owner was rather unlucky, in the sense that his opponent pretty much used his best possible squad. On the other hand, in Week 4, his opponent (a different team) played a weak hand, something close to the median squad (in addition, the entire histogram sits on the left side of the chart, meaning that even his opponent's best possible squad this week would have been easy to beat).
Luck can be measured over the course of the 13 weeks. If the vertical lines tend to show up on the right tails of these histograms, then this owner isn't lucky. On the other hand, if the lines show up mostly on the left half of the histograms, then this owner is lucky.
In Chapter 8 of Numbersense, I use such an analysis to figure out the role of luck. This luck factor turned out to be even more important than the owner's own skills!
Special for Junk Charts readers: here is an excerpt from Chapter 8 (link).
The second book giveaway contest is under way on the sister blog. Enter the contest here.