Involuntary head-shaking is probably not an intended consequence of data visualization

Sorting out the data, and creating the head-shake manual

Yesterday's post attracted a few good comments.

Several readers don't like the data used in the NAEP score chart. The authors labeled the metric "gain in NAEP scale scores" which I interpreted to be "gain scores," a popular way of evaluating educational outcomes. A gain score is the change in test score between (typically consecutive) years. I also interpreted the label "2000-2009" as the average of eight gain scores, in other words, the average year-on-year change in test scores during those 10 years.

After thinking about what reader mankoff wrote, which prompted me to download the raw data, I realized that the designer did not compute gain scores. "2000-2009" really means the difference between the 2009 score and the 2000 score, ignoring all values between those end points. So mankoff is correct in saying that the 2009 number was used in both "2000-2009" and "2009-2015" computations.
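The two computations can be sketched in a few lines of Python. The scores below are hypothetical, chosen only to illustrate the arithmetic; actual NAEP values differ:

```python
# Hypothetical scale scores at successive assessments (made-up values,
# not actual NAEP data).
scores = {2000: 226, 2003: 234, 2005: 237, 2007: 239, 2009: 239}
years = sorted(scores)

# What the designer computed for "2000-2009": the endpoint difference,
# which ignores every assessment between the two end points.
endpoint_diff = scores[2009] - scores[2000]

# Gain scores: the change between consecutive assessments.
gains = [scores[b] - scores[a] for a, b in zip(years, years[1:])]

# The endpoint difference telescopes into the sum of the gains,
# so it hides the path the scores took in between.
assert endpoint_diff == sum(gains)
```

Note that the endpoint difference cannot distinguish a steady climb from a spike followed by a plateau; averaging `gains` (as I had assumed was done) at least looks at every period.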

This treatment immediately raises concerns. Why is a 10-year period compared to a 7-year period?

Andrew prefers to see the raw scores ("scale scores") instead of relative values. Here is the corresponding chart:


I placed a line at 2009, just to see if there is a reason for that year to be special. (I don't think so.) The advantage of plotting raw scores is that the chart is easier to interpret. As Andrew said, less abstraction. It also soothes the nerves of those who are startled that the lines for white students appear at the bottom of the gain-score chart.

I suppose the original designer chose score differentials to highlight the message about change in scores. One can nitpick that the message isn't particularly cogent: comparing 2009 and 2015, 8th grade math and reading scores show negligible change, and yet between those end points the scores spiked and then dropped back to the 2009 level.

One way to mitigate the confusion that mankoff encountered in interpreting my gain-score graphic is to use "informative" labels, rather than "uninformative" labels.


Instead of saying the vertical axis plots "gain scores" or "change in scores," directly label one end as "no progress" and the other end as "more progress."

Everything on this chart is progress over time, and the stalling of progress is their message. This chart requires more upfront learning, after which the message jumps out. The chart of raw scores shown above has almost no perceptual overhead, but the message has to be teased out. I prefer the chart of raw scores in this case.


Let me now address another objection, which pops up every time I convert a bar chart to a line chart (a type of Bumps chart, which Tufte followers call a slope graph). The objection is that the line chart causes readers to see a trend where there isn't one.

So let me make the case one more time.

Start with the original column chart. If you want to know that Hispanic students have seen progress in their 4th grade math scores grind to a halt, you have to shake your head involuntarily in the following manner:


(Notice how the legend interferes with your line of sight.)

By the time you finish interpreting this graphic, you would have shaken your head in all of the following directions:


Now, I am a scavenger. I collect all these lines and rearrange them into four panels of charts. That becomes the chart I showed in yesterday's post. All I have done is to bring to the surface the involuntary motions readers were undertaking. I didn't invent any trends.



Thanks for the clarification on the slope chart. I have one question about this display. Shouldn't there be some white space between columns? I find it confusing that one of the vertical lines is 2009-15 when approached from the left, but when leaving that line moving to the right, it is then 2000-09.

Xan Gregg

Just to expand on my Twitter comment: what looks like a straightforward result has a lot of work behind it. On one hand you addressed feedback and completely revised your previous chart. And graphically, the final chart combines several different features: trellising, smoothing, custom coloring, overlays and labeling.

Some of those are individually straightforward in JMP, but not the labels. I'm guessing you added the labels with the annotation tool or in post-production in an image editor.

Now I also notice you erased some of the repeated axis labels to make the result cleaner. I guess that's a general challenge with trellising.

Btw, don't forget to hit the Done button in Graph Builder before you take a screen capture. I can see a little of the drop zone outlines in the corners.


mankoff: You are struggling with the "level of abstraction" that Andrew was complaining about. In the slope chart, we are working at one level of abstraction higher. The data are gain scores (score differentials), so the slope is the change in score differentials: the change of the change in raw scores. Another way to think about it: the slope in that plot represents how the slope of the curve in the raw-score plot is changing (but they took only two samples of this series: the slope between 2000 and 2009, and then the slope between 2009 and 2015).
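To make the "change of the change" concrete, here is a tiny sketch with invented raw scores (not actual NAEP values):

```python
# Toy raw scores (made up) at three time points: 2000, 2009, 2015.
raw = {2000: 226, 2009: 239, 2015: 240}

# First difference: the score gain over each period.
# These are the two values the original designer plotted.
gain_00_09 = raw[2009] - raw[2000]
gain_09_15 = raw[2015] - raw[2009]

# Second difference: the slope of the slope chart,
# i.e. the change of the change in raw scores.
second_diff = gain_09_15 - gain_00_09  # negative means progress has stalled
```

With these toy numbers, a gain of 13 points followed by a gain of 1 point yields a second difference of -12: a downward-sloping line in the slope chart, even though the raw scores never fell.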

xan: yes, I used the visual elements and titles from the Graph Builder but inserted custom axis labels and line labels. To accommodate this, you'd have to allow some free text and have the space around the chart be responsive to the text. And yes, I should have screen captured after pressing Done.

One issue I encounter with these panel plots is the awkward spacing between the divider of the panel and the edge-most vertical gridline. Ideally, I like those to coincide but almost surely, that leaves no room for the axis label. So I am forced to leave an unsightly gap between the panels. That's why I custom-made those labels.


Kaiser: I understand the slope chart. I'm just continually bothered by an x-axis that cycles. The "vertical lines" I wrote about are not the sloped data lines, but the grid. You have an x-tick marked "2009-15", but it is *also* "2000-09", for "4th grade reading" data. Small multiples, or 3 whitespace segments between each data set, would remove this confusing display.

Neil Schneider

I had another issue with this chart and the perceived analysis it provides. The original author appears to have an agenda about the progress in the most recent years. (Or they are just an idiot, but I am giving them the benefit of the doubt.) The gain they showed for reading scores between 2000 and 2009 is almost entirely due to the difference between 2000 and 2003. You can see this in the "raw score" line chart you added above. 2003 is the year "No Child Left Behind" (NCLB) went into effect, and this caused a level shift in the test results. The effect differed across states based on how they did testing prior to NCLB. While the NAEP doesn't highlight this discontinuity, this NCLB evaluation discusses it in its design section.


Another alternative is to show the change from 2000 for each test/ethnicity, in which case any differences between ethnicities would be obvious.
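A baseline-indexed version of the data could be sketched like this (the scores are invented for illustration; one such series would be computed per test/ethnicity combination):

```python
# Hypothetical scale scores by year for one test/ethnicity series (made up).
scores = {2000: 226, 2003: 234, 2009: 239, 2015: 240}

# Index every year to the 2000 baseline, so the plot shows
# cumulative change from 2000 rather than raw levels.
baseline = scores[2000]
change_from_2000 = {year: s - baseline for year, s in scores.items()}
```

Plotting `change_from_2000` for each group forces every line to start at zero, so differences in how much each group has progressed since 2000 are read directly off the vertical axis.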

When I look at this data I find the similarity in shape a bit concerning. I tend to question whether educational changes are applying across all ethnicities, or whether some change in the test scaling causes all three to change by a similar amount.

The comments to this entry are closed.