A great visual of complicated schedules

Reader Joe D. tipped me off to a nice visualization project by a pair of grad students at WPI (link). It displays data about the Boston subway system (i.e., the T).

The project has many components, one of which is the visualization of the location of every train in the Boston T system on a given day. This results in a very tall chart, the top of which I clipped:

Mbta_viz_1

I recall that Tufte praised this type of chart in one of his books. It is indeed an exquisite design, attributed to Marey. It packs both the time and space dimensions into a compact display. The slope of each line is positively correlated with the velocity of the train (I say correlated, rather than proportional, because the spacing of the stations on the chart does not reflect the actual distances between them). The authors acknowledge the influence of Tufte in their credits, and I recognize a couple of his signatures (a code sketch of the Marey construction follows the list below):

  • For once, I like how they hide the names of the intermediate stations along each line while retaining the names of the key stations. Too often, modern charts banish all labels to hover-overs, a practice I dislike; here, moving the mouse horizontally across the chart reveals the names of the unnamed stations, while the key labels stay put.
  • The text annotations in the right column are crucial to generating interest in this tall, busy chart. Without those hints, readers may get lost in the tapestry of schedules. Scroll to the middle and you will find an instance of delays caused by a disabled train. Even with the hints, it takes time to comprehend what the notes are saying. This is definitely a chart that rewards patience.
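
As promised, here is a minimal sketch of the Marey construction in R (ggplot2). The trips data frame, the station distances, and the timings are all invented for illustration; unlike the project's chart, this sketch spaces stations by (made-up) route distance, so the slope of a line tracks the train's speed.

    library(ggplot2)

    # One row per (trip, station): the time each train reaches each stop.
    # All values here are hypothetical, not the project's actual data.
    trips <- data.frame(
      trip   = rep(c("T1", "T2"), each = 4),
      dist   = rep(c(0, 3, 6, 10), 2),   # made-up miles from the terminus
      minute = c(0, 10, 22, 40,          # trip T1
                 15, 26, 40, 62)         # trip T2, a slower run
    )

    # Stations on the horizontal axis, time flowing down the page.
    # With stations spaced by route distance, slope is proportional to speed.
    ggplot(trips, aes(x = dist, y = minute, group = trip)) +
      geom_line() +
      scale_y_reverse() +
      scale_x_continuous(breaks = c(0, 3, 6, 10),
                         labels = c("Alewife", "Harvard", "Park St", "Ashmont"))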

Clicking on a particular schedule highlights that train, pushing all the other lines into the background. The side panel provides a different visual of the same data, using a schematic subway map.

Mbta_viz_2

Notice that my mouse is hovering over the 6:11 am mark (the horizontal guide on the right side). This generates the snapshot of the entire T system shown on the left: the momentary location of every train in the system at 6:11 am. The circled dot is the Red Line train I clicked on earlier.

This is a master class in linking multiple charts and using interactivity wisely.

***

You may feel that the chart using the subway map is more intuitive and much easier to comprehend. It also becomes very attractive when the dots (i.e., trains) are animated, moving through the system. That is the image the project designers have blessed with the top position on their GitHub page.

However, the image above lets us see why the Marey diagram is by far the superior representation of the data.

What are some of the questions you might want to answer with this dataset? (The Q of our Trifecta Checkup)

Perhaps you'd like to figure out which trains ran behind schedule on a given day. We can define behind-schedule as slower than the average train on the same route.

It is impossible to figure this out on the subway map. The static version presents a single snapshot, while the dynamic version shows moving dots, leaving readers to estimate speeds by eye. The Marey diagram displays all of the other schedules, making it easier to spot the late trains.
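
If you have the underlying trip data, this definition is straightforward to compute. A minimal sketch in base R, where the runs table and all its values are hypothetical:

    # One row per completed trip; all values are made up for illustration.
    runs <- data.frame(
      route    = c("Red", "Red", "Red", "Orange", "Orange"),
      trip     = c("R1", "R2", "R3", "O1", "O2"),
      duration = c(41, 56, 43, 33, 35)   # end-to-end minutes
    )

    # Compare each trip with the average duration on its own route.
    runs$route_avg <- ave(runs$duration, runs$route)
    subset(runs, duration > route_avg)   # the behind-schedule trips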

Another question you might ask is how a delay in one train propagates to other trains. Again, the subway map doesn't show this at all, but the Marey diagram does, although one can nitpick that even the Marey diagram suffers from overcrowding.

***

On that last question, the project designers offer up an alternative Marey. Think of this as an indexed view: each trip is indexed to its own starting point. The following setting shows the morning rush hour compared to the rest of the day:

Mbta_viz_3

I think they could put this display to better use by showing hourly averages instead of every single schedule. Rather than letting readers play with the time scale, they should pre-compute the most interesting periods, which, according to the text, are the morning rush, afternoon rush, midday lull, and evening lull.

The trouble with showing every line is that the density of lines reflects the frequency of trains. The rush hours have more trains, so the lines are denser there. This density gradient competes with the steepness of the lines for our attention, and completely overwhelms it.
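
The hourly averaging I have in mind is simple to pre-compute. A sketch in R, where the indexed table of elapsed times is hypothetical (two trips per hour shown):

    library(ggplot2)

    # Elapsed minutes to each stop, each trip indexed to its own departure.
    indexed <- data.frame(
      hour    = rep(c(8, 14), each = 6),       # departure hour of the trip
      dist    = rep(c(0, 5, 10), 4),           # made-up miles from start
      elapsed = c(0, 20, 44,  0, 23, 48,       # two 8 am trips
                  0, 17, 36,  0, 18, 38)       # two 2 pm trips
    )

    # Average the trips within each hour, then draw one line per hour.
    hourly <- aggregate(elapsed ~ hour + dist, data = indexed, FUN = mean)
    ggplot(hourly, aes(x = dist, y = elapsed, linetype = factor(hour))) +
      geom_line() +
      scale_y_reverse()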

***

There really is a lot to savor in this project. You should definitely spend some time reviewing it. Click here.

Also, there is still time to sign up for my NYU chart-making workshop, starting on Saturday. For more information, see here.


Update on Dataviz Workshop 3

My chart-making workshop has passed the point where each participant (except one) has presented the first draft of his or her project, and the class has opined on these efforts. Previously, I posted the syllabus of the course here. Also catch up on previous updates (1, 2).

So far, I am very pleased with the results, and importantly, the students have given the course rave reviews. The in-class discussions have been constructive and civil. In every case, the chart designer went home with a few ideas for improvement. The issues that came up ranged widely. Here are some examples:

  • Figuring out what the message is in the data set
  • Thinking about what other data can be obtained to clarify the message
  • Discussing the level of detail appropriate for a legend
  • Dealing with data with a large number of small values
  • Examining how charts appear to color-blind readers (we have a color-blind student in class)
  • Reducing the complexity of a chart

As the course draws to a close, several students have expressed an interest in keeping the class together via a meetup group or something similar. I'm thinking about how to accomplish this.

One lesson learned so far is that a few students got stuck trying to restructure their data, and were late submitting their work. I should stress that all submissions in the course are works in progress, and perhaps I should offer some data-processing help during the course.

***

The next workshop will be offered in the summer.

PS. Don't miss Andrew Gelman's summary of his graphics tips here.


Update on Dataviz Workshop 2

The class practiced critiquing the famous Wind Map by Fernanda Viegas and Martin Wattenberg.

Windmap

Click here for a real-time version of the map.

I selected this particular project because it is a heartless person indeed who does not see the "beauty" in this thing.

Beauty is a word that is thrown around a lot in data visualization circles. What do we mean by beauty?

***

The discussion was very successful; the most interesting points raised were these:

  • Something that is beautiful should take us to some truth.
  • If we take this same map but corrupt all the data (e.g. reverse all wind directions), is the map still beautiful?
  • What is the "truth" in this map? What is its utility?
  • The emotional side of beauty is separate from the information side.
  • "Truth" comes before the emotional side of beauty.

Readers: I would love to hear what you think.

PS. Click here for the class syllabus. Click here for the first update.


Graph redesign is hot

Joe D., a long-time reader, points us to a few blogs that have been actively creating chart redesigns, similar to what we do here.

First up, here are some examples from Storytelling With Data (link).

This example transformed a grouped bar chart into a line chart, something that I have long advocated. I'm still waiting for the day when market research companies start to switch from bars to lines.

Stwd_Student Makeover 2

***

Jorge Camoes, also a long-time reader, produced a redesign of a chart on military spending first printed in Time magazine. (link)

Redo_militaryspend

Dual-axis plots have been pilloried here often, especially when the two axes carry different and incompatible units, as is the case here. As usual, transforming to a scatter plot is a good first step, which is what Jorge has done. He then connected the dots to indicate the time evolution of the relationship, a smart move precisely because the pattern is so stark.
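
For readers who want to try the connected-scatter idea themselves, here is a minimal sketch in R (ggplot2). The numbers are invented for illustration only; they are not the Time magazine data.

    library(ggplot2)

    # Hypothetical troop and budget figures, one row per year.
    mil <- data.frame(
      year   = seq(1990, 2010, by = 5),
      troops = c(2.1, 1.7, 1.4, 1.4, 1.4),   # millions (made up)
      budget = c(300, 290, 300, 420, 530)    # $ billions (made up)
    )

    # geom_path() joins the points in row order, i.e., in time order here.
    ggplot(mil, aes(x = troops, y = budget)) +
      geom_path() +
      geom_point() +
      geom_text(aes(label = year), vjust = -1, size = 3)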

The chart now illustrates an "inflection point" in 2000. Prior to 2000, troop size was decreasing while the budget was stable. After 2000, the budget increased sharply while troop size remained relatively stable.

Now peer back at the original chart. You can discern the sharp decrease in troop size over time, and the sharp increase in budget over time, but only separately. The chart teases a crossover point around 1995, which turns out to be misleading. This is a great illustration of why dual-axis plots are dangerous.


The state of charting software

Andrew Wheeler took the time to write code (in SPSS) to create the "Scariest Chart ever" (link). I previously wrote about my own attempt to remake the famous chart in grayscale. I complained that this is a chart that is easier to make in the much-maligned Excel paradigm than in a statistical package: "I find it surprising how much work it would be to use standard tools like R to do this."

Andrew disagreed, saying "anyone saavy with a statistical package would call bs". He goes on to do the "Junk Charts challenge," which has two parts: remake the original Calculated Risk chart, and then make the Junk Charts version of it.

I highly recommend reading the post. You'll learn a bit of SPSS and R (ggplot2) syntax, and the philosophy behind these languages. You can compare and contrast different ways of creating the charts, and compare the output of the various programs.

I'll leave you to decide whether the programs he created are easier than Excel.

***

Unfortunately, Andrew skipped over one of the key challenges I envision for anyone trying to tackle this problem. The data set he started with, which he found at the Minneapolis Fed, is post-processed data. (It's a credit to him that he found a more direct source of data.) The Fed data is essentially the spreadsheet that sits behind the Calculated Risk chart. One can just highlight the data and create a plot directly in Excel without any further work.

What I started with was the employment-level data from the BLS. Such data lacks the definition of a recession, that is, the start and end dates of each one. The data also comes in calendar months and years, and transforming that to "months from start of recession" is not straightforward. If we don't want to hard-code the details, i.e., if we keep the definition of a recession flexible and make this a more general application, the challenge is even greater.
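
To make the point concrete, here is a sketch of the indexing step in R. The emp series and the recession start date are placeholders, not the actual BLS data:

    # `emp` stands in for the BLS monthly employment series;
    # `rec_start` is a hand-entered recession start date.
    emp <- data.frame(
      month = seq(as.Date("2007-01-01"), as.Date("2011-12-01"), by = "month"),
      jobs  = 138000 + cumsum(rnorm(60, 0, 150))   # placeholder values
    )
    rec_start <- as.Date("2007-12-01")

    # Index the series to the start of the recession. Because the series
    # is contiguous and monthly, row order gives "months from start".
    slice <- subset(emp, month >= rec_start)
    slice$months_in  <- seq_len(nrow(slice)) - 1
    slice$pct_change <- 100 * (slice$jobs / slice$jobs[1] - 1)
    # Loop over a table of recession start dates for the general version.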

***

Another detail that Andrew skimmed over is the uneven length of the data series. One of the nice things about the Calculated Risk chart is that each line terminates upon reaching the horizontal axis. Even though more data is available for out years, that part of the time series is deemed extraneous to the story. This creates an awkward dataset where some series have, say, 25 values and others only 10. Most software packages will handle this, but extra code must be written, either during data processing or during plotting.
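
In R, for instance, one way to handle the ragged series is to pad the shorter ones with NA during data processing; ggplot2's geom_line() breaks at missing values, so each line simply terminates. A sketch with made-up series:

    library(ggplot2)

    # Two hypothetical recessions of unequal length in one data frame;
    # the shorter series is padded with NA so its line ends early.
    df <- data.frame(
      months_in  = rep(0:6, 2),
      recession  = rep(c("A", "B"), each = 7),
      pct_change = c(0, -2, -4, -5, -3, -1, 0,    # series A: 7 values
                     0, -3, -5, -2, 0, NA, NA)    # series B: 5 values, padded
    )

    ggplot(df, aes(months_in, pct_change, group = recession)) + geom_line()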

By contrast, in Excel, you just leave the cells blank where you want the lines to terminate.

***

In the last section, Andrew did a check on how well the straight lines approximate the real data. You can see that the approximation is extremely good. (The two panels where there seems to be a difference are due to a disagreement between data sources over when the recession started. If you use 1974 instead of 1973, and also follow Calculated Risk's convention of a very short recession in 1980, separate from the 1981 one, then the straight lines match superbly.)

Wheeler_JunkChallenge4

***

I'm the last person to say Excel is the best graphing package out there. That's not the point of my original post. If you're a regular reader, you will notice I make my graphs using various software, including R. I came across a case where I think current software packages are inferior, and would like the community to take notice.


Speaking analytics

(This is a cross-post from my other blog, as it also relates to data graphics.)

I was a guest on the Analytically Speaking series, organized by JMP. In this webcast (link, registration required), I talk about the coexistence of data science and statistics, why my blog is called "Junk Charts", what I look for in an analytics team, the tension between visualization and machine algorithms, two modes of statistical modeling, and other things analytical.


Three lessons from Jobs

I feel like I know Steve Jobs even though I don't know him. I know him through the Apple products I have used through the years.


My first exposure to Apple coincided with coming to the States for college. Before the move, I had only ever used PCs, assembled by my Dad.

Happymac

The first week of college, I found myself in a room of Macintoshes: in those days, they were off-white cubic blocks, slightly smaller than shoeboxes, with black-and-white, low-resolution screens. A "happy Mac" was always there to greet you. It took only 15 or 20 minutes to fall in love. In that time, I figured out how to use a mouse, the difference between single and double clicking, minimizing windows, file directories, and so on. When my friends tell me today that their six-month-old baby can instinctively learn to start a favorite game on the iPad, I believe them. I believe them because I experienced it myself.

By all accounts, Apple products bear the fingerprints of Steve Jobs's dogged vision. His vision offers three important lessons for graphics designers:

1) Never take your eyes off the user experience.

The product is in service of the user. Charts serve readers. What are the key questions readers want answered? How can we meet their needs effortlessly?

2) Maintain the producer's control.

Knowing the user does not mean relinquishing control. Apple products are very tightly designed. The email application on the iPhone works beautifully out of the box but it doesn't try to replicate every feature available online. It doesn't have to. Good graphics are never neutral; their producers have a point of view.

3) Balance form and function.

Detractors often mock Apple for false "innovations": why should a white iPhone cost more than a black one? How can rainbow-colored iPods be considered an innovation? But we all react to beauty, to form. One shouldn't elevate form at the expense of function, but function without form is hardly enough. The same holds for graphics.


The return on effort in data graphics

I contributed the following post to the Statistics Forum. They are having a discussion comparing information visualization and statistical graphics. I use the following matrix to classify charts in terms of how much work they make readers do, and how much value readers get out of doing said work.

Returnoneffort

To read the rest of it, click here.


Have data graphics progressed in the last century?

Received a wonderful link via reader Lonnie P. to this website, which presents a historical reconstruction of W.E.B. DuBois's exhibit on the "American negro" at the 1900 Paris Expo. Remarkably, DuBois presented a large series of data graphics to educate the world on the state (plight) of blacks in America over a century ago.

You can easily spend a whole afternoon examining these charts (and more); too bad the images are low-resolution, and it is often hard to make out the details.

***

Judging from this evidence, we must face up to the fact that data graphics have made little progress during these eleven decades. Ideas, good or bad, get reinvented. Disappointingly, we haven't learned from the worst ones.

Exhibit A

Dubois_a

(see discussion here)

Exhibit B

Dubois_b

(see discussion here)

Exhibit C

Dubois_c

(see discussion here)

Exhibit D

Dubois_dd

(see the Vampire chart here)

Exhibit E

Dubois_e

(see the discussion here)

Exhibit F

Dubois_f

(see discussion here)


Audio bookmarks

I watch a fair number of online videos, especially those embedded on blogs, but I haven't seen this feature implemented widely. It is a wow feature.

Look at the dots above the progress bar: they tell you what topic is being discussed and allow you to jump back and forth between segments. (The particular dot I moused over said "Randy Moss.") The video I saw came from this link.

Audio_bookmarks2

This simple-looking feature is immensely useful to users. You can efficiently search through the audio file and find the segments you're interested in. It's like bookmarks students might put on pages of a textbook for easy reference, except these are audio bookmarks.

Why isn't this feature more prevalent? I think it's because of the amount of manual effort needed to set it up. Consider how the data has to be processed. The audio file is just a stream of bits (ones and zeros); topics cannot be read off data stored in that form. So someone has to listen to the audio file, mark off the segments manually, and tag them. Then the audio bookmarks can be plotted on the progress bar: basically a dot plot with time on the horizontal axis.
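
The data behind such a feature could be as simple as a hand-entered table of (timestamp, topic) pairs, and the dot plot a few lines of code. A sketch in R (ggplot2), with hypothetical segments:

    library(ggplot2)

    # A hand-entered table of segments; someone has to listen to the
    # audio and type these in. All values here are hypothetical.
    bookmarks <- data.frame(
      seconds = c(15, 140, 310, 545),
      topic   = c("Intro", "Randy Moss", "Playoffs", "Q&A")
    )

    # The progress bar as a dot plot: one labeled dot per segment.
    ggplot(bookmarks, aes(x = seconds, y = 0)) +
      geom_point(size = 3) +
      geom_text(aes(label = topic), vjust = -1.5, size = 3) +
      scale_y_continuous(limits = c(-1, 1), breaks = NULL)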

In theory, you can train a computer to listen to the audio and approximate this task. The challenge is attaining enough accuracy that you don't need an army of people correcting mistakes.

A very simple concept but immensely functional. Great job!