Two nice examples of interactivity

Janie on Twitter pointed me to this South China Morning Post graphic showing off the mighty train line just launched between north China and London (!)

Scmp_chinalondonrail

Scrolling down the page simulates the train ride from origin to destination. Pictures of key regions are shown on the left column, as well as some statistics and other related information.

The interactivity has a clear purpose: facilitating cross-reference between two chart forms.

The graphic contains a little oversight ... The label for the key city of Xian, referenced on the map, is missing from the elevation chart on the left here:

Scmp_chinalondonrail_xian

 ***

I also like the way the New York Times handled interactivity in this chart showing the rise in global surface temperature since the 1900s. The accompanying article is here.

Nyt_surfacetemp

When the graph is loaded, the dots get printed from left to right. That's an attention grabber.

Further, when the dots settle, some years sink into the background, leaving the orange dots that show the years without the El Niño effect. The reader can use the toggle under the chart title to view all of the years.

This configuration is unusual. It's more common to show all the data and allow readers to toggle between subsets. By inverting this convention, the designers make it likely that few readers will need to hit the toggle: the key message of the story concerns the years without El Niño, and that is where the graphic focuses.

This is interactivity that succeeds by not getting in the way. 


Excellent visualization of gun violence in American cities

I like the Guardian's feature (undated) on gun violence in American cities a lot.

The following graphic illustrates the situation in Baltimore.

Guardian_gunviolence_baltimore

The designer starts by plotting where the gun homicides occurred in 2015. Then the graphic leads readers through an exploration of the key factors that might be associated with the spatial distribution of those homicides.

The blue color measures poverty levels. There is a moderate correlation between high numbers of dots (homicides) and deeper blue (poorer). The magenta color measures educational attainment and the orange color measures the proportion of black residents. In Baltimore, it appears that race is substantially better at explaining the prevalence of homicides.

This work is exemplary because it transcends description (first map) and explores explanations for the spatial pattern. Because three factors are explored together in a small-multiples layout, readers learn that no single factor can explain everything. In addition, we learn that different factors have different degrees of explanatory power.

Attentive readers will also find that the three factors of poverty, educational attainment and proportion black are mutually correlated. Areas with large black populations also tend to be poorer and less educated.

***

I also like the introductory section in which a little dose of interactivity is used to sequentially present the four maps, now superimposed. It then becomes possible to comprehend the rest quickly.

Guardian_guncrimemaps_stlouis_2

 ***

The top section is less successful as proportions are not easily conveyed via dot density maps.

Guardian_guncrime_map_prop

Dropping the map form helps. Here is a draft of what I have in mind. I pulled some data from online sources at the metropolitan area (MSA) level; the comparison does not appear as striking as with the city-level data, it seems.

Redo_guardiangundeathsprop

 

P.S. On Twitter, Aliza tells me the article was dated January 9, 2017.


Storm story, a masterpiece

The visual story published by the New York Times on hurricane Irma is a masterpiece. See the presentation here.

The story starts with the standard presentation of the trajectories of past hurricanes on a map:

Nyt_irma_map

Maps are great at conveying location and direction but much is lost in this rendering - wind speeds, time, strength, energy, to name but a few.

The Times then switches to other chart forms to convey some of the other data. A line chart is used to convey the strength of wind speeds as the storms snake through the Atlantic. Some kind of approximation is used to straighten the trajectories along an east-west orientation.

Nyt_irma_notime

The key insight here is how strong Irma was pretty far out in the Atlantic. The lines in the background can be brought to life by clicking on them. This view omits some details: the passage of time is ignored, and location has been reduced to one dimension.

The display then switches again, and this time it shows time and wind speed.

Nyt_irma_nolocation

This shows Irma's strength, sustaining Category 5 level winds for three days. This line chart ignores location completely.

Finally, a composite metric called cyclone energy is introduced.

Nyt_irma_energy

This chart also ignores location, but it does show Irma as a special storm. The storm that reached the maximum energy by far is Ivan. Will Irma beat that standard? I am not so sure.

Each chart form has limitations. The use of multiple charts helps convey a story from multiple perspectives. A very nice example indeed.

 


A pretty good chart ruined by some naive analysis

The following chart showing wage gaps by gender among U.S. physicians was sent to me via Twitter:

Statnews_physicianwages

The original chart was published by the Stat News website (link).

I am most curious about the source of the data. It apparently came from a website called Doximity, which collects data from physicians. Here is a link to the PR release related to this compensation dataset. However, the data are not freely available. There is a claim that the data come from self-reports by 36,000 physicians.

I am not sure whether I trust this data. For example:

Stat_wagegapdoctor_1

Do I believe that physicians in North Dakota earn the highest salaries on average in the nation? And not only that, they earn almost 30% more than the average physician in New York. Does the average physician in ND really earn over $400K a year? If you are wondering, the second highest salary number comes from South Dakota. And then Idaho.  Also, these high-salary states are correlated with the lowest gender wage gaps.

I suspect that sample size is an issue. They do not report sample sizes at the level of their analyses. They apparently published statistics at the level of MSAs. There are roughly 400 MSAs in the U.S., so at that level they have, on average, only 90 respondents per MSA. When split by gender, the average sample size is under 50. Since they are comparing differences, we should see the standard errors. And because they are making hundreds of such comparisons, some kind of multiple-comparisons correction is needed.
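To make the concern concrete, here is a back-of-envelope calculation in Python. The respondent and MSA counts are the approximate figures quoted above; the $100K standard deviation of physician pay is a made-up number, used only for illustration.

```python
# Back-of-envelope check on the sample sizes implied by the Doximity data.
# The counts are the approximations quoted in the text; the salary standard
# deviation is a hypothetical figure.

n_respondents = 36_000
n_msas = 400

per_msa = n_respondents / n_msas          # average respondents per MSA
per_gender = per_msa / 2                  # after splitting by gender

# Standard error of a difference in mean salaries between two groups of
# this size, assuming a (hypothetical) $100K standard deviation in pay.
sd_salary = 100_000
se_diff = (sd_salary**2 / per_gender + sd_salary**2 / per_gender) ** 0.5

# With hundreds of MSA-level comparisons, a Bonferroni correction shrinks
# the per-comparison significance threshold accordingly.
alpha = 0.05
alpha_corrected = alpha / n_msas

print(per_msa)            # 90.0 respondents per MSA
print(per_gender)         # 45.0 per gender per MSA
print(round(se_diff))     # 21082 -- a ~$21K standard error on each gap
print(alpha_corrected)    # 0.000125
```

Even under these generous assumptions, the standard error on a single MSA-level gender gap exceeds $20K, before any multiple-comparisons correction is applied.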

I am pretty sure some of you are doctors, or work in health care. Do those salary numbers make sense? Are you moving to North/South Dakota?

***

Turning to the Visual corner of the Trifecta Checkup (link), I have a mixed verdict. The hover-over effect showing the precise values on either axis is a nice idea, well executed.

I don't see the point of drawing a circle inside a circle. The wage gap is already on the vertical axis, and the redundant representation in dual circles adds nothing to it. Because of this construct, the size of the bubbles now encodes the male average salary, taking attention away from the gender gap, which is the point of the chart.

I also don't think the regional analysis (conveyed by the colors of the bubbles) is producing a story line.

***

This is another instance of a dubious analysis in this "big data" era. The analyst makes no attempt to correct for self-reporting bias, and works as if the dataset is complete. There is no indication of any concern about sample sizes, after the analyst drills down to finer areas of the dataset. While there are other variables available, such as specialty, and other variables that can be merged in, such as income levels, all of which may explain at least a portion of the gender wage gap, no attempt has been made to incorporate other factors. We are stuck with a bivariate analysis that does not control for any other factors.

Last but not least, the analyst draws a bold conclusion from the overly simplistic analysis. Here, we are told: "If you want that big money, you can't be a woman." (link)

 

P.S. The Stat News article reports that the researchers at Doximity claimed that they controlled for "hours worked and other factors that might explain the wage gap." However, in Doximity's own report, there is no language confirming how they included the controls.

 


Attractive, interactive graphic challenges lazy readers

The New York Times spent a lot of effort making a nice interactive graphical feature to accompany their story about Uber's attempt to manipulate its drivers. The article is here. Below is a static screenshot of one of the graphics.

Nytimes_uber_simulation

The illustrative map at the bottom is exquisite. It has Uber cars driving around, it has passengers waiting at street corners, the cars pick up passengers, new passengers appear, etc. There are also certain oddities: all the cars go at the same speed, some strange things happen when cars visually run into each other, etc.

This interactive feature is mostly concerned with entertainment. I don't think it is possible to infer either of the two metrics listed above the chart by staring at the moving Uber cars. The metrics are the percentage of Uber drivers who are idle and the average number of minutes that a passenger waits. Those two metrics are crucial to understanding the operational problem facing Uber planners. You can increase the number of Uber cars on the road to reduce average waiting time but the trade-off is a higher idle rate among drivers.
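The trade-off between the two metrics can be illustrated with a standard M/M/c queueing model. This is my own sketch, not the Times's actual simulation, and the arrival rate and trip duration are invented numbers:

```python
import math

def erlang_c(arrival_rate, service_rate, servers):
    """Probability that an arriving passenger must wait (Erlang C formula)."""
    a = arrival_rate / service_rate          # offered load
    rho = a / servers                        # utilization per driver
    if rho >= 1:
        return 1.0                           # demand exceeds capacity
    summation = sum(a**k / math.factorial(k) for k in range(servers))
    top = a**servers / math.factorial(servers) * (1 / (1 - rho))
    return top / (summation + top)

def metrics(arrival_rate, service_rate, servers):
    """Return (average passenger wait in minutes, driver idle rate)."""
    rho = arrival_rate / (servers * service_rate)
    p_wait = erlang_c(arrival_rate, service_rate, servers)
    avg_wait = p_wait / (servers * service_rate - arrival_rate)
    return avg_wait, 1 - rho

# Hypothetical numbers: 2 ride requests per minute; each trip takes 10 minutes.
for drivers in (22, 30, 40):
    wait, idle = metrics(2.0, 1 / 10, drivers)
    print(drivers, round(wait, 2), round(idle, 2))
```

Running this shows average waiting time falling and the idle rate rising as drivers are added, which is exactly the tension between the two metrics above the chart.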

***

One of the key trends in interactive graphics at the Times is simplification. While a lot of things are happening behind the scenes, there is only one interactive control. The only thing the reader can control is the number of drivers in the grid.

The Times is one of the greatest producers of interactive graphics, so I trust that they know what they are doing. In fact, this article describes some comments made by Gregor Aisch, who works at the Times. The gist is: very few readers play with their interactive graphics. Someone else said, "If you make a tooltip or rollover, assume no one will ever see it." I have also heard someone say (I hope this is not merely a voice in my own head): "For every extra button or knob you place on the graphic, you lose another batch of readers." This might be called the law of the interactive knob, analogous to the law of the printed equation in popular book publishing, which stipulates that for every additional equation you print in a book, you lose another batch of readers.

(Note, however, that we are talking about graphics for communications here, not exploratory graphics.)

***

Several years ago, I introduced the concept of "return on effort" in this blog post. Most interactive graphics are high effort to produce. The question is whether there is enough reward for the readers. 

Junkcharts_return_on_effort_matrix


An enjoyable romp through the movies

Chris P. tipped me about this wonderful webpage containing an analysis of high-grossing movies. The direct link is here.

First, a Trifecta checkup: This thoughtful web project integrates beautifully rendered, clearly articulated graphics with the commendable objective of bringing data to the conversation about gender and race issues in Hollywood, an ambitious goal that it falls short of achieving because the data only marginally address the question at hand.

There is some intriguing just-beneath-the-surface interplay between the Q (question) and D (data) corners of the Trifecta, which I will get to in the lower half of this post. But first, let me talk about the Visual aspect of the project, which for the most part, I thought, was well executed.

The leading chart is simple and clear, setting the tone for the piece:

Polygraphfilm_bars

I like the use of color here. The colored chart titles are inspired. I also like the double color coding: notice that the proportion data are coded not just in the lengths of the bar segments but also in the opacity. There is some messiness in the right-hand-side labeling of the first chart, but it is probably just a bug.

This next chart also contains a minor delight: upon scrolling to the following dot plot, the reader finds that one of the dots has been labeled; this is a signal to readers that they can click on the dots to reveal the "tooltips". It's a little thing but it makes a world of difference.

Polygraphfilm_dotplotwithlabel

I also enjoy the following re-imagination of those proportional bar charts from above:

Polygraphfilm_tinmen_bars

This form fits well with the underlying data structure (a good example of setting the V and the D in harmony). The chart shows the proportion of words spoken by male versus female actors over the course of a single movie (Tin Men from 1987 is the example shown here). The chart is centered in an unusual way, making it easy to see exactly when the females are allowed to have their say.

There is again a possible labeling hiccup. The middle label says 40th minute, which would imply the entire movie is only 80 minutes long. (A quick check shows Tin Men is 110 minutes long.) It seems that they are only concerned with dialog, ignoring all moments of soundtrack or silence. The visualization would be even more interesting if those non-dialog moments were presented.

***

The reason why the music and silence are missing has more to do with practicality than will. The raw materials (Data) used are movie scripts. The authors, much to their credit, acknowledge many of the problems that come with this data, starting with the fact that directors make edits to the scripts. It is also not clear how to locate each line along the duration of the movie. An assumption about the speed of dialog seems to be required.
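One plausible version of that assumption, sketched in Python: spread the dialog along the timeline in proportion to cumulative word count, at an assumed constant speaking rate. The 130 words-per-minute figure and the snippet of dialog are both invented for illustration.

```python
# Rough placement of script lines on a movie timeline, assuming a constant
# speaking rate. Both the rate and the sample dialog are hypothetical.

WORDS_PER_MINUTE = 130

def estimate_timestamps(lines):
    """Given (speaker, text) pairs, return (speaker, estimated minute) pairs."""
    out, words_so_far = [], 0
    for speaker, text in lines:
        out.append((speaker, words_so_far / WORDS_PER_MINUTE))
        words_so_far += len(text.split())
    return out

script = [
    ("BB", "Did you see that guy's car?"),
    ("TILLEY", "What about it?"),
    ("BB", "Brand new. Right off the lot."),
]
print(estimate_timestamps(script))
```

Any stretch of music or silence simply vanishes under this scheme, which is consistent with the compressed timeline seen in the chart.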

I have now moved to the Q corner of the Trifecta Checkup. The article is motivated by the #OscarsSoWhite controversy from a year or two ago, although by the second paragraph, the race angle has already been dropped in favor of gender; by the end of the project, readers will have learned about ageism as well, but the issue of race never returns. Race doesn't come back because race is not easily discerned from a movie script, nor is it clearly labeled in a resource such as IMDB. So the designers provided a better solution to a lesser problem, instead of a lesser solution to a better problem.

In the last part of the project, the authors tackle ageism. Here we find another pretty picture:

Polygraphfilm_ageanalysis

At the high level, the histograms tell us that movie producers prefer younger actresses (in their 20s) and middle-aged actors (forties and fifties). It is certainly not my experience that movies have a surplus of older male characters. But one must be very careful interpreting this analysis.

The importance of actors and actresses is being measured by the number of words in the scripts while the ages being analyzed are the real ages of the actors and actresses, not the ages of the characters they are playing.

Tom Cruise is still making action movies, and he's playing characters much younger than he is. A more direct question to ask here is: does Hollywood prefer to put younger rather than older characters on screen?

Since the raw data are movie scripts, the authors took the character names, and translated those to real actors and actresses via IMDB, and then obtained their ages as listed on IMDB. This is the standard "scrape-and-merge" method executed by newsrooms everywhere in the name of data journalism. It often creates data that are only marginally relevant to the problem.
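In miniature, the scrape-and-merge step looks something like the following. The cast table and ages here are invented stand-ins; the real pipeline would scrape them from IMDB.

```python
# Miniature version of the scrape-and-merge pipeline described above.
# Both lookup tables are hypothetical.

actor_age = {"Richard Dreyfuss": 39, "Danny DeVito": 42}   # invented ages
cast = {"BB": "Richard Dreyfuss", "TILLEY": "Danny DeVito"}

def character_actor_age(character):
    """Merge a script character name to the real actor's age, if matchable."""
    actor = cast.get(character)
    return actor_age.get(actor) if actor else None

print(character_actor_age("BB"))      # 39
print(character_actor_age("MOE"))     # None -- unmatched names silently drop out
```

Note the two weaknesses baked into the method: unmatched character names silently drop out, and the merged ages are the actors' real ages, not the ages of the characters they play.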



The state of the art of interactive graphics

Scott Klein's team at ProPublica published a worthy news application called "Hell and High Water" (link). I took some time taking in the experience. It's a project that needs room to breathe.

The setting is Houston, Texas, and the subject is what happens when the next big hurricane hits the region. The reference point is Hurricane Ike, which hit Galveston in 2008.

This image shows the depth of flooding at the height of the disaster in 2008.

Propublica_galveston1

The app takes readers through multiple scenarios. This next image depicts what would happen (according to simulations) if a storm similar to Ike, but with 15 percent stronger winds, were to hit Galveston.

Propublica_galveston2plus

One can also speculate about what might happen if the so-called "Mid Bay" solution is implemented:

Propublica_midbay_sol

This solution is estimated to cost about $3 billion.

***

I am drawn to this project because the designers liberally use some things I praised in my summer talk at the Data Meets Viz conference in Germany.

Here is an example of hover-overs used to annotate text. (My mouse is on the words "Nassau Bay" at the bottom of the paragraph. Much of the Bay would be submerged at the height of this scenario.)

Propublica_nassaubay2

The design has a keen awareness of foreground/background issues. The map uses sparse static labels, indicating the most important landmarks. All other labels are hidden unless the reader hovers over specific words in the text.

I think plotting population density would have been more impactful. With the current set of labels, the perspective is focused on business and institutional impact. I think there is a missed opportunity to highlight the human impact. This can be achieved by coding population density into the map colors. I believe the colors on the map currently represent terrain.

***

This is a successful interactive project. The technical feats are impressive (read more about them here). A lot of research went into the articles; huge amounts of details are included in the maps. A narrative flow was carefully constructed, and the linkage between the text and the graphics is among the best I've seen.


Rethinking the index data, with modesty and clarity in mind

I discussed the rose chart used in the Environmental Performance Index (EPI) report last week. This type of data is always challenging to visualize.

One should start with an objective. If the goal is a data dump, that is to say, all you want is to deliver the raw data in its full glory to the user, then you should just print a set of data tables. This has traditionally been the delivery mechanism of choice.

If, on the other hand, your interest is communicating insights, then you need to ask some interesting questions. One such question is how do different regions and/or countries compare with each other, not just in the overall index but also in the major sub-indices?

Learning to ask such a question requires first understanding the structure of the data. As described in the previous post, the EPI is a weighted average of a bunch of sub-indices. Each sub-index measures "distance to a target," which is then converted into a scale from 0 to 100. This formula guarantees that at the aggregate level, the EPI is not going to be 0 or 100: a country would have to score 100 on all sub-indices to attain EPI perfection!
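A minimal sketch of that structure, with invented weights and scores (the real EPI uses different weights and many more indicators):

```python
# Sketch of the index structure described above. Weights and scores are
# hypothetical; the real EPI methodology differs in detail.

def distance_to_target_score(value, worst, target):
    """Convert a raw indicator to a 0-100 'distance to target' score."""
    score = (value - worst) / (target - worst) * 100
    return max(0.0, min(100.0, score))

def epi(scores, weights):
    """Weighted average of sub-index scores."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(s * w for s, w in zip(scores, weights))

scores = [80.0, 55.0, 100.0]         # three hypothetical sub-indices
weights = [0.5, 0.3, 0.2]
print(epi(scores, weights))          # 80*0.5 + 55*0.3 + 100*0.2 = 76.5

# Perfection requires 100 on every sub-index:
print(epi([100.0] * 3, weights))
```

Because the aggregate is a weighted average, it reaches 100 only when every sub-index is 100 (and 0 only when every sub-index is 0), which is why real-world EPI values cluster well inside the extremes.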

Here is a design sketch to address the question posed above:

Redo_epi_regional

For a print version, I chose several reference countries listed at the bottom that span the range of common values. In the final product, hovering over a stripe should disclose a country and its EPI. Then the reader can construct comparisons of the type: "Thailand has a value of 53, which places it between Brazil and China."

The chart reveals a number of insights. Each region stakes out its territory within the EPI scale. There are no European countries with EPI lower than 45 while there are no South Asian countries with EPI higher than 50 or so. Within each region, the distribution is very wide, and particularly so in the East Asia and Pacific region. Europe is clearly the leading region, followed by North America.

The same format can be replicated for every sub-index.

This type of graph addresses a subset of all possible questions, and it does so in a clear way. Modesty in your goals often helps.

 


I try hard to not hate all hover-overs. Here is one I love

One of the smart things Noah (at WNYC) showed to my class was his NFL fan map, based on Facebook data.

This is the "home" of the visualization:

Veltman_nfl_home

The fun starts by clicking around. Here are the Green Bay fans on Facebook:

Veltman_nfl_greenbay

Also, you can see these fans relative to other teams in the same division:

Veltman_nfl_afcnorth

A team like Jacksonville has a tiny footprint:

Veltman_nfl_jags

 

What makes this visualization work?

Notice the "home" image and those straight black lines. They are the "natural" regions of influence, if you assume that all fans root for the team that they are physically closest to.

To appreciate this, you have to look at a more generic NFL fan map (this is one from Deadspin):

Deadspin_nfl_fans

This map is informative, but not as informative as it ought to be. The reference points provided here are the state boundaries, yet we don't have one NFL team per state. The "Voronoi" boundaries Noah added are more reasonable reference points against which to compare the Facebook fan data.
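The Voronoi idea is simple to state in code: assign every location to its nearest team, and the black boundary lines fall out wherever the nearest team changes. A minimal sketch, using rough city-center coordinates and a flat-earth distance that is only good for illustration:

```python
# Nearest-team (Voronoi) assignment. Coordinates are rough (lat, lon)
# city centers, chosen for illustration only.

TEAMS = {
    "Green Bay": (44.50, -88.02),
    "Chicago":   (41.88, -87.63),
    "Minnesota": (44.98, -93.26),
    "Detroit":   (42.33, -83.05),
}

def closest_team(lat, lon):
    """Return the team whose home city is nearest (flat-earth approximation)."""
    def dist2(home):
        hlat, hlon = home
        return (lat - hlat) ** 2 + (lon - hlon) ** 2
    return min(TEAMS, key=lambda team: dist2(TEAMS[team]))

# A point just west of Green Bay falls in the Packers' region:
print(closest_team(44.40, -88.40))   # Green Bay
# A point in Chicago's northern suburbs falls in the Bears' region:
print(closest_team(42.00, -87.90))   # Chicago
```

The Facebook map is interesting precisely where actual fandom deviates from this nearest-team baseline.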

When looking at the fan map, the most important question is: what is each team's region of influence? This work reminds me of what I wrote before about the Beer Map (link). Putting all beer labels (or NFL teams) onto the same map makes it hard to get quick answers to that question. A small-multiples presentation is more direct, as the reader can see the brands/teams one at a time.

Here, Noah makes use of interactivity to present these small multiples on the same surface. It's harder to compare multiple teams but that is a secondary question. He does have two additions in case readers want to compare multiple teams. If you click instead of mousing over a team, the team's area of influence sticks around. Also, he created tabs so you can compare teams within each division.

I usually hate hover-over effects. They often hide things that readers want (creating what Noah calls "scavenger hunts"). The hover-over effect is used masterfully here to organize the reader's consumption of the data.

***

Moving to the D corner of the Trifecta checkup. Here is Noah's comment on the data:

Facebook likes are far from a perfect method for measuring NFL fandom. In sparsely-populated areas of the country, counties are likely to have a very small sample size. People who like things on Facebook are also not a perfect cross-section of football fans (they probably skew younger, for example). Other data sources that could be used as proxies for fan interest (but are subject to their own biases) are things like: home game attendance, merchandise sales, TV ratings, or volume of tweets about a team.


Visualizing survey results excellently

Surveys generate a lot of data. And, if you have used a survey vendor, you know they generate a ton of charts.

I was in Germany to attend the Data Meets Viz workshop organized by Antony Unwin. Paul and Sascha from Zeit Online presented some of their work at the German publication, and I was highly impressed by this effort to visualize survey results. (I hope the link works for you. I found that the "scroll" fails on some platforms.)

The survey questions attempted to assess the gap between West and East Germans 25 years after reunification.

The best feature of this presentation is the maintenance of one chart form throughout. This is the general format:

Zeit_workingmum_all

 

The survey asks whether it is a good thing for mothers to work. The designers chose to plot how the percent agreeing that working mothers are a good thing changes over time. The blue line represents the East German average and the yellow line the West German average. There is a big gap in attitude between the two sides on this issue, although both regions have experienced an increase in acceptance of working mothers over time.

All the other lines in the background indicate different subgroups of interest. These subgroups are accessible via the tabs on top. They include gender, education level, and age.

The little red "i" conceals some text explaining the insight from this chart.

Hovering over the "Men" tab leads to the following visual:

Zeit_workingmum_men

Both lines for men sit under the respective average but the shape is roughly the same. (Clicking on the tab highlights the two lines for men while moving the aggregate lines to the background.)

The Zeit team really does an amazing job keeping this chart clean while still answering a variety of questions.

They did make an important choice: not to put every number on this chart. We don't see the percent disagreeing or those who are ambivalent or chose not to answer the question.

***

As I said before, what makes this set of charts work is the seamless transition between one question and the next. Every question is given the same graphical treatment. This eliminates learning time going from one chart to the next.

Here is one using a Likert scale, and accordingly, the vertical axis goes from 1 to 7. They plotted the average score within each subgroup and the overall average:

Zeit_trustparliament

Here is one where they combined the top categories into a "Bottom 2 Box" type metric:

Zeit_smoking

***

Finally, I appreciate the nice touch of adding tooltips to the series of dots used to aid navigation.

Zeit_dotnavigation

The theme of the workshop was interactive graphics. This effort by the Zeit team is one of the best I have seen. Market researchers take note!