
Storm story, a masterpiece

The visual story published by the New York Times on Hurricane Irma is a masterpiece. See the presentation here.

The story starts with the standard presentation of the trajectories of past hurricanes on a map:

Nyt_irma_map

Maps are great at conveying location and direction but much is lost in this rendering - wind speeds, time, strength, energy, to name but a few.

The Times then switches to other chart forms to convey some of the other data. A line chart is used to convey the strength of wind speeds as the storms sweep across the Atlantic. Some kind of approximation is used to straighten the trajectories along an east-west orientation.

Nyt_irma_notime

The key insight here is how strong Irma was pretty far out in the Atlantic. The lines in the background can be brought to life by clicking on them. This view omits some details - the passage of time is ignored, and location has been reduced to one dimension.

The display then switches again, and this time it shows time and wind speed.

Nyt_irma_nolocation

This shows Irma's strength, sustaining Category 5 level winds for three days. This line chart ignores location completely.

Finally, a composite metric called cyclone energy is introduced.

Nyt_irma_energy

This chart also ignores location. It does show Irma as a special storm. The storm that has reached the maximum energy by far is Ivan. Will Irma beat that standard? I am not so sure.
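The composite metric here is presumably accumulated cyclone energy (ACE), which sums the squares of a storm's six-hourly maximum sustained winds. A minimal sketch of that calculation, using invented wind readings rather than any storm's actual record:

```python
# Sketch of Accumulated Cyclone Energy (ACE). The wind readings
# below are hypothetical, for illustration only.

def accumulated_cyclone_energy(winds_kt):
    """ACE = 1e-4 * sum of squared six-hourly max sustained winds
    (in knots), counting only readings at tropical-storm strength
    (>= 34 kt)."""
    return 1e-4 * sum(v ** 2 for v in winds_kt if v >= 34)

# Hypothetical six-hourly observations for a short-lived storm:
obs = [30, 40, 65, 90, 120, 100, 60, 30]
print(round(accumulated_cyclone_energy(obs), 1))  # 4.2
```

Because the metric accumulates over a storm's lifetime, a long-lived moderate storm can out-score a brief but intense one - which is why a days-long Category 5 storm like Irma climbs this chart so quickly.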

Each chart form has limitations. The use of multiple charts helps convey a story from multiple perspectives. A very nice example indeed.

 


Report from the NBA Hackathon 2017

Yesterday, I had the honor of being one of the judges at the NBA Hackathon. This is the second edition of the Hackathon, organized by the NBA League Office's analytics department in New York. Here is Director of Basketball Analytics, Jason Rosenfeld, speaking to the crowd:

IMG_7112s_jr

The event was a huge draw - lots of mostly young basketball enthusiasts trying their hands at manipulating and analyzing data to solve interesting problems. I heard there were over 50 teams who showed up on "game day." Hundreds more applicants did not get "drafted." Many competitors came from out of town - amongst the finalists, there was a team from Toronto and one from Palo Alto.

The competition was divided into two tracks: basketball analytics, and business analytics. Those in the basketball track were challenged with problems of interest to coaches and managers. For example, they were asked to suggest a rule change that might increase excitement in the game, and support that recommendation using the voluminous spatial data. Some of these problems are hard: one involves projecting shot selection ten years out - surely fans want to know if the craze over 3-pointers will last. Nate Silver was one of the judges for the basketball analytics competition.

I was part of the business analytics judging panel, along with the fine folks shown below:

IMG_7247s_judges

The business problems are challenging as well, and really tested the competitors' judgment, as the problems are open-ended and subjective. Technical skills are also required, as very wide-ranging datasets are made available. One problem asks contestants to combine a wide number of datasets to derive a holistic way to measure "entertainment value" of a game. The other problem is even more open: do something useful and interesting with our customer files.

I visited the venue the night before, when the teams were busy digging into the data. See the energy in the room here:

IMG_7110s_work

The competitors are given 24 hours to work on the datasets. This time includes making a presentation to showcase what they have found. They are not allowed to utilize old code. I overheard several conversations between contestants and the coaches - it appeared that the datasets are in a relatively raw state, meaning quite a bit of time would have been spent organizing, exploring, cleaning and processing the data.

One of the finalist teams in the business competition opened their presentation by telling the judges they had spent 12 hours processing their datasets. It does often seem like as analysts, we are fighting with our data.

IMG_7250s_team2

This team from Toronto wrestled with the various sets of customer-indexed data, and came up with a customer segmentation scheme. They utilized a variety of advanced modeling techniques.

The other two finalists in the business competition tackled the same problem: how to measure entertainment value of a game. Their approaches were broadly similar, with each team deploying a hierarchy of regression models. Each model measures a particular contributor to entertainment value, and contains a number of indicators to predict the contribution.

Pictured below is one of the finalists, who deployed Lasso regression, a modern technique to select a subset of important factors from a large number of possibilities. This team has a nice handle on the methods, and notably, was the only team that presented error bars, showing the degree of uncertainty in their results.

IMG_7252s_team3
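For readers unfamiliar with the technique, Lasso's ability to select a subset of factors comes from its soft-thresholding update, which shrinks weak coefficients exactly to zero. Here is a minimal plain-Python sketch on toy data - not the team's model or data, just an illustration of the mechanism:

```python
# Illustrative coordinate-descent Lasso (no intercept, toy data).
# Real analyses would use a library such as scikit-learn.

def soft_threshold(rho, lam):
    """The operator at the heart of Lasso: coefficients whose
    signal falls below the penalty lam are set exactly to 0."""
    if rho > lam:
        return rho - lam
    if rho < -lam:
        return rho + lam
    return 0.0

def lasso_coordinate_descent(X, y, lam, n_iter=200):
    """Fit y ~ X @ beta with an L1 penalty of strength lam."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # correlation of feature j with the partial residual
            rho = sum(X[i][j] * (y[i] - sum(X[i][k] * beta[k]
                      for k in range(p) if k != j)) for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            beta[j] = soft_threshold(rho, lam) / z
    return beta

# Toy data: y depends on the first feature only; the second is noise.
X = [[1.0, 0.1], [2.0, -0.2], [3.0, 0.05], [4.0, 0.15]]
y = [2.0, 4.1, 5.9, 8.0]
print(lasso_coordinate_descent(X, y, lam=1.0))
```

On this toy input the noise feature's coefficient lands exactly at zero - the "selection" that makes Lasso attractive when hunting for important factors among many candidates.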

The winning team in the business competition went a couple of steps beyond. First, they turned in a visual interface to a decision-making tool that scores every game according to their definition of entertainment value. I surmise that they also expressed these scores in a relative way, because some of their charts show positive and negative values. Second, this team from Princeton realized the importance of tying all their regression models together into a composite score. They even allow the decision makers to shift the component weights around. Congratulations to Data Buckets! Here is the pair presenting their decision-making tool:

IMG_7249s_databuckets
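The weighted-composite idea can be sketched in a few lines. The component names, values, and weights below are invented for illustration; they are not Team Data Buckets' actual model:

```python
# Sketch of a weighted composite "entertainment score" with
# user-adjustable weights (all names and numbers hypothetical).

def composite_score(components, weights):
    """Weighted sum of per-component model outputs; weights are
    re-normalized so decision makers can shift them freely."""
    total_w = sum(weights.values())
    return sum(components[k] * w / total_w for k, w in weights.items())

game = {"closeness": 0.8, "star_power": 0.6, "pace": 0.4}
weights = {"closeness": 2.0, "star_power": 1.0, "pace": 1.0}
print(round(composite_score(game, weights), 2))  # 0.65
```

Re-normalizing inside the function is what lets a decision maker crank one weight up or down without the overall scores drifting out of range.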

Mark Tatum, deputy commissioner of the NBA League Office, presented the award to Team Data Buckets:

IMG_7279s_winner

These two are also bloggers. Look here.

After much deliberation, the basketball analytics judges liked the team representing the Stanford Sports Analytics Club.

IMG_7281s_winner

These guys tackled the very complicated problem of forecasting future trends in shot selection, using historical data.

For many, maybe most, of the participants, this was their first exposure to real-world datasets, and a short time window to deliver an end-product. Also, they must have learned quite a bit about collaboration.

The organizers should be congratulated for putting together a smoothly-run event. When you host a hackathon, you have to be around throughout the night as well. Also, the analytics department staff kindly simplified the lives of us judges by performing the first round of selection overnight.

***

Last but not least, I would like to present the unofficial Best Data Graphics Award to the team known as Quire Sultans. They were a finalist in the basketball analytics contest. I am impressed with this display:

IMG_7259s_bestchart

This team presented a new metric using data on passing. The three charts are linked. The first one shows passer-receiver pairs within a specific game; the second shows locations on the court for which passes have more favorable outcomes; the third chart measures players' over/under performance against a model.

There were quite a few graphics presented at the competition. This is one of the few in which the labels were carefully chosen and easily understood, without requiring in-depth knowledge about their analysis.


Getting into the head of the chart designer

When I look at this chart (from Business Insider), I try to understand the decisions made by its designer - which things are important to her/him, and which things are less important.

Incomegendergapbystate-both-top-2-map-v2

The chart shows average salaries in the top 2 percent of income earners. The data are split by gender and by state.

First, I notice that the designer chooses to use the map form. This decision suggests that the spatial pattern of top incomes is of top interest to the designer because she/he is willing to accept the map's constraints - namely, the designer loses control of the x and y dimensions, as well as the area and shape of the data containers. For the U.S. state map, there is no elegant solution to the large number of small states problem in the Northeast.

Second, I notice the color choice. The designer provides actual values on the visualization but also groups all state-average incomes into five categories. It's not clear how she/he determines the boundaries of these income brackets. There are many more dark blue states than there are light blue states in the map for men. Because women's incomes are everywhere lower than men's, the map at the bottom fits all states into two large buckets, plus Connecticut. Women's incomes are lower than men's, but there is no need to break the data down by gender to convey this message.

Third, the use of two maps indicates that the designer does not care much about gender comparisons within each state. These comparisons are difficult to accomplish on the chart - one must bob one's head up and down to make the comparisons. The head bobbing isn't even enough: then you must pull out your calculator and compute the ratio of women's to men's averages. If the designer wanted to highlight state-level comparisons, she/he could have plotted the gender ratio on a single map, like this:

Screen Shot 2017-09-18 at 11.47.23 PM
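The transformation behind this single-map alternative is just one ratio per state. A sketch, using made-up numbers rather than the Business Insider data:

```python
# One women-to-men income ratio per state (numbers hypothetical,
# not the Business Insider dataset).

top2pct_income = {
    # state: (men_avg, women_avg)
    "CT": (1_100_000, 570_000),
    "MT": (580_000, 390_000),
    "NY": (1_300_000, 630_000),
}

ratios = {state: round(w / m, 2) for state, (m, w) in top2pct_income.items()}
print(ratios)
```

Collapsing two numbers into one ratio is what frees up the color dimension for the within-state comparison the two-map version buries.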

***

So far, I infer that the key questions are (a) the gender gap in aggregate (b) the variability of incomes within each gender, or the spatial clustering (c) the gender gap within each state.

Goal (a) is better conveyed in more aggregate form. Goal (b) is defeated by the lack of clear clustering. Goal (c) is not helped by the top-bottom split.

In making the above chart, I discovered a pattern - that women fare better in smaller states like Montana, Iowa, and North & South Dakota. Meanwhile, the disparity in New York is of the same magnitude as in Oklahoma and Wyoming.

Jc_redo_top2pcincomes2b

This chart tells readers a bit more about the underlying data, without having to print the entire dataset on the page.



A long view of hurricanes

This chart by Axios is well made. The full version is here.

Axios_hurricanes

It's easy to identify all the Cat 5 hurricanes. Only the important ones are labeled; the other labels appear on hover. The chart provides a good answer to the question: what time of the year do the worst hurricanes strike? It's harder to compare the maximum speeds of the hurricanes.

I wish there were a way to incorporate geography. I'd be willing to trade away the trajectory of wind speeds, as the maximum speed is of most use.