Visualizing electoral college politics: exercise in displaying relationships between variables

Reader Berry B. sent in a tip quite some months ago that I just pulled out of my inbox. He really liked the Washington Post's visualization of the electoral college in the Presidential election. (link)

One of the strengths of this project is the analysis that went on behind the visualization. The authors point out that there are three variables at play: the population of each state, the votes cast by state, and the number of electoral votes by state. A side-by-side comparison of the two tile maps gives perspective on the story:

Wp_electoralcollege_maps

The under/over representation of electoral votes is much less pronounced if we take into account the propensity to vote. With three metrics at play, there is quite a bit going on. On these maps, orange and blue are used to indicate the direction of difference. Then the shade of the color codes the degree of difference, which was classified into severe versus slight (but only for one direction). Finally, solid squares are used for the comparison with population, and square outlines are for comparison with votes cast.

Take Florida (FL), for example. On the left side, we have a solid, dark orange square while on the right, we have a square outline in dark orange. From that, we are asked to match the dark orange with the dark orange, and to contrast the solid with the outline. It works to some extent, but it requires more effort than seems desirable.

***

I'd like to make it easier for readers to see the interplay between all three metrics.

In the following effort, I ditch the map aesthetic, and focus on three transformed measures: share of population, share of popular vote, and share of electoral vote. The share of popular vote is a re-interpretation of what Washington Post calls "votes cast".
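To make the three measures concrete, here is a sketch of the computation; the state figures below are rough illustrations, not the actual Census or election data.

```python
# Hypothetical counts per state: (population, popular votes cast, electoral votes).
# Real values would come from the Census and the election returns.
states = {
    "CA": (39_000_000, 14_200_000, 55),
    "TX": (28_000_000, 9_000_000, 38),
    "FL": (20_600_000, 9_400_000, 29),
}

# Totals over this (illustrative) subset of states.
total_pop = sum(pop for pop, _, _ in states.values())
total_votes = sum(votes for _, votes, _ in states.values())
total_ev = sum(ev for _, _, ev in states.values())

# The three transformed measures: each state's share of each total.
shares = {
    state: {
        "pop_share": pop / total_pop,
        "vote_share": votes / total_votes,
        "ev_share": ev / total_ev,
    }
    for state, (pop, votes, ev) in states.items()
}
```

Comparing vote_share against ev_share is what separates the Texas/California story from the Florida story.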

The information is best presented by grouping states that behaved similarly. The two most interesting subgroups are the large states and the battleground states. In large states like Texas and California, residents loudly complained that their voices were suppressed by the electoral vote allocation; in fact, their allocated electoral votes were not far from their shares of the popular vote! By contrast, Floridians had a more legitimate reason to gripe, since their share of the popular vote much exceeded their share of the electoral vote. This pattern persisted throughout the battleground states.

Redo_wp_electoralcollege

The hardest part of this design is making the legend:

Redo_wp_electoralcollege_legend

Unintentional deception of area expansion #bigdata #piechart

Someone sent me this chart via Twitter as an example of yet another terrible pie chart. (I can no longer find the tweet, but thank you to the reader who submitted this.)

Uk_itsurvey_left

At first glance, this looks like a pie chart with the radius as a second dimension. But that is the wrong interpretation.

In a pie chart, we typically encode the data in the angles of the pie sectors, or equivalently, the areas of the sectors. In this special case, the angle is invariant across the slices, and the data are encoded in the radius.

Since the data are found in the radii, let's deconstruct this chart by reducing each sector to its left-side edge.

This leads to a different interpretation of the chart: it’s actually a simple bar chart, manipulated.

Redo_ukitsurvey_1

The process of manipulation runs against what data visualization should be. It takes the bar chart (bottom right), which is easy to read, introduces slants so it becomes harder to digest (top right), and finally adds a distortion that goes from inefficient to incompetent (left).

What is this distortion I just mentioned? When readers look at the original chart, they are not focusing on the left-side edge of each sector; they are seeing the area of each sector. The ratio of areas is not the same as the ratio of lengths. Adding the purple areas to the chart seems harmless but in fact, despite keeping the same angle for every sector, the designer added disproportionately more area to the larger data points than to the smaller ones.

Redo_ukitsurvey_2

In order to remedy this situation, the designer has to scale each edge to the square root of its data value, so that the sector areas, not the edge lengths, are proportional to the data. But of course, the simple bar chart is more effective.
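The distortion and its square-root remedy can be checked in a few lines; the 45-degree angle is an assumption for illustration, since the angle is invariant across slices.

```python
import math

theta = math.radians(45)  # the same angle for every sector (assumed)

def sector_area(radius, angle=theta):
    # Area of a circular sector with the given radius and angle.
    return 0.5 * angle * radius ** 2

# Encoding the data directly in the radius distorts the comparison:
a, b = 10, 20  # data values; b is twice a
area_ratio = sector_area(b) / sector_area(a)  # 4x, not 2x

# Remedy: make the radius proportional to the square root of the data,
# so that sector areas become proportional to the data.
corrected_ratio = sector_area(math.sqrt(b)) / sector_area(math.sqrt(a))  # 2x
```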

Your charts need the gift of purpose

Via Twitter, I received this chart:

Wp_favorability_overall

My readers are nailing it when it comes to finding charts that deserve close study. On Twitter, the conversation revolved around the inversion of the horizontal axis. Favorability is associated with positive numbers, and unfavorability with negative numbers, and so, it seems the natural ordering should be to place Favorable on the right and Unfavorable on the left.

Ordinarily, I'd have a problem with the inversion but here, the designer used the red-orange color scheme to overcome the potential misconception. It's hard to imagine that orange would be the color of disapproval, and red, of approval!

I am more concerned about a different source of confusion. Take a look at the following excerpt:

Wp_favorability_overall inset

If you had to guess, what are the four levels of favorability? Using the same positive-negative scale discussed above, most of us would assume that, going left to right, we are looking at Strongly Favorable, Favorable, Unfavorable, Strongly Unfavorable. The people in the middle are neutrals and the people on the edges are extremists.

But we'd be mistaken. The order going left to right is Favorable, Strongly Favorable, Strongly Unfavorable, Unfavorable. The designer again used tints and shades to counter our pre-conception. This is less successful because the order defies logic. It is a double inversion.

The other part of the chart I'd draw attention to is the column of data printed on the right. Each such column is an act of giving up - the designer admits he or she couldn't find a way to incorporate that data into the chart itself. It's like a footnote in a book. The problem arises because such a column frequently contains very important information. On this chart, the data are "net favorable" ratings, the proportion of Favorables minus the proportion of Unfavorables, or visually, the length of the orange bar minus the length of the red bar.

The net rating is a succinct way to summarize the average sentiment of the population. But it's been banished to a footnote.
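The computation itself is trivial, which makes the banishment all the more puzzling; with hypothetical proportions:

```python
# Hypothetical survey proportions (percent); not the actual poll numbers.
favorable, strongly_favorable = 33, 28
unfavorable, strongly_unfavorable = 14, 21

# Net favorable = all Favorables minus all Unfavorables,
# i.e. the length of the orange bar minus the length of the red bar.
net_favorable = (favorable + strongly_favorable) - (unfavorable + strongly_unfavorable)
```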

***

Anyone who has followed American politics even a little in recent years recognizes the worsening polarization of opinions. A chart showing the population average is thus rather meaningless. I'd like to see the above chart broken up by party affiliation (Republican, Independent, Democrat).

This led me to the original source of the chart. It turns out that the data came from a Fox News poll but the chart was not produced by Fox News - it accompanied this Washington Post article. Further, the article contains three other charts, broken out by party affiliation, as I hoped. The headline of the article was "Bernie Sanders remains one of the most popular politicians..."

But reading three charts, printed vertically, is not the simplest matter. One way to make it easier is to gift the chart a purpose. It turns out there are no surprises among Republican and Democratic voters - they are as polarized as one can imagine. So the really interesting question in this data is the orientation of the Independent voters - are they more likely to side with Democrats or Republicans?

Good housekeeping means that when you acquire stuff, you must remove other stuff. After adding the party dimension, it makes more sense to collapse the favorability dimension - precisely by using the net favorable rating column:

Redo_wp_favorability_chart

Chopped legs, and abridged analyses

Reader Glenn T. was not impressed by the graphical talent on display in the following column chart (and others) in a Monkey Cage post in the Washington Post:

Wp_trumpsupporters1

Not starting column charts at zero is like having one's legs chopped off. Here's an animated gif to show what's taking place: (you may need to click on it to see the animation)

Wp_trumpassistance

Since all four numbers show up on the chart itself, there is no need to consult the vertical axis.

I wish they had used a structured color coding to aid fast comprehension of the key points.

***

These authors focus their attention on the effect of the "black or white cue," but the effect of being a Trump supporter versus a non-supporter is many times as big.

Notice that, on average, 56% of Trump supporters in this study oppose mortgage assistance while 25% of non-Trump supporters oppose it - a gap of about 30 percentage points.

If we are to interpret the roughly +/- 5% swing attributed to black/white cues as "racist" behavior on the part of Trump supporters, then the +/- 3% swing on the part of non-Trump supporters in the other direction should be regarded as a kind of "reverse racist" behavior. No?

So from this experiment, one should not conclude that Trump voters are racist, which is what the authors are implying. Trump voters have many reasons to oppose mortgage assistance, and a racist reaction to pictures of black and white people plays only a small part in it.

***

The reporting of the experimental results irks me in other ways.

The headline claimed that "we showed Trump voters photos of black and white Americans." That is a less than accurate description of the experiment and subsequent analysis. The authors removed all non-white Trump voters from the analysis, so they are only talking about white Trump voters.

Also, I really, really dislike the following line:

When we control for age, income, sex, education, party identification, ideology, whether the respondent was unemployed, and perceptions of the national economy — other factors that might shape attitudes about mortgage relief — our results were the same.                        

Those are eight variables they looked into, for which they provided zero details. If they investigated "interaction" effects of just pairs of variables, that would add another 28 dimensions for which they provided zero information.

The claim that "our results were the same" tells me nothing! It is hard for me to imagine that the set of 8+28 variables described above yielded exactly zero insights.

Even if there were no additional insights, I would still like to see the more sophisticated analysis that controls for all those variables that, as they admitted, shape attitudes about mortgage relief. After all, the results are "the same" so the researcher should be indifferent between the simple and the sophisticated analyses.

In the old days of printed paper, I could understand why journal editors were reluctant to print all those analyses. In the Internet age, we should put those analyses online, with a link to supplementary materials for those who want to dig deeper.

***

On average, 56 percent of white Trump voters oppose mortgage relief. Add another 3-5 percentage points (the range reflects rounding) if they were cued with an image of a black person. The trouble here is that 90% of the white Trump-voting respondents could have been unaffected by the racial cue and the result would still hold.

While the effect may be "statistically significant" (implied but not stated by the authors), it represents a small shift in the average attitude. The fact that the "average person" responded to the racial cue does not imply that most people responded to it.
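One scenario makes this concrete, with made-up numbers: if opposition is a yes/no response, a five-point shift in the average requires only five percent of respondents to flip.

```python
# Made-up illustration: responses are binary (1 = oppose mortgage relief).
n = 1000                 # respondents shown each cue
baseline_oppose = 560    # 56% oppose under one cue
flipped = 50             # suppose 5% of respondents flip to oppose under the other cue

cued_oppose = baseline_oppose + flipped
shift_points = (cued_oppose - baseline_oppose) / n * 100  # shift in the average
unaffected = (n - flipped) / n                            # share who never changed
```

Here the average moves five points while 95% of respondents never change their answer; if some respondents moved in both directions, the net shift could be produced with a still-larger unaffected majority. The "average person" responding to the cue does not mean most people responded.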

The last two issues I raised here are not specific to this particular study. They are prevalent in the reporting of psychological experiments.

Political winds and hair styling

Washington Post (link) and New York Times (link) published dueling charts last week, showing the swing-swang of the political winds in the U.S. Of course, you know that the pendulum has shifted riotously rightward towards Republican red in this election.

The Post focused its graphic on the urban / not urban division within the country:

Wp_trollhair

Over Twitter, Lazaro Gamio told me they are calling these troll-hair charts. You certainly can see the imagery of hair blowing with the wind. In small counties (right), the wind is strongly to the right. In urban counties (left), the straight hair style has been in vogue since 2008. The numbers at the bottom of the chart drive home the story.

Previously, I discussed the Two Americas map by the NY Times, which covers a similar subject. The Times version emphasizes the geography, and is a snapshot while the Post graphic reveals longer trends.

Meanwhile, the Times published its version of a hair chart.

Nyt_hair_election

This particular graphic highlights the movement among the swing states. (Time moves bottom to top in this chart.) These states shifted left for Obama and marched right for Trump.

The two sets of charts have many similarities. They both use curvy lines (hair) as the main aesthetic feature. The left-right dimension is the anchor of both charts, and sways to the left or right are important tropes. In both presentations, the charts provide visual aid, and are nicely embedded within the story. Neither is intended as an exploratory graphic.

But the designers diverged on many decisions, mostly in the D(ata) or V(isual) corner of the Trifecta framework.

***

The Times chart is at the state level while the Post uses county-level data.

The Times plots absolute values while the Post focuses on relative values (cumulative swing from the 2004 position). In the Times version, the reader can see the popular vote margin for any state in any election. The middle vertical line is keyed to the electoral vote (plurality of the popular vote in most states). It is easy to find the crossover states and times.

The Post's designer did some data transformations. Everything is indexed to 2004. Each number in the chart is the county's current leaning relative to 2004. Thus, left of the vertical line means the county has shifted more blue compared to 2004. The numbers are cumulative, moving top to bottom. If a county is 10% left of center in the 2016 election, this effect may have come about this year, or 4 years ago, or 8 years ago, or some combination of the above. Again, left of center does not mean the county voted Democratic in that election. So the chart must be read with some care.
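My understanding of the Post's transformation, sketched with hypothetical margins for a single county:

```python
# Hypothetical Democratic margins (Dem % minus Rep %) for one county.
margins = {2004: -8.0, 2008: -2.0, 2012: -4.0, 2016: -12.0}

# Index everything to 2004: each value becomes the cumulative swing
# relative to the county's 2004 position.
base = margins[2004]
cumulative_swing = {year: margin - base for year, margin in margins.items()}

# The county plots left of center (+6) in 2008 and right of center (-4)
# in 2016, even though it voted Republican in all four elections.
```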

One complaint about anchoring the data is the arbitrary choice of the starting year. Indeed, the Times chart goes back to 2000, another arbitrary choice. But clearly, the two teams were aiming to address slightly different variations of the key question.

There is a design advantage to anchoring the data. The Times chart is noticeably more entangled than the Post chart. There is a lot more criss-crossing. This is particularly glaring given that the Times chart contains many fewer lines than the Post chart, since it plots states rather than counties.

Anchoring the data to a starting year has the effect of combing one's unruly hair. Mathematically, it just shifts the lines so that they all start at the same location, without altering their curvature. Of course, this is double-edged: the re-centering means the left-blue / right-red interpretation no longer applies.

On the Times chart, they used a different coping strategy. Each version of their chart has a filter: they highlight a set of lines to demonstrate different vignettes: the swing states moved slightly to the right, the Republican states marched right, and the Democratic states also moved right. Without these filters, readers would be wincing at the Times's bad-hair day.

***

Another decision worth noting: the direction of time. The Post's choice of top to bottom seems more natural to me than the Times's reverse order but I am guessing some of you may have different inclinations.

Finally, what about the thickness of the lines? The Post encoded population (voter) size while the Times used electoral votes. This decision is partly driven by the choice of state versus county level data.

One can consider electoral votes as a kind of log transformation. The effect of electorizing the popular vote is to pull the extreme values toward the center. This significantly simplifies the designer's life. To wit, in the Post chart (shown below), they have to apply a filter to highlight key counties, and you notice that those lines are so thick that all the other counties become barely visible.

Wp_trollhair_texas
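To see the compression, compare the largest and smallest states; the figures below are rough 2016 numbers from memory, so treat them as approximate.

```python
# Approximate 2016 figures: California vs. Wyoming.
ca_pop, ca_ev = 39_250_000, 55
wy_pop, wy_ev = 586_000, 3

pop_ratio = ca_pop / wy_pop  # roughly 67x more people
ev_ratio = ca_ev / wy_ev     # but only about 18x more electoral votes
```

A line-thickness encoding based on electoral votes therefore spans a much narrower range than one based on population.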

Here are the cool graphics from the election

There were some very nice graphics work published during the last few days of the U.S. presidential election. Let me tell you why I like the following four charts.

FiveThirtyEight's snake chart

Snake-1106pm

This chart definitely hits the Trifecta. It is narrowly focused on the pivotal questions of election night: Which candidate is leading? If current projections hold, which candidate would win? What is the margin of victory?

The chart is symmetric so that the two sides have equal length. One can therefore immediately tell which side is in the lead by looking at the middle. With a little more effort, one can also read from the chart which side has more electoral votes based only on the called states, by comparing the white parts of each snake. (This is made difficult by the top-bottom mirroring. That is an unfortunate design decision; I would have preferred not to have the top-bottom reversal.)

The length of each segment maps to the number of electoral votes for the particular state, and the shade of color reflects the size of the advantage.

In a great illustration of less is more, by aggregating all called states into a single white segment, and not presenting the individual results, the 538 team has delivered a phenomenal chart that is refreshing, informative, and functional.

Compare with a more typical map:

Electoral-map

New York Times's snake chart

Snakes must be the season's gourmet meat because the New York Times also got inspired by those reptiles by delivering a set of snake charts (link). Here's one illustrating how different demographic segments picked winners in the last four elections.

Nytimes_partysupport_by_income

They also made a judicious decision by highlighting the key facts and hiding the secondary ones. Each line connects four points of data but only the beginning and end of each line are labeled, inviting readers to first and foremost compare what happened in 2004 with what happened in 2016. The middle two elections were Obama wins.

This particular chart may prove significant for decades to come. It illustrates that the two parties may be arriving at a cross-over point. The Democrats are driving the lower income classes out of their party while the upper income classes are jumping over to blue.

While the chart's main purpose is to display the changes within each income segment, it does allow readers to address a secondary question. By focusing only on the 2004 endpoints, one can see the almost linear relationship between support and income level. Then focusing on the 2016 endpoints, one can also see an almost linear relationship but this is much steeper, meaning the spread is much narrower compared to the situation in 2004. I don't think this means income matters a lot less - I just think this may be the first step in an ongoing demographic shift.

This chart is both fun and easy to read, packing quite a bit of information into a small space.

Washington Post's Nation of Peaks

The Post prints a map that shows, by county, where the votes were and how the two Parties built their support. (Link to original)

Wpost_map_peaks

The height represents the number of voters and the width represents the margin of victory. Landslide victories are shown with bolded triangles. In the online version, they chose to turn the map sideways.

I particularly like the narratives about specific places.

This is an entertaining visual that draws you in to explore.

Andrew Gelman's Insight

If you want quantitative insights, it's a good idea to check out Andrew Gelman's blog.

This example is a plain statistical graphic but it says something important:

Gelman_twopercent

There is a lot of noise about how the polls were all wrong, the entire polling industry will die, etc.

This chart shows that the polls were reasonably accurate about Trump's vote share in most Democratic states. In the Republican states, these polls consistently under-estimated Trump's advantage. You see the line of red states starting to bend away from the diagonal.

If the total error is about 2%, as stated in the caption of the chart, then the average error in the red states must have been about 4%.

This basic chart advances our understanding of what happened on election night, and why the result was considered a "shock."

What if the Washington Post did not display all the data

Thanks to reader Charles Chris P., I was able to get the police staffing data to play around with. Recall from the previous post that the Washington Post made the following scatter plot, comparing the proportion of whites among police officers relative to the proportion of whites among all residents, by city.

Wp_policestaffing

In the last post, I suggested making a histogram. As you see below, the histogram was not helpful.

Redo_wp_police0

The histogram does point out one feature of the data. Despite the appearance of dots scattered about, the slopes (equivalently, angles at the origin) do not vary widely.

This feature causes problems with interpreting the scatter plot. The difficulty arises from the need to estimate dot density everywhere, and it is, sad to say, introduced by the designer through the use of overly granular data. In this case, the proportions are recorded to one decimal place, which means a city with 10% is shown separately from one with 10.1%. The effect is to jitter the dots, which muddies up the densities.

One way to solve this problem is to use a density chart (heatmap).

Redo_wp_police_1

You no longer have every city plotted but you have a better view of the landscape. You learn that most of the action occurs on the top row, especially on the top right. It turns out there are lots of cities (22% of the dataset!) with 100% white police forces.
This group of mostly small cities is obscuring the rest of the data. Notice that the yellow cells contain very little data, fewer than 10 cities each.
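The binning behind such a heatmap can be sketched as follows; the data here are simulated stand-ins for the Post's staffing dataset.

```python
import numpy as np

# Simulated (white residents %, white officers %) pairs for 500 cities;
# these stand in for the Post's actual staffing data.
rng = np.random.default_rng(0)
residents = rng.uniform(20, 100, size=500)
officers = np.clip(residents + rng.normal(10, 15, size=500), 0, 100)

# Collapse the overly granular percentages into a 10x10 grid of cells;
# each cell counts cities, which is what the heatmap colors encode.
counts, xedges, yedges = np.histogram2d(
    residents, officers, bins=10, range=[[0, 100], [0, 100]]
)
```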

For the question the reporter is addressing, the subgroup of cities with 100% white police forces is of trivial interest. Most of these places have at least 60% white residents, frequently much higher, but if every police officer is white, then the racial balance will almost surely be "off". I now remove this subgroup from the heatmap:

Redo_wp_police_2

Immediately, you are able to see much more. In particular, you see a ridge in the expected direction. The higher the proportion of white residents, the higher the proportion of white officers.

But this view is also too granular. The yellow cells now have only one or two cities. So I collapse the cells.

Redo_wp_police_3

More of the data lie above the bottom-left-top-right diagonal, indicating that in the U.S., the police force is skewed white on average. When comparing cities, we can take this national bias out. The following view does this.

Redo_wp_police_4c

The point indicated by the circle is the average city indicated by relative proportions of zero and zero. Notice that now, the densest regions are clustered around the 45-degree dotted diagonal.
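The recentering is a simple shift; sketched with hypothetical percentages:

```python
# Hypothetical (white residents %, white officers %) pairs for a few cities.
cities = [(60.0, 75.0), (40.0, 55.0), (80.0, 85.0)]

# National (here, sample) averages define the "average city".
avg_residents = sum(r for r, _ in cities) / len(cities)
avg_officers = sum(o for _, o in cities) / len(cities)

# Relative proportions: the average city now sits at (0, 0), and the
# national skew toward white officers is taken out of the comparison.
relative = [(r - avg_residents, o - avg_officers) for r, o in cities]
```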

To conclude, the Washington Post data appear to show these insights:

  • There is a national bias of whites being more likely to be in the police force
  • In about one-fifth of the cities, the entire police force is reported to be white. (The following points exclude these cities.)
  • Most cities conform to the national bias, within an acceptable margin of error
  • There are a small number of cities worth investigating further: those that are far away from the 45-degree line through the average city in the final chart shown above.

Showing all the data is not necessarily a good solution. Indeed, it is frequently a suboptimal design choice.