
It's hot even in Alaska

A Twitter user pointed to the following chart, which shows that Alaska has experienced extreme heat this summer, with the July statewide average temperature shattering the previous record:

Alaskaheat

This column chart is clear in its primary message: the red column shows that the average temperature this year is quite a bit higher than the next highest temperature, recorded in July 2004. The error bar is useful for statistically-literate people - the uncertainty is (presumably) due to measurement errors. (If a similar error bar is drawn for the July 2004 column, these bars probably overlap a bit.)

The chart violates one of the cardinal rules of column charts: the vertical axis is truncated at 53F, so the heights (or areas) of the columns cannot be compared. This violation was recently cited by two dataviz bloggers when asked about "bad charts" (see here).
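The distortion caused by truncation is easy to quantify. The numbers below are hypothetical stand-ins, not the actual Alaska readings, but they show how a baseline of 53F exaggerates the visual ratio between two columns:

```python
# Sketch of how a truncated axis distorts column-height comparisons.
# Temperatures here are illustrative, not the actual Alaska data.
baseline = 53.0                    # axis starts at 53F instead of 0F
temp_2019, temp_2004 = 58.1, 56.5  # hypothetical top-two July averages

# True ratio of the two values
true_ratio = temp_2019 / temp_2004

# Ratio of the drawn column heights once the axis is truncated
drawn_ratio = (temp_2019 - baseline) / (temp_2004 - baseline)

print(f"true ratio:  {true_ratio:.2f}")   # close to 1
print(f"drawn ratio: {drawn_ratio:.2f}")  # much larger than the true ratio
```

With these numbers, a roughly 3 percent difference in the data becomes a roughly 45 percent difference in drawn column heights.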

Now look at the horizontal axis. These are the years of the top 20 temperature records, ordered from highest to lowest. The months are almost always July except for the year 2004 when all three summer months entered the top 20. I find it hard to make sense of these dates when they are jumping around.

In the following version, I plotted the 20 temperatures on a chronological axis. Color is used to divide the 20 data points into four groups. The chart is meant to be read top to bottom. 

Redo_junkcharts_alaska_heat


Powerful photos visualizing housing conditions in Hong Kong

I was going to react to Alberto's post about the New York Times's article on economic inequality in Hong Kong, which is offered as one explanation of the current protest movement. I agree that the best graphic in the set is the "photoviz" showing the "coffins" or "cages" that many residents live in because of the population density.

Nyt_hongkong_apartment_photoviz

Then I searched the archives and found this old post from 2015, which is the perfect response to it. Even better, that post was also inspired by Alberto.

The older post featured a wonderful campaign by human rights organization Society for Community Organization that uses photoviz to draw attention to the problem of housing conditions in Hong Kong. They organized a photography exhibit on this theme in 2014. They then updated the exhibit in 2016.

Here is one of the iconic photos by Benny Lam:

Soco_trapped_B1

I found more coverage of Benny's work here. There is also a book that you can flip through on Vimeo.

In 2017, the South China Morning Post (SCMP) published drone footage showing the outside view of the apartment buildings.

***

What's missing is the visual comparison with the luxury condos where the top 1 percent live. For these, one can visit real estate sites such as Sotheby's. Here is their "12 luxury homes for sale" page.

Another comparison: a 1,000-square-foot apartment that sits between those extremes. The photo by John Butlin comes from SCMP's Post Magazine's feature on the apartment:

Butlin_scmp_home

***

Also check out my review of Alberto's fantastic, recent book, How Charts Lie.

Cairo_howchartslie_cover



What is a bad chart?

In the recent issue of Madolyn Smith’s Conversations with Data newsletter hosted by DataJournalism.com, she discusses “bad charts,” featuring submissions from several dataviz bloggers, including myself.

What is a “bad chart”? Based on this collection of curated "bad charts", it is not easy to nail down “bad-ness”. The common theme is the mismatch between the message intended by the designer and the message received by the reader, a classic error of communication. How such mismatch arises depends on the specific example. I am able to divide the “bad charts” into two groups: charts that are misinterpreted, and charts that are misleading.

 

Charts that are misinterpreted

The Causes of Death entry, submitted by Alberto Cairo, is a “well-designed” chart that requires “reading the story where it is inserted and the numerous caveats.” So readers may misinterpret the chart if they do not also read the accompanying story at Our World in Data, which runs over 1,500 words, not counting the appendix.

Ourworldindata_causesofdeath

The map of Canada, submitted by Highsoft, highlights in green the provinces where the majority of residents are members of the First Nations. The “bad” is that readers may incorrectly “infer that a sizable part of the Canadian population is First Nations.”

Highsoft_CanadaFirstNations

In these two examples, the graphic is considered adequate and yet the reader fails to glean the message intended by the designer.

 

Charts that are misleading

Two fellow bloggers, Cole Knaflic and Jon Schwabish, offer the advice to start bars at zero (here's my take on this rule). The “bad” is the distortion introduced when encoding the data into the visual elements.

The Color-blindness pictogram, submitted by Severino Ribecca, commits a similar faux pas. To compare the rates among men and women, the pictograms should use the same baseline.

Colourblindness_pictogram

In these examples, readers who correctly read the charts nonetheless leave with the wrong message. (We assume the designer does not intend to distort the data.) The readers misinterpret the data without misinterpreting the graphics.

 

Using the Trifecta Checkup

In the Trifecta Checkup framework, these problems are second-level problems, represented by the green arrows linking up the three corners. (Click here to learn more about using the Trifecta Checkup.)

Trifectacheckup_img

The visual design of the Causes of Death chart is not under question, and the intended message of the author is clearly articulated in the text. Our concern is that the reader must go outside the graphic to learn the full message. This suggests a problem related to the syncing between the visual design and the message (the QV edge).

By contrast, in the Color Blindness graphic, the data are not under question, nor is the use of pictograms. Our concern is how the data got turned into figurines. This suggests a problem related to the syncing between the data and the visual (the DV edge).

***

When you complain about a misleading chart, or a chart being misinterpreted, what do you really mean? Is it a visual design problem? a data problem? Or is it a syncing problem between two components?


SCMP's fantastic infographic on Hong Kong protests

In the past month, there have been several large-scale protests in Hong Kong. The largest one featured up to two million residents taking to the streets on June 16 to oppose an extradition bill that was working its way through the legislature. If the count was accurate, about 25 percent of the city’s population joined in the protest. Another large demonstration occurred on July 1, the anniversary of Hong Kong’s return to Chinese rule.

South China Morning Post, which can be considered the New York Times of Hong Kong, is well known for its award-winning infographics, and they rose to the occasion with this effort.

This is one of the rare infographics that you won’t regret spending time reading. After finishing it, you will have learned a few new things about protesting in Hong Kong.

In particular, you’ll learn that the recent demonstrations are part of a larger pattern in which Hong Kong residents express their dissatisfaction with the city’s governing class, frequently accused of acting as puppets of the Chinese state. Under the “one country, two systems” arrangement, the city’s officials occupy an unenviable position of mediating the various contradictions of the two systems.

This bar chart shows the growth in the protest movement. The recent massive protests didn't come out of nowhere. 

Scmp_protestsovertime

This line chart offers a possible explanation for the burgeoning protests: residents perceived their freedoms eroding over the last decade.

Scmp_freedomsurvey

If you have seen videos of the protests, you’ll have noticed the peculiar protest costumes. Umbrellas are used to block pepper spray, for example. The following lovely graphic shows how the costumes have evolved:

Scmp_protestcostume

The scale of these protests captures the imagination. The last part of the infographic places the number of protesters in context by expressing it in terms of football pitches (as soccer fields are known outside the U.S.). This is a sort of universal measure, thanks to the popularity of football almost everywhere. (Nevertheless, according to Wikipedia, fields do not have one fixed dimension, though fields used for international matches are standardized to 105 m by 68 m.)

Scmp_protestscale_pitches

This chart could be presented as a bar chart. It’s just that the data have been re-scaled: from counting individuals to counting football pitches’ worth of individuals.
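The re-scaling is a simple change of units. The pitch dimensions below are the standardized international size mentioned above (105 m by 68 m); the crowd density and head count are assumptions for illustration only, not figures from the SCMP graphic:

```python
# Sketch of the "football pitch" unit conversion behind the graphic.
PITCH_AREA_M2 = 105 * 68  # standardized international pitch: 7,140 sq m

def crowds_in_pitches(people: int, density_per_m2: float = 4.0) -> float:
    """Convert a head count into football-pitches' worth of people.

    density_per_m2 is an assumed crowd density, not a figure from SCMP.
    """
    return people / (density_per_m2 * PITCH_AREA_M2)

# At 4 people per square meter, two million protesters fill about 70 pitches.
print(round(crowds_in_pitches(2_000_000), 1))  # → 70.0
```

The point of the unit change is purely presentational: the underlying data are still counts of individuals, just divided by a constant.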

***
Here is the entire infographic.


Three estimates, two differences trip up an otherwise good design

Reader Fernando P. was baffled by this chart from the Perception Gap report by More in Common. (link to report)

Moreincommon_perceptiongap_republicans

Overall, this chart is quite good. Its flaws are subtle. There is so much going on that perhaps even the designer found it hard to keep everything straight.

The title is "Democrats' Perception Gap," which actually means the gap between Democrats' perception of Republican views and Republicans' self-reported views. We are talking about two estimates of Republican views. Conversely, in Figure 2 (not shown), the "Republicans' Perception Gap" describes two estimates of Democrat views.

The gap is visually shown as the gray bar between the red dot and the blue dot. This is labeled perception gap, and its values are printed on the right column, also labeled perception gap.

Perhaps as an afterthought, the designer added the yellow stripes, which represent a third estimate of Republican views, this time by Independents. This little addition wreaks havoc. There are now three estimates and two gaps. There is a new gap between Independents' perception of Republican views and Republicans' self-reported views. This I-gap is hidden in plain sight. The words "perception gap" obstinately stick to the D-gap.
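The arithmetic of the two gaps is worth spelling out. The percentages below are made up for illustration, not the report's actual figures:

```python
# Minimal sketch of the two gaps hiding in the chart.
# All percentages are hypothetical, not from the Perception Gap report.
republican_self = 50.0        # Republicans' self-reported agreement, %
democrat_estimate = 80.0      # Democrats' estimate of that agreement, %
independent_estimate = 65.0   # Independents' estimate (the yellow stripe), %

d_gap = democrat_estimate - republican_self     # labeled "perception gap"
i_gap = independent_estimate - republican_self  # the gap hidden in plain sight

print(d_gap, i_gap)  # → 30.0 15.0
```

Both differences are computed from the same anchor (the Republican self-report), yet only the first is named on the chart.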

***

Here is a slightly modified version of the same chart.

Redo_perceptiongap_republicans

 

The design focuses attention on the two gaps (bars). It also identifies the Republican self-perception as the anchor point from which the gaps are computed.

I have chosen to describe the Republican dot as "self-perception" rather than "actual view," which connotes a form of "truth." Rather than considering the gap as an error of estimation, I like to think of the gap as the difference between two groups of people asked to estimate a common quantity.

Also, one should note that on the last two issues, there is virtual agreement.

***

Aside from the visual, I have doubts about the value of such a study. Only the most divisive issues are addressed here. Adding a few bipartisan issues would provide controls, useful for teasing out the baseline perception gap.

I wonder whether there is a self-selection in survey response, such that people with extreme views (from each party) will be under-represented. Further, do we believe that all survey respondents will provide truthful answers to sensitive questions that deal with racism, sexism, etc.? For example, if I am a moderate holding racist views, would I really admit to racism in a survey?



Putting the house in order, two Brexit polls

Reader Steve M. noticed an oversight by the Guardian in the following bar chart (link):

Guardian_Brexitpoll_1

The reporter was discussing an important story that speaks to the need for careful polling design. He was comparing two polls, one by Ipsos Mori and one by YouGov, that estimate the vote share for each party in a future U.K. general election. The bottom line is that the YouGov poll predicts about double the support for the Brexit Party compared with the Ipsos Mori poll.

The stacked bar chart should only be used for data that can be added up. Here, we should be comparing the numbers side by side:

Redo_junkcharts_brexitpoll_1

I've always found this standard display inadequate. The story here is the gap in the two bar lengths for the Brexit Party. A secondary story is that the support for the Brexit Party might come from voters breaking from Labour. In other words, we really want the reader to see:

Redo_junkcharts_brexitpoll_1b
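The non-additivity point can be made numerically. The party shares below are illustrative stand-ins, not the Guardian's actual figures: each poll already sums to roughly 100 percent on its own, so stacking the two polls produces a meaningless total, while the per-party gap is the real story:

```python
# Sketch of why these poll numbers shouldn't be stacked.
# Shares are hypothetical, not the actual Ipsos Mori / YouGov results.
ipsos  = {"Conservative": 26, "Labour": 28, "Lib Dem": 16, "Brexit": 11, "Other": 19}
yougov = {"Conservative": 21, "Labour": 21, "Lib Dem": 19, "Brexit": 22, "Other": 17}

# Each poll is a complete distribution on its own...
print(sum(ipsos.values()), sum(yougov.values()))  # → 100 100

# ...so the quantity of interest is the party-by-party gap, not a stacked total.
gaps = {party: yougov[party] - ipsos[party] for party in ipsos}
print(gaps["Brexit"])  # → 11
```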

Switching to a dot plot helps bring attention to the gaps:

Redo_junkcharts_brexitpoll_2

Now, putting the house in order:

Redo_junkcharts_brexitpoll_2b

Why do these two polls show such different results? As the reporter explained, the answer is in how the question was asked. The Ipsos Mori poll is unprompted, meaning the Brexit Party was not announced to respondents as one of the choices, while the YouGov poll is prompted.

This last version imposes a direction on the gaps to bring out the secondary message - that the support for Brexit might be coming from voters breaking from Labour.

Redo_junkcharts_brexitpoll_2c



Tightening the bond between the message and the visual: hello stats-cats

The editors of ASA's Amstat News certainly got my attention with a recent article on school counseling. A research team asked two questions. The first was HOW ARE YOU FELINE?

Stats and cats. The pun got my attention and presumably also made others stop and wonder. The second question was HOW DO YOU REMEMBER FEELING while you were taking a college statistics course? Well, it's hard to imagine the average response to that question would be positive.

What also drew me to the article was this pair of charts:

Counselors_Figure1small

Surely, ASA can do better. (I'm happy to volunteer my time!)

Rotate the chart, clean up the colors, remove the decimals, put the chart titles up top, etc.

***

The above remedies fall into the V corner of my Trifecta checkup.

Trifectacheckup_junkcharts_image

The key to fixing this chart is to tighten the bond between the message and the visual. This means working that green link between the Q and V corners.

This much became clear after reading the article. The following paragraphs are central to the research (bolding is mine):

Responses indicated the majority of school counselors recalled experiences of studying statistics in college that they described with words associated with more unpleasant affect (i.e., alarm, anger, distress, fear, misery, gloom, depression, sadness, and tiredness; n = 93; 66%). By contrast, a majority of counselors reported same-day (i.e., current) emotions that appeared to be associated with more pleasant affect (i.e., pleasure, happiness, excitement, astonishment, sleepiness, satisfaction, and calm; n = 123; 88%).

Both recalled emotive experiences and current emotional states appeared approximately balanced on dimensions of arousal: recalled experiences associated with lower arousal (i.e., pleasure, misery, gloom, depression, sadness, tiredness, sleepiness, satisfaction, and calm, n = 65, 46%); recalled experiences associated with higher arousal (i.e., happiness, excitement, astonishment, alarm, anger, distress, fear, n = 70, 50%); current emotions associated with lower arousal (n = 60, 43%); current experiences associated with higher arousal (i.e., n = 79, 56%).

These paragraphs convey two crucial pieces of information: the structure of the analysis, and its insights.

The two survey questions measure two states of experiences, described as current versus recalled. Then the individual affects (of which there were 16 plus an option of "other") are scored on two dimensions, pleasure and arousal. Each affect maps to high or low pleasure, and separately to high or low arousal.
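The structure of the analysis can be sketched as a small mapping, using the affect groupings quoted from the article (the "other" option is ignored here):

```python
# Sketch of the coding scheme: each affect maps to high/low pleasure
# and, separately, high/low arousal. Groupings are taken from the
# quoted paragraphs of the Amstat News article.
unpleasant = {"alarm", "anger", "distress", "fear", "misery",
              "gloom", "depression", "sadness", "tiredness"}
low_arousal = {"pleasure", "misery", "gloom", "depression", "sadness",
               "tiredness", "sleepiness", "satisfaction", "calm"}

def score(affect: str) -> tuple[str, str]:
    """Map one affect to its (pleasure, arousal) coding."""
    pleasure = "low" if affect in unpleasant else "high"
    arousal = "low" if affect in low_arousal else "high"
    return pleasure, arousal

print(score("distress"))  # → ('low', 'high')
print(score("calm"))      # → ('high', 'low')
```

Aggregating these codes over the survey responses yields the four percentages reported in the quoted paragraphs.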

The research insight is that current experience was noticeably higher than recalled experience on the pleasure dimension, while the two were similar on the arousal dimension.

Any visualization of this research must bring out this insight.

***

Here is an attempt to illustrate those paragraphs:

Redo_junkcharts_amstat_feline

The primary conclusion can be read from the four simple pie charts in the middle of the page. The color scheme shines light on which affects are coded as high or low for each dimension. For example, "distressed" is scored as showing low pleasure and high arousal.

A successful data visualization for this situation has to bring out the conclusion drawn at the aggregated level, while explaining the connection between individual affects and their aggregates.