
Deaths as percent neither of cases nor of population. Deaths as percent of normal.

Yesterday, I posted a note about excess deaths on the book blog (link). The post was inspired by a nice data visualization by the New York Times (link). This is a great example of data journalism.


Excess deaths is a superior metric for measuring the effect of Covid-19 on public health. It's better than deaths as a percent of cases, and better than deaths as a percent of the population. What excess deaths measure is deaths as a percent of normal, where normal is usually defined as the average number of deaths in the respective week in past years.

The red areas indicate how far the deaths in the Southern states are above normal. The highest peak, registered in Texas in late July, is 60 percent above the normal level.


The best way to appreciate the effort that went into this graphic is to imagine receiving the outputs from the model that computes excess deaths: a three-column spreadsheet with columns "state", "week number" and "estimated excess deaths".

The first issue is unequal population sizes. More populous states of course have higher death tolls. Transforming death tolls to an index pegged to the normal level solves this problem. To produce this index, we divide actual deaths by the normal level of deaths. So the spreadsheet must be augmented by two additional columns, showing the historical average deaths and actual deaths for each state for each week. Then, the excess death index can be computed.
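The index computation described above can be sketched in a few lines. This is a minimal illustration with made-up numbers and hypothetical field names, not the Times' actual pipeline:

```python
# Sketch of the excess-death index: actual deaths divided by the normal
# (historical average) level, so the index is comparable across states.
# Rows, column names, and figures are all hypothetical.
rows = [
    {"state": "TX", "week": 30, "actual_deaths": 4800, "normal_deaths": 3000},
    {"state": "TX", "week": 31, "actual_deaths": 4500, "normal_deaths": 3000},
]

for row in rows:
    # 1.0 means normal; 1.6 means 60 percent above normal.
    row["excess_index"] = row["actual_deaths"] / row["normal_deaths"]

print(rows[0]["excess_index"])  # 1.6
```

Dividing by the normal level, rather than subtracting it, is what removes the population-size effect: a 60-percent excess means the same thing in Texas as in Vermont.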

The journalist builds a story around the migration of the coronavirus between regions as it rages across different states in different weeks. To this end, the designer first divides the dataset into four regions (South, West, Midwest and Northeast). Within each region, the states must be ordered. For each state, the week of peak excess deaths is identified, and the peak index is used to sort the states.
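The ordering step might look like the following. The states, weekly series, and index values here are all invented for illustration:

```python
# Sketch of ordering states within a region by peak excess-death index.
# Weekly index series per state are hypothetical.
weekly_index = {
    "TX": [1.1, 1.6, 1.4],
    "FL": [1.0, 1.2, 1.3],
    "GA": [1.2, 1.5, 1.1],
}

# Find each state's peak index, then sort states from worst peak down.
peaks = {state: max(series) for state, series in weekly_index.items()}
order = sorted(peaks, key=peaks.get, reverse=True)
print(order)  # ['TX', 'GA', 'FL']
```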

The graphic utilizes a small-multiples framework. Time occupies the horizontal axis, by convention. The vertical axis is compressed so that the states are not too distant. For the same reason, the component graphs are allowed to overlap vertically. The benefit of the tight arrangement is clearest for the Northeast, as those peaks are particularly tall. The space-saving appearance reminds me of sparklines, championed by Edward Tufte.

There is one small tricky problem. In most of June, Texas suffered at least 50 percent more deaths than normal. The severity of this excess death toll is shortchanged by the low vertical height of each component graph. What forced such congestion is probably the data from the Northeast. For example, New York City:



New York City's death toll was almost 8 times the normal level at the start of the epidemic in the U.S. If the same vertical scale is maintained across the four regions, then the Northeastern states dwarf all else.


One key takeaway from the graphic for the Southern states is the persistence of the red areas. In each state, for almost every week of the entire pandemic period, actual deaths have exceeded the normal level. This is a strong indication that the coronavirus is not under control.

In fact, I'd like to see a second set of plots showing the cumulative excess deaths since March. The weekly graphic is better for identifying the ebb and flow, while the cumulative graphic takes the measure of the total impact of Covid-19.
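The cumulative view is just a running sum of the weekly excess (actual minus normal). A minimal sketch, with invented weekly figures:

```python
from itertools import accumulate

# Sketch of the cumulative-excess view: weekly excess deaths
# (actual minus normal) summed since March. Figures are made up.
weekly_excess = [200, 500, 1200, 900, 400]

cumulative = list(accumulate(weekly_excess))
print(cumulative)  # [200, 700, 1900, 2800, 3200]
```

Where the weekly series ebbs and flows, the cumulative series only rises (or flattens), which is exactly why it conveys total impact better than the weekly chart.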


The above description leaves out a huge chunk of work related to computing excess deaths. I assumed the designer receives these estimates from a data scientist. See the related post in which I explain how excess deaths are estimated from statistical models.




A small problem with excess deaths is that it should also be related to previous periods/years. If a region has a previous period of lower death rates than normal, from a statistical perspective that is likely to (sooner or later) be followed by a period of excess deaths as there is a natural "time shift" of mortality rates.


M: That's exactly right. Typically, you'd use multiple years of history, and you can even estimate the standard error based on history. Most of these excesses are large enough that they are probably beyond statistical variability.
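The standard-error idea mentioned here can be sketched simply: compare this year's deaths for a given week with the history of that same calendar week. All figures below are hypothetical:

```python
from statistics import mean, stdev

# Sketch of a z-score from historical variability: how many standard
# deviations is this year's weekly death count above the multi-year
# average for the same calendar week? Figures are hypothetical.
past_years = [3000, 3100, 2900, 3050, 2950]  # same week, prior years
this_year = 4800

z = (this_year - mean(past_years)) / stdev(past_years)
print(round(z, 1))  # a z-score far beyond ordinary year-to-year variability
```

With a z-score this large, the excess cannot plausibly be explained by normal year-to-year fluctuation, which is the point made above.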

Joseph Severs

If the severity of the peak Texas death toll is ‘shortchanged’ by forcing consistent axes across regions, perhaps a more reasonable conclusion is that Texas and other southern states are handling COVID-19 better than the Northeast.


JS: I think that's a fair conclusion. The only "excuse" the Northeast officials can make is that Texas and other states have the benefit of witnessing the disaster (and mis-steps).


The EU has EuroMOMO:

"EuroMOMO is a European mortality monitoring activity, aiming to detect and measure excess deaths related to seasonal influenza, pandemics and other public health threats."

It uses Z-scores to determine statistical abnormalities.

The charts there suggest that the differences between countries are not always what they seem: the "deviation" in my own country, the Netherlands, seems worse than that in Sweden, often used here as an example of "too lax". (But perhaps similar to the 2018 flu.)


Rolf: Thanks for this note. I first mentioned EuroMOMO in April when the Economist started reporting on these numbers. See this post.
I don't understand those graphs. The assumed "normal" level is way off the actual statistics - just look at the everything chart, at the start of each new year. The methodology page promises that the user can select between different models and configure a few things but maybe that wasn't implemented.
The concept of a confidence interval around the normal level (implemented as a z-score) is important. It shows the variability of death levels from year to year. To call a z-score between 2 and 4 "low excess" is strange. In standard statistical convention, 2 is unlikely, 3 is rare, above 3 is extremely rare, over 15 is worse than Hillary losing the election :)

The comments to this entry are closed.