Andrew chimes in to defend the polling profession (link). He's responding to a piece in the London Review of Books that claims the polls for the 2022 U.S. midterm elections predicted a red wave that did not materialize.
He points to the Economist model, a poll aggregation that does not insert punditry-type assumptions. This model is different from, say, Nate Silver's models for FiveThirtyEight, which incorporate some subjective judgment.
The poll aggregation model did remarkably well in predicting the overall outcomes in both the House and the Senate. In the House, the model predicted 225 Republican seats (208-244) compared to the eventual outcome of 222. In the Senate, the model predicted 51 Republican seats (46-55) compared to the eventual outcome of 49.
The poll aggregation model did not predict a red wave: the interval estimates of seat counts for the two parties overlap substantially, with either side able to end up in control. In other words, it predicted a close election, which is what it turned out to be.
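To make that overlap concrete, here is a minimal sketch in Python. The interval endpoints are the ones quoted above; the chamber sizes and control thresholds are standard facts, with the Senate threshold asymmetric because the Democrats held the tie-breaking Vice President:

```python
# Republican seat intervals quoted above from the Economist poll aggregation;
# Democratic intervals follow from the chamber sizes. Senate control in 2022
# was asymmetric: Democrats held the tie-breaking VP, so 50 seats sufficed
# for them, while Republicans needed 51.
chambers = {
    # name: (chamber size, GOP control threshold, Dem control threshold,
    #        GOP interval low, GOP interval high)
    "House":  (435, 218, 218, 208, 244),
    "Senate": (100,  51,  50,  46,  55),
}

for name, (size, gop_need, dem_need, gop_lo, gop_hi) in chambers.items():
    dem_lo, dem_hi = size - gop_hi, size - gop_lo
    print(f"{name}: GOP [{gop_lo}-{gop_hi}], Dem [{dem_lo}-{dem_hi}]; "
          f"GOP control plausible: {gop_hi >= gop_need}, "
          f"Dem control plausible: {dem_hi >= dem_need}")
```

Both parties' intervals contain seat counts on either side of the control threshold in both chambers, which is exactly the sense in which the model called a close election rather than a wave.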
***
Andrew speculated that the LRB writer thought the polls predicted a red wave because he was "sticking to an existing narrative." The journalist regurgitated the "conventional wisdom," which happened to be wrong.
Andrew distinguishes between poll aggregation and poll forecasting, the latter combining poll data with punditry. He also focuses on the aggregate outcome (the number of races predicted correctly), not the outcomes of individual races (e.g., did the polls get it right in state X?).
***
Andrew's post triggered a few thoughts of mine.
I dislike the conventional metrics of accuracy that treat every race as equally hard to predict. Given that most races in any U.S. election are uncompetitive - as in, even a chimpanzee can predict the winner - a serious metric of accuracy should only consider competitive races. This issue is similar to grade inflation in education: the nominal scale (A-F) is misleading when all grades are compressed into the A-B range.
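Here is a sketch of what such a metric might look like. The five-point cutoff, the record format, and the function name are all illustrative assumptions, not any standard definition:

```python
# Hypothetical sketch: accuracy computed only over competitive races,
# where "competitive" means the final margin was within some cutoff.
# The 5-point cutoff and the record format are illustrative assumptions.

def competitive_accuracy(races, margin_cutoff=5.0):
    """races: list of (predicted_winner, actual_winner, final_margin_pct)."""
    competitive = [r for r in races if r[2] <= margin_cutoff]
    if not competitive:
        return None  # no competitive races to score
    correct = sum(1 for pred, actual, _ in competitive if pred == actual)
    return correct / len(competitive)

# Toy example: two blowouts a chimpanzee could call, plus two toss-ups.
races = [
    ("R", "R", 30.0),  # safe seat, trivially correct
    ("D", "D", 25.0),  # safe seat, trivially correct
    ("R", "D", 1.2),   # toss-up, missed
    ("D", "D", 2.4),   # toss-up, correct
]
print(competitive_accuracy(races))  # 0.5
```

On this toy data, the naive accuracy of 3/4 flatters the forecaster; restricted to competitive races, it drops to 1/2.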
Worse than sticking to an existing narrative, the journalist is reinforcing a false narrative. I see the world as comprising two types of people in their relationship to data: data-first versus story-first (link). People who are story-first start with the story and find data that support it. In the case of the 2022 elections, Republican mouthpieces were everywhere pushing the red-wave narrative. The polling data were an inconvenience that didn't matter.
Democrats can also be story-first. In fact, some Democrats might have allowed the false narrative to persist in the hope that fear of a red wave would drive their supporters to the polls.
When I came up with the data-first/story-first framework, I wanted something non-judgmental. Story-first is not always wrong; indeed, the data can be consistent with the story. In my experience, story-first is the default human instinct, even though, as a data person, I wish it were otherwise. I'm definitely data-first, but data-first isn't right for every decision either, especially when the data are inadequate, contradictory, or inappropriate.
The recent U.S. elections have been so close that it's virtually impossible to predict the result accurately. Statistical models are calibrated to say exactly that: the interval estimates shown above overlap substantially. This is a situation in which a statistician would be unwilling to place a strong bet on either side. For example, in a Bayesian sense, the Republicans won control of the Senate in about 60% of the model's simulations, which means the Democrats won control in the other 40%. Real life is just one simulation!
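To illustrate how a 60/40 split of this kind arises from simulation, here is a minimal sketch. Every input below is made up for the example; none of the numbers come from the Economist model:

```python
import random

random.seed(2022)

# Illustrative sketch only -- none of these inputs come from the Economist
# model. 35 Senate seats are up; the rest are held over. Most contested
# races are safe for one party; five toss-ups decide control.
held_r, held_d = 30, 35          # seats not up for election (assumed)
safe_r, safe_d = 18, 12          # contested but effectively certain (assumed)
tossups = [0.6, 0.55, 0.6, 0.5, 0.55]  # assumed GOP win prob per toss-up

n_sims, gop_control = 100_000, 0
for _ in range(n_sims):
    gop_seats = held_r + safe_r + sum(random.random() < p for p in tossups)
    if gop_seats >= 51:          # GOP needs 51; Dems control 50-50 via the VP
        gop_control += 1

# With these made-up inputs, GOP control shows up in roughly 60% of
# simulated worlds -- and real life is just one draw from the distribution.
print(f"GOP wins Senate control in {gop_control / n_sims:.0%} of simulations")
```

The point is that the 60% is a tally over simulated worlds, not a prediction of a red wave or its absence.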
The model hitting the right ranges of results was a matter of skill in collecting and analyzing data. The model coming quite close to real life was a matter of luck. People find it frustrating when statisticians don't give a clear-cut answer. The real issue is not the statistics, it's the difference between story-first and data-first thinking. Story-first thinking is typically black and white; data-first thinking often involves shades of gray - as Andrew likes to say, we "embrace uncertainty".
P.S. [4/28/2023] Reader Mark P. has some further thoughts on this topic. His point about the inutility of election forecasting reminds me of this past post.