Having picked up and breezed through David Moore's book The Opinion Makers, I am reminded of the promised second installment of my "close reading" of Charles Seife's Proofiness. Previously, I covered the first part of Seife's book; its second part applies the ideas of the first to political issues. (My summary review of Proofiness is here.)
Moore is a former Gallup pollster; perhaps he would consider himself a "reformed" pollster. He shares Seife's view of political polling: that polling has been usurped by the political apparatus and abused by the mass media; that polls do not reflect so-called "public opinion"; that politicians are foolhardy to rule by poll results; and that journalists (and politicians) commission polls to validate their agendas or preconceived stories. In short, both authors think polls usurp democracy, an ironic conclusion since polling's inventors promised the opposite.
***
Moore's book is short (about 150 pages) and feels even shorter because it has only one main thesis, but what a powerful thesis it is: all modern poll results are misleading because they include the views of large numbers of people who have no knowledge of, interest in, or conviction about the topic at hand. When a "don't know" category is reported, it is almost always a criminal underestimate of the real proportion of people who don't know or don't care. In poll after poll, the real insight is that most Americans do not follow major political events and thus have neither the knowledge nor any stable opinion about pretty much anything. But the pollsters don't tell you that.
This trickery is accomplished in two ways. First, the so-called "forced choice" design poses every question as a yes/no choice, and a "don't know" is recorded only when the respondent refuses to pick either option. In the rare poll that offers an explicit "don't know" option, the number selecting it is many times higher.
Second, to forestall "don't know" answers, pollsters often feed information to respondents, along the lines of "This morning, the Egyptian strongman resigned. What do you think ...?" People who were unaware of the development become aware of it by answering the poll! As Moore rightly asserts, "Once respondents have been fed any information, they no longer represent the general public" (p. 146).
One measure of the failure of polling is that respondents turn out to have very little conviction in their expressed opinions. When asked whether they would be "upset" if a policy they oppose were implemented, a surprisingly large number of respondents say they don't care.
For the statistically minded, I'd suggest jumping from Chapter 1 directly to Chapters 7 and 8, where Moore discusses some statistical topics. Chapter 7 contains material on low and declining response rates (80% in the 1960s to below 25% today) and the unresolved question of whether nonresponders actually differ from responders. Chapter 8 covers a variety of poll design issues, such as how the wording and order of questions dramatically affect responses, and the rise of cell phones and the Internet. Seife, in his Chapter 4, has a nice treatment of the wording issue.
The other chapters of The Opinion Makers detail a sequence of examples illustrating Moore's main thesis, forming a kind of chronology of major polling efforts, and controversies, of the recent past.
***
Both Seife and Moore look at pre-election polls that are used to "call" elections. They accept the "conventional wisdom" among pollsters that past errors were primarily due to "poor timing": voters changed their minds between the time they were polled and the time they cast their ballots. This is certainly plausible, but in my view they could have taken this line of thinking to its logical conclusion: the average person doesn't take our politics seriously enough to hold any real opinions; whatever positions they express are subject to change; thus, pre-election polls are a waste of time, possibly excepting the ones done within days of the election. On exit polls, Seife does ask the question of whether our lives would be diminished if exit polls didn't exist.
Both Seife and Moore lament that polls have become a "journalistic invention" used to create "pseudo-events" which allow journalists to write what they would have written without the polls. Moore believes the watershed was when the major polling companies started pairing up with national media outlets, losing their independent status.
***
Both Seife (Chapter 4) and Moore (Chapter 3) recount the lesson of the Literary Digest straw poll, which failed spectacularly by predicting that Alf Landon would beat Roosevelt in a landslide in the 1936 presidential election. The lesson is that a large sample is necessary but not sufficient for an accurate prediction; because Literary Digest drew its sample from automobile and telephone lists (neither good was as ubiquitous then as it is now), the sample was biased against lower-income people, who were more likely to vote Democratic.
Seife takes this as an opportunity to explain the two types of polling error, technically known as bias and variance. Variance is the variability of results from sample to sample and is captured by the poll's margin of error. Bias is a systematic error caused by selecting a sample that does not adequately represent the population. This is a lovely real-life example illustrating the concepts.
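To make the distinction concrete, here is a minimal simulation sketch of my own (neither book contains code; the population split, the skew of the contact list, and the sample sizes below are invented purely for illustration). Repeated small random samples of the full population scatter around the truth, which is variance; a huge sample drawn from a skewed contact list misses in one direction no matter how large it is, which is bias.

```python
import random

random.seed(0)

def poll(frame, n):
    """Return the share of 'D' voters in a simple random sample of size n."""
    sample = random.sample(frame, n)
    return sample.count("D") / n

# Hypothetical electorate: 55% vote D, 45% vote R (split invented for illustration).
population = ["D"] * 55_000 + ["R"] * 45_000

# A skewed contact list with D voters under-represented, loosely analogous to the
# Literary Digest's automobile and telephone lists.
skewed_list = ["D"] * 35_000 + ["R"] * 45_000

# Variance: small unbiased samples scatter around the true 55%.
unbiased_estimates = [poll(population, 1_000) for _ in range(5)]

# Bias: even a very large sample from the skewed list misses in one direction.
biased_estimate = poll(skewed_list, 50_000)

print("true D share:", 0.55)
print("five unbiased polls (n=1,000):", [round(x, 3) for x in unbiased_estimates])
print("one biased poll (n=50,000):", round(biased_estimate, 3))
```

Running this shows the small unbiased polls hovering around 0.55, while the enormous poll from the skewed list sits stubbornly near the list's own share of D voters; no increase in sample size will pull it back to the truth.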
The narrative then takes a wrong turn when Seife states: "[Literary Digest] thought their sample size made their poll accurate to within a tiny fraction of a percent, when in fact it couldn’t be trusted within ten or fifteen points." (p. 107)
Unfortunately, bias is not measured as plus/minus points. A good analogy is an archer. Archer V(ariance) sprays his arrows around the bull’s eye, frequently missing the target but on average, his aim is at the bull’s eye. Archer B(ias) hits the same spot most of the time but because of a slight tilt in his motion, his arrows consistently hit a spot away from the bull’s eye. We can estimate V’s error using a concentric circle around the bull’s eye but to describe B’s error, we use the distance between the bull’s eye and the spot that B typically hits.
In the case of the straw poll, the concentric circle is negligibly small because of the enormous sample size; the bias should be measured as the amount by which the poll overstated the percentage voting for Landon, an error in one direction.
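To see why the concentric circle is negligible at that scale, here is a back-of-the-envelope sketch of my own, using the standard 95% margin-of-error formula for a proportion and the commonly cited figure of roughly 2.4 million returned ballots for the Digest poll (neither the formula application nor that figure comes from the books themselves).

```python
import math

# 95% margin of error for a proportion: +/- 1.96 * sqrt(p * (1 - p) / n).
# n of roughly 2.4 million returned ballots is the commonly cited figure for the
# Literary Digest poll; p = 0.5 gives the widest (most conservative) interval.
n = 2_400_000
p = 0.5
margin_of_error = 1.96 * math.sqrt(p * (1 - p) / n)

print(f"margin of error: +/- {margin_of_error * 100:.3f} percentage points")
# Prints roughly +/- 0.063 points, the "tiny fraction of a percent" Seife mentions.
# The poll's actual miss was many times larger and in one direction: bias, not variance.
```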
***
Seife's coverage is broader than polls. He argues that our politics has been corrupted by bad statistics in a variety of other areas, including vote counts, voting schemes, gerrymandering, census politics, ignorance of statistics among Supreme Court justices, and government propaganda (Chapters 4 to 8). Both Seife and Moore stress that both political parties are guilty, and give ample examples to demonstrate their wayward behavior.
The second part of Proofiness covers material that is less trodden by others, and it is well worth the read.
***
A rich open question in polling is whether nonresponse constitutes bias. Moore seems undecided on this point: he cites nonresponse as a source of bias when explaining the spectacular failure of the Literary Digest's 1936 straw poll, while also describing research by Pew claiming that nonresponders are not meaningfully different from responders. The subject is, unfortunately, almost impossible to study because, by definition, nonresponders don't want to talk to us.