Andrew Gelman touches on one of my favorite topics: prediction accuracy, and experts who cling to their predictions. Here's Andrew at the Monkey Cage blog.
His starting point is a piece by sociologist Jay Livingston on how various well-known economists made vague predictions (e.g. "I see inflation around the corner") and kept clinging to them (the inflation, they insist, is still coming).
Several theories have been offered to explain this behavior. One, going back to Kuhn, is that scientists stick to their beliefs in the face of negative evidence. Another is that the top economists have invested a lot in those now-questioned beliefs. A third is that when facts and theories collide, it is cognitively easier to manipulate the facts than to change one's theories.
Andrew speculates that "the cost of being wrong is less than the cost of admitting you were wrong." I think there is some truth to that. Suppose you are a famous economist whom people in the field look up to and frequently cite in their own arguments. You know that no matter what you say, your followers will repeat it, and if it turns out jarringly wrong, they will simply keep quiet. Under those conditions, you are under no pressure to correct yourself.
Back to the question of predictive accuracy. I recently read a press release from Microsoft boasting that they predicted the NO vote in the Scottish independence referendum. They said they gave something like an 80 percent chance of NO a few days (or a week) before the vote. And since the result was a NO, they were proven correct! What if I had made a 60-percent NO prediction? I probably would have declared victory too.
Any claim about predictive accuracy has to come with a formula for determining what it means to be accurate. For a one-time event like the NO vote, it is hard to come up with such a formula: a single 80-percent call cannot be separated from luck. You need a track record of many probabilistic predictions, plus an agreed-upon way of scoring them against the outcomes. Without such a formula, you shouldn't believe anyone's claim of accuracy!
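To make that concrete, here is a minimal sketch (in Python; the forecasters and the numbers are entirely made up for illustration) of one standard formula, the Brier score. It averages the squared gap between each stated probability and what actually happened, so a vague hedger and a sharp, well-calibrated forecaster end up with different scores once you have enough events to compare.

```python
def brier_score(probabilities, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes (lower is better)."""
    return sum((p - o) ** 2 for p, o in zip(probabilities, outcomes)) / len(outcomes)

# Hypothetical forecaster A: sharp calls that mostly track the outcomes.
probs_a = [0.8, 0.9, 0.2, 0.8, 0.9]
# Hypothetical forecaster B: hedges at 60 percent on every event.
probs_b = [0.6, 0.6, 0.6, 0.6, 0.6]
# What actually happened (1 = event occurred, 0 = it did not).
outcomes = [1, 1, 0, 1, 1]

print(brier_score(probs_a, outcomes))  # 0.028 -- rewarded for sharp, correct calls
print(brier_score(probs_b, outcomes))  # 0.200 -- the vaguer forecasts score worse
```

With only the one referendum to score, both Microsoft's 80-percent call and my hypothetical 60-percent call collapse to the same single data point, which is exactly why a one-off "we got it right" tells us so little.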
Experts love their models so much they forget about the real world they are supposed to be modeling. They invent "null hypotheses" that no one actually believes, and find wee p-values everywhere! Then we wonder why things don't turn out the way they said they would, and they tell us the model is fine; it's reality that's wrong.
Posted by: Nate | 10/08/2014 at 12:12 PM