
The sum and the parts

Over the last few years, Intrade — with headquarters in Dublin, where the gambling laws are loose — has become the biggest success story among a new crop of prediction markets. The world’s largest steel maker, Arcelor Mittal, now runs an internal market allowing its executives to predict the price of steel. Best Buy has started a market for employees to guess which DVDs and video game consoles, among other products, will be popular. Google and Eli Lilly have similar markets. The idea is to let a company’s decision-makers benefit from the collective, if often hidden, knowledge of their employees.


I haven't participated in any "prediction market," but past statistical work tells me that within any such market, roughly half the participants will have individual track records better than the average. Thus, you can do better than the market average if you can predict the predictors: figure out which participants would drag your average down, and leave them out.
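
A minimal simulation sketch of this point, under invented assumptions (Gaussian forecast errors and a uniform spread of trader skill; nothing here comes from real prediction-market data): roughly half the simulated traders end up with above-average track records, and a consensus built only from the historically better half beats the consensus of everyone.

```python
import random

random.seed(0)

N_TRADERS, N_HISTORY, N_FUTURE = 100, 500, 500

def mean(xs):
    xs = list(xs)
    return sum(xs) / len(xs)

def rmse(errors):
    return mean(e * e for e in errors) ** 0.5

# Hypothetical skill levels: each trader's forecast error is Gaussian,
# with a personal standard deviation (lower = more skilled).
skills = [random.uniform(0.5, 3.0) for _ in range(N_TRADERS)]

# Forecast errors on past ("history") and upcoming ("future") events.
history = [[random.gauss(0, s) for _ in range(N_HISTORY)] for s in skills]
future = [[random.gauss(0, s) for _ in range(N_FUTURE)] for s in skills]

# Track record = RMSE on past events (lower RMSE = better track record).
track = [rmse(h) for h in history]
avg_track = mean(track)
above_avg = sum(t < avg_track for t in track)
print(f"traders with above-average track records: {above_avg}/{N_TRADERS}")

# Market consensus = average of everyone; "predicting the predictors" =
# averaging only the half with the better historical track records.
ranked = sorted(range(N_TRADERS), key=lambda i: track[i])
top_half = ranked[: N_TRADERS // 2]

everyone = [mean(future[i][j] for i in range(N_TRADERS)) for j in range(N_FUTURE)]
selective = [mean(future[i][j] for i in top_half) for j in range(N_FUTURE)]

print(f"consensus error, all traders     : {rmse(everyone):.3f}")
print(f"consensus error, better half only: {rmse(selective):.3f}")
```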

In other words, averaging opinions is a double-edged sword. While some participants contribute "hidden" knowledge, others contribute "bad" information, and the bad gets averaged in too.

In substance, prediction markets are no different from the so-called ensemble predictors that have been studied extensively in statistical data mining in recent years. My view is that such methods have proven more useful for stabilizing error rates than for improving the average error rates themselves.
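
A toy sketch of that claim, again under made-up assumptions (base models sharing a systematic bias plus independent Gaussian training noise; this is not a model of any specific ensemble method): averaging the base models shrinks the run-to-run spread of the error much more than it shrinks the mean error, because the shared component does not average away. With these particular numbers, the mean squared error drops by roughly half while its spread drops by a factor of about six.

```python
import random
import statistics

random.seed(0)

TRIALS, B = 10_000, 25    # B = number of base models averaged in the ensemble
BIAS, SIGMA = 1.0, 1.0    # assumed shared bias and per-model training noise
TRUTH = 0.0               # the value being predicted

single_err, ensemble_err = [], []
for _ in range(TRIALS):
    # Each base model: shared systematic error plus its own random noise.
    preds = [TRUTH + BIAS + random.gauss(0, SIGMA) for _ in range(B)]
    single_err.append((preds[0] - TRUTH) ** 2)          # one model on its own
    ensemble_err.append((sum(preds) / B - TRUTH) ** 2)  # averaged ensemble

for name, errs in [("single model", single_err), ("ensemble", ensemble_err)]:
    print(f"{name:12s}  mean squared error {statistics.mean(errs):.2f}"
          f"   run-to-run spread {statistics.pstdev(errs):.2f}")
```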

Phil's take can be read here.

Reference: "Odds Are, They'll Know '08 Winner", New York Times, February 13, 2007.

Comments


失踪

This reminds me of expert interviews using the Delphi design (invented by the RAND Corporation).
"Experts" are anonymously asked to give their opinions on an uncertain and vaguely defined subject. In a second round, they are asked the same questions again, this time with some feedback (the group average) from the first round. This is supposed to improve the results of the study, because real experts stick with their opinions while the others change their minds and follow the feedback.

Chris Hibbert

The studies on Prediction Markets have shown them to produce better forecasts than any other mechanism they've been tested against. There seems to be an effect that people who know more bet more, and people who are relatively ill-informed learn that they are losing money and drop out. The fact that people are backing their opinions with money means that the presence of "bad information" attracts those with better information; they can make money by betting against those who are less informed.

James Surowiecki's "The Wisdom of Crowds" is a good reference. You can find a lot more at midasoracle.com, or by looking up Robin Hanson or Justin Wolfers.

Disclaimer: I'm writing Open Source Prediction Market Software called Zocalo.

Kaiser

Chris, I'll visit the websites and educate myself some more on the research out there. What is mystifying to me is what the well-informed experts might gain by diluting their predictions by averaging them with ill-informed ones.

Chris Hibbert

Experts seem to overestimate their abilities. Philip Tetlock has been studying experts' abilities; he wrote Expert Political Judgment, and gave a recent talk at the Long Now Foundation. I blogged about the talk at Overcoming Bias. Tetlock says that Prediction Markets can do better than all but the very best individual experts. If you're planning to rely on experts, the evidence seems to show that unless you've fortuitously chosen one of the top few percent, you'd do better with a market. Markets are more reliable.
