
It absolutely is 1/2 by definition of a "fair coin". It's just staggeringly unlikely that you would ever get to this point in reality. By all means reject the premise that the coin is fair, but then you're not objecting to the idealised statement in the book but to a different one that you've created.

Spoken like a true probabilist. It's unfortunate that probabilists like to use euphemisms like "coin" for things that they *define*. It confuses the rest of us who think that a "coin" might be some actual physical object.

Something I read recently pointed out that one consequence of a university probability education is believing that, after a coin has come up heads 100 times in a row, the probability of heads on the next toss is still 0.5, when anyone with real experience of gambling would conclude that there was something wrong with the coin.
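The gambler's intuition can be made precise with a small Bayesian calculation (a sketch; the 1-in-a-million prior on a trick coin is my assumption, not anything from the thread): even a tiny prior probability that the coin is double-headed swamps the fair-coin hypothesis after 100 heads.

```python
from fractions import Fraction

# Assumed prior: a 1-in-a-million chance the coin is double-headed.
prior_biased = Fraction(1, 1_000_000)
prior_fair = 1 - prior_biased

# Likelihood of 100 heads in a row under each hypothesis.
lik_fair = Fraction(1, 2) ** 100   # about 8e-31
lik_biased = Fraction(1, 1)        # a double-headed coin always shows heads

# Bayes' rule: posterior probability that the coin is fair.
posterior_fair = (prior_fair * lik_fair) / (
    prior_fair * lik_fair + prior_biased * lik_biased)
print(float(posterior_fair))  # astronomically small
```

Even starting from overwhelming confidence in the coin, 100 heads leaves essentially no posterior belief that it is fair, which is exactly the gambler's conclusion.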

It seems to me that in your explanation above, the statistician and the probabilist are working the problem from opposite ends. The statistician is starting with the sampled data and inferring something about the population and sampling frame (in this case, the coin and the sampling procedure). The probabilist is starting with known characteristics of the population (i.e. the coin is fair) and the sampling procedure (each sample comes from an independent event), then estimates the probability of seeing a particular sample distribution. Is this fair, or have I misunderstood something?
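That reading matches the usual framing. The two directions can be sketched in a few lines (function names are mine, for illustration): the probabilist goes from a known coin to the chance of the data, the statistician from the observed data back to the coin.

```python
from math import comb

# Probabilist's direction: known coin (p = 0.5), compute the chance of the data.
def prob_of_k_heads(n, k, p=0.5):
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Statistician's direction: known data, infer the coin.
def mle_heads_prob(n, k):
    return k / n  # maximum-likelihood estimate of p

print(prob_of_k_heads(10, 10))  # 1/1024, about 0.001
print(mle_heads_prob(10, 10))   # 1.0: the data suggest the coin isn't fair
```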

Tom: Exactly. That's the origin of the term "inverse probability".

What if the fairness of the coin was properly established with some robust scientific method and then it came up heads 1,000,000 times? Extremely unlikely, but we are talking probability concepts here, so it is conceptually possible. I guess the book's author was trying to demonstrate that even if a guaranteed fair coin had a long streak, that does not influence the next single toss outcome.
As for betting on that single outcome, I'd say people who rush to make bets in such cases are betting on the low probability of the streak continuing, rather than on the constant 0.5 of the next toss.
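The "does not influence the next toss" claim is just independence, which can be checked by simulation at a less extreme streak length (5 heads instead of a million; the streak length and sample size here are arbitrary choices, a sketch rather than a proof):

```python
import random

random.seed(7)

# Simulate a fair coin; after every run of 5 heads, record the next toss.
# Under independence, those "next tosses" should still be about 50% heads.
flips = [random.random() < 0.5 for _ in range(200_000)]
next_after_streak = [flips[i + 5] for i in range(len(flips) - 5)
                     if all(flips[i:i + 5])]
ratio = sum(next_after_streak) / len(next_after_streak)
print(ratio)  # close to 0.5
```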

Good explanation of the difference between probability and statistics. I wish I had this many years ago. I am only now getting a grasp on the difference between probability and null hypothesis significance testing (NHST) versus exploratory data analysis and Bayesian thinking. I was taught the former, when what I often want is the latter.

The comment by Dimitri above is interesting because it reveals a form of innumeracy. (1/2) to the millionth power is not "extremely unlikely," it's in any real physical sense impossible.
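To put a number on "impossible," a quick back-of-the-envelope calculation: the probability is around 10^-301030, while the number of atoms in the observable universe is usually estimated at only around 10^80.

```python
from math import log10

# (1/2)^1,000,000 in orders of magnitude: far below any physical probability.
exponent = -1_000_000 * log10(2)
print(round(exponent))  # about -301030, i.e. roughly 10^-301030
```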

I will throw in one more twist. We are given three incompatible statements:
1. The coin is "fair." This concept is not clearly defined, but I will take it to mean that, when flipped, there is a 1/2 chance it lands heads.
2. It was flipped 1 million times.
3. All flips turned out heads.
Any two of these three statements can be true, but not all three.

One possibility is Kaiser's: 2 and 3 are true, hence 1 is false. Another possibility is that 1 and 2 are true, but 3 is false. After all, how would we know that a coin came up heads a million times? We're taking someone's word for it. See Section 3 of this paper for a similar example.

In my daily experiences with data (albeit not a large sample size) I don't know if the coin is fair. To extend the analogy, I'm usually flipping some foreign bottle cap with little prior knowledge as to whether the cap comes up on the "Presidente" side more often than the other side.
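"Little prior knowledge" about a bias is naturally modeled with Beta-Binomial updating (a sketch; the flat Beta(1, 1) prior and the example counts are my assumptions): each observed flip shifts the posterior for the cap's bias.

```python
# Posterior for an unknown cap bias, starting from a flat Beta(1, 1) prior.
def posterior_mean(heads, tails, a=1, b=1):
    # The posterior is Beta(a + heads, b + tails); return its mean.
    return (a + heads) / (a + b + heads + tails)

print(posterior_mean(0, 0))  # 0.5 before any flips
print(posterior_mean(7, 3))  # 8/12, about 0.67 after 7 "Presidente", 3 other
```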


Kaiser Fung. Business analytics and data visualization expert. Author and Speaker.