Suppose you’ve seen a coin come up heads 10 times in a row. What do you believe is likely to happen next? Three common responses:

- Heads
- Tails
- Equal probability of heads or tails.

Each is reasonable in its own context. The last answer is correct assuming the flips are independent and heads and tails are equally likely.

But as I argued here, if you see nothing but heads, you have reason to question the assumption that the coin is fair. So there’s some justification for the first answer.
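One way to quantify that suspicion is a quick two-hypothesis Bayesian sketch. The numbers here are illustrative assumptions, not anything established above: a 1% prior that the coin is biased, and a biased coin that lands heads 90% of the time.

```python
# Sketch: how much does seeing 10 heads shift belief toward a biased coin?
# Assumptions (mine, for illustration): prior P(biased) = 0.01, and a
# "biased" coin lands heads with probability 0.9.
p_biased_prior = 0.01
p_heads_if_biased = 0.9

# Likelihood of 10 heads in a row under each hypothesis
like_fair = 0.5 ** 10
like_biased = p_heads_if_biased ** 10

# Bayes' theorem: posterior P(biased | 10 heads)
posterior = (like_biased * p_biased_prior) / (
    like_biased * p_biased_prior + like_fair * (1 - p_biased_prior)
)
print(round(posterior, 3))  # ≈ 0.783
```

Even with a 1% prior, ten straight heads makes "biased" the more probable hypothesis, which is some justification for answering "heads."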

The reasoning behind the second answer is that tails are “due.” This isn’t true if you’re looking at independent flips of a fair coin, but it could be reasonable in other settings, such as sampling without replacement.

Say there are a number of coins on a table, covered by a cloth. A fixed number are on the table heads up, and a fixed number tails up. You reach under the cloth and slide a coin out. Every head you pull out increases the chances that the next coin will be tails. If there were an equal number of heads and tails under the cloth to begin with, then after pulling out 10 heads, tails are indeed more likely next time.
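A minimal sketch of the covered-table setup. The starting counts (20 heads up, 20 tails up) are made-up numbers, chosen only so the arithmetic is easy to follow:

```python
from fractions import Fraction

# Covered-table setup: 20 coins heads-up and 20 tails-up to begin with
# (the counts are illustrative). After sliding out k heads in a row,
# what is the chance the next coin is tails?
heads_up, tails_up = 20, 20

def p_next_tails(heads_drawn):
    remaining_heads = heads_up - heads_drawn
    return Fraction(tails_up, remaining_heads + tails_up)

print(p_next_tails(0))   # 1/2 before any draws
print(p_next_tails(10))  # 2/3 after pulling out ten heads
```

So in this setting, tails really are "due": each head removed leaves proportionally more tails under the cloth.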

**Related post**: Long runs

Fourth possibility: Whichever is the opposite of what I say out loud. That is under the assumption that the coin flipper knows how to control their throws and is looking to mess with me.

Fifth possibility: Imhotep is invisible.

1. The first answer is what Fat Tony would choose (a reference taken from Taleb’s Fooled by Randomness philosophy, in The Black Swan).

2. The second justification, I think, is a variant of the Monty Hall problem.

The final situation is subtle. If you throw some coins down on a table without looking, cover them with a cloth, and then slide them out one by one, the probability of each coin being tails is ½, right? Now if the number of heads and tails under the cloth is fixed AND KNOWN, then each tails you see increases the chance of heads and vice versa.

Does Bayes’ Theorem play a part in this at all? Not sure how; it just seems like that kind of problem.

Essentially you’re delaying when you find out the results of the flips. For example, you take a hundred coins, flip them once, and they land on the table. You have no idea how many are heads and how many are tails. You cover the flipped coins and pull them out; the only thing that’s changed is that you are randomly discovering the results of your flipping. You may have flipped 10 heads in a row but didn’t know it. Or maybe you alternated heads and tails on every flip, but when you pull the coins out you happen to pull 10 heads in a row. Again, nothing has changed from the first case: assuming fair conditions, there’s a 50/50 chance of each coin being heads or tails.
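This claim is easy to check by simulation. The sketch below (the trial count and seed are arbitrary choices of mine) flips 100 fair coins, shuffles the reveal order, and checks that, say, the 11th coin revealed is heads about half the time:

```python
import random

# Flip 100 fair coins, cover them, then reveal in a random order.
# Revealing in random order shouldn't change anything: the 11th coin
# revealed should still be heads with probability about 1/2.
random.seed(42)  # arbitrary seed for reproducibility

trials = 100_000
heads_on_11th = 0
for _ in range(trials):
    coins = [random.random() < 0.5 for _ in range(100)]  # the flips
    random.shuffle(coins)                                # reveal order
    if coins[10]:                                        # 11th revealed
        heads_on_11th += 1

print(heads_on_11th / trials)  # close to 0.5
```

The shuffle is in fact redundant for independent fair flips, which is exactly the commenter's point: delaying and reordering the discovery of the results changes nothing.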

I really like this post, as it nicely combines some mathematical thinking (probability) with conventional thinking and reasoning. One reason people are scared of learning math is because they think it’s overwhelmingly difficult. This post shows it doesn’t have to be.

It also provides a really good intro to “philosophy of probability”, IMO :)

Is it not more a matter of chance? And stating all the possibilities might just be stating the obvious.

#2 and #3 are simply incorrect; they are not at all ‘likely’ – and the assumptions of independent flips and equal likelihood have been violated / shown to be false, whether analyzed in a frequentist or Bayesian approach.

A frequentist approach to the analytical statistics would analyze the flips as a collective unordered set, but (obviously) would not ‘predict’ tails nor equal likelihood for future flips. (It’s not really ‘prediction’; rather, the statistical inference demonstrates the fallacy of the assumptions.)

A Bayesian analysis can use the sequence of heads and again predicts heads with virtual certainty (better than 8 sigma off the top of my head).
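One concrete version of such an analysis, under a uniform prior on the coin's heads probability (my assumption; the commenter doesn't specify a prior):

```python
# Sketch of one Bayesian analysis (the uniform prior is my assumption,
# not the commenter's exact setup): put a Beta(1, 1) prior on the coin's
# heads probability p. After observing 10 heads and 0 tails the posterior
# is Beta(11, 1), and the posterior predictive probability of heads is
# the posterior mean, (a + heads) / (a + b + flips).
a, b = 1, 1          # Beta(1, 1) = uniform prior
heads, flips = 10, 10

p_next_heads = (a + heads) / (a + b + flips)
print(p_next_heads)  # 11/12 ≈ 0.917
```

This predicts heads strongly, though "virtual certainty" depends heavily on the prior: a prior concentrated near fairness would give a much milder prediction.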

You have to know / assume something about the process generating your data (coin tossing versus sampling without replacement in this example) to apply any sort of statistical inference. IMHO, this post is a rather roundabout way to point that out: it’s rather misleading and irrelevant to talk about predicting coin flips as a comparison for your static sampling example.

Of course, in the real world, we rarely have either of these situations: we almost never see perfect sampling of a random process, nor do we see complete sampling of a static set. Instead, we see some unknown frequency of sampling some underlying random process (which may or may not suffer from regime switching), and we have a few barely justifiable assumptions about both the sampling frequency and the random generator….

-frank

“3.Equal probability of heads or tails

… assuming … heads and tails are equally likely”

That’s a tautology! You’re saying something is true, assuming that same thing is true. We have learnt nothing.

It’s not quite a tautology. First, the point in (3) is that people often wrongly assume any event with two outcomes has an equal probability of each. Second, they nevertheless also believe the contradictory principle that nature’s books must balance quickly. That is, they believe both that each coin toss has a 50-50 chance of heads, and that after a few heads, tails are “due.”

Would it make sense to construct a confidence interval for the number of heads in 10 coin flips, and see if 10 is within that interval? And if not within the interval, reject the hypothesis that the coin is fair? Just asking, as a student.
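One concrete answer to the student's question uses an exact binomial test rather than a confidence interval. A sketch, assuming the conventional two-sided test at the 5% level:

```python
from math import comb

# Exact binomial test of H0: p = 0.5, given 10 heads in 10 flips.
# The two-sided p-value sums the probability of every outcome at least
# as extreme (i.e. at most as probable) as the one observed.
n, k, p = 10, 10, 0.5

def binom_pmf(n, k, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

observed = binom_pmf(n, k, p)
p_value = sum(binom_pmf(n, i, p) for i in range(n + 1)
              if binom_pmf(n, i, p) <= observed)
print(p_value)  # 2/1024 ≈ 0.00195: reject fairness at the 5% level
```

So yes: by this test, 10 heads in 10 flips is strong evidence against a fair coin, which matches the intuition in the original post that all-heads data should make you question the fairness assumption.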

What is a “coin”?