College courses often begin by trying to weaken your confidence in common sense. For example, a psychology course might start by presenting optical illusions to show that there are limits to your ability to perceive the world accurately. I’ve seen at least one physics textbook that also starts with optical illusions to emphasize the need for measurement. Optical illusions, however, take considerable skill to create. The fact that they are so contrived illustrates that your perception of the world is actually pretty good in ordinary circumstances.

For several years I’ve thought about the interplay of statistics and common sense. Probability is more abstract than physical properties like length or color, and so common sense is more often misguided in the context of probability than in visual perception. In probability and statistics, the analogs of optical illusions are usually called paradoxes: St. Petersburg paradox, Simpson’s paradox, Lindley’s paradox, etc. These paradoxes show that common sense can be seriously wrong, without having to consider contrived examples. Instances of Simpson’s paradox, for example, pop up regularly in application.
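Simpson's paradox in particular is concrete enough to verify in a few lines. The sketch below uses the classic kidney-stone treatment data (Charig et al. 1986) that is standardly used to illustrate the paradox; the numbers come from that textbook example, not from anything in this post. Treatment A has the higher success rate in each subgroup, yet the lower success rate overall:

```python
# Classic kidney-stone data (Charig et al. 1986), the standard
# illustration of Simpson's paradox: treatment A wins in each
# subgroup, yet loses in the aggregate.
groups = {
    "small stones": {"A": (81, 87),   "B": (234, 270)},
    "large stones": {"A": (192, 263), "B": (55, 80)},
}

def rate(successes_trials):
    s, t = successes_trials
    return s / t

for name, g in groups.items():
    # A beats B within each subgroup
    print(name, round(rate(g["A"]), 3), ">", round(rate(g["B"]), 3))

# Aggregate over the groups: the ordering flips.
total = {t: (sum(groups[g][t][0] for g in groups),
             sum(groups[g][t][1] for g in groups)) for t in ("A", "B")}
print("overall", round(rate(total["A"]), 3), "<", round(rate(total["B"]), 3))
```

The reversal happens because treatment A was given mostly to the hard (large-stone) cases, so the aggregate rate mixes unlike populations.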

Some physicists say that you should always have an order-of-magnitude idea of what a result will be before you calculate it. This implies a belief that such estimates are usually possible, and that they provide a sanity check for calculations. And that’s true in physics, at least in mechanics. In probability, however, it is quite common for even an expert’s intuition to be way off. Calculations are more likely to find errors in common sense than the other way around.

Nevertheless, common sense is vitally important in statistics. Attempts to minimize the need for common sense can lead to nonsense. You need common sense to formulate a statistical model and to interpret inferences from that model. Statistics is a layer of exact calculation sandwiched between necessarily subjective formulation and interpretation. Even though common sense can go badly wrong with probability, it can also do quite well in some contexts. Common sense is necessary to map probability theory to applications and to evaluate how well that map works.

* * *


But “common sense” needs to be trained sometimes. A huge part of becoming expert in a topic is retraining your intuition, so that you can tell when your first answer is off.

Maybe Stats 101 courses would go better if we started by training students’ statistical sense, having them critique poorly done analyses or too-small studies. *Then* we could move on to the “layer of exact calculation sandwiched between necessarily subjective formulation and interpretation” (what a great phrase!).

I enjoyed taking a MOOC on data visualization from Alberto Cairo, who got us to practice constructively critiquing data graphics before we made new ones ourselves. I found it really helpful, and I bet a similar approach could be useful for Intro To Stats courses.

I think this talk by Hans Rosling is to statistics what optical illusions are to psychology.

It shows people how ignorant and wrong their common sense is when compared to real data.

Cheers,

Total tangent, but from this article I looked at the Wikipedia page for Lindley’s paradox, and the numerical example seems baffling to me. To calculate the likelihood of the outcome under H_1, the author integrates over all probabilities with no weighting. Of course this will lead to H_1 having a low posterior probability. But this just seems like doing Bayesian statistics wrong, not a particularly interesting paradox. Is this what Lindley intended?
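For what it’s worth, the mechanics the comment describes can be sketched numerically. Under a flat prior on p, the marginal likelihood of k heads in n flips integrates to exactly 1/(n+1), regardless of k, which is why a sharp null can beat the diffuse alternative even when the data look extreme by frequentist lights. (The numbers below are illustrative, not the Wikipedia example.)

```python
import math

# Lindley-style setup: n coin flips, k heads.
# H0: p = 0.5 exactly.  H1: p unknown, flat (unweighted) prior on [0, 1].
n, k = 10_000, 5_100  # about two standard deviations above n/2

def binom_pmf(n, k, p):
    """C(n, k) p^k (1-p)^(n-k), computed via log-gamma for stability."""
    logpmf = (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
              + k * math.log(p) + (n - k) * math.log(1 - p))
    return math.exp(logpmf)

like_h0 = binom_pmf(n, k, 0.5)
# Under a flat prior, integral_0^1 C(n,k) p^k (1-p)^(n-k) dp = 1/(n+1),
# no matter what k is -- this is the "no weighting" the comment objects to.
like_h1 = 1 / (n + 1)

print(like_h0, like_h1, like_h0 / like_h1)  # Bayes factor favoring H0
```

Here the observed k is roughly two standard deviations from n/2, so a frequentist test would reject H0 at the 5% level, yet the Bayes factor favors H0 by about an order of magnitude.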

In probability and statistics, the analogs of optical illusions are usually called paradoxes: St. Petersburg paradox, Simpson’s paradox, Lindley’s paradox, etc.

Interesting. I don’t think of those as examples of how our ordinary probabilistic intuitions are terrible; I think of them as esoteric and extreme cases. For the ordinary Joe on the street, the lack of human common sense about probabilities is more associated with cognitive biases of the sort studied by Kahneman and Gilovich and others — resulting in people being more afraid of terrorism than of driving on the highway, or being convinced that vaccinations cause autism.

In a sense, we only need a science of statistics because our innate ability to detect signals and dismiss noise is so very bad.

I’m reminded of the atrocious literature on the probability of a decisive vote. A paper was published in a leading political science journal giving the probability of a tied vote in a presidential election as something like 10^-92. Talk about innumeracy! The calculation, of course (I say “of course” because if you are a statistician you will likely know what is coming) was based on the binomial distribution with known p. For example, Obama got something like 52% of the vote, so if you take n=130 million and p=0.52 and figure out the probability of an exact tie, you can work out the formula etc etc.

On empirical grounds that 10^-92 thing is ludicrous. You can easily get an order-of-magnitude estimate by looking at the empirical probability, based on recent elections, that the vote margin will be within 2 million votes (say) and then dividing by 2 million to get the probability of it being a tie or one vote from a tie.
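To make the contrast concrete, here is a rough sketch of both routes (the numbers are illustrative assumptions, not the published calculation). The binomial-with-known-p formula is wildly sensitive to the assumed p: at p exactly 0.5 it gives something like 10^-4, while nudging p away from 0.5 sends it crashing past 10^-92 toward numbers unimaginably smaller. The empirical route lands around 10^-7.

```python
import math

# Two ways to put a number on P(exact tie) in a ~130-million-vote election.
# All inputs here are illustrative assumptions, not the paper's figures.

def log10_binom_pmf(n, k, p):
    """log10 of C(n, k) p^k (1-p)^(n-k), via log-gamma to avoid overflow."""
    ln = (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
          + k * math.log(p) + (n - k) * math.log(1 - p))
    return ln / math.log(10)

n = 130_000_000
# The "mindless" binomial-with-known-p route: absurdly sensitive to p.
for p in (0.50, 0.51, 0.52):
    print(f"p = {p}: log10 P(tie) ~ {log10_binom_pmf(n, n // 2, p):.1f}")

# The empirical route from the comment: suppose (made-up figure) a third
# of recent elections had a popular-vote margin under 2 million votes.
p_tie = (1 / 3) / 2_000_000
print(f"empirical estimate ~ {p_tie:.1e}")
```

The point is not the particular numbers but that a formula whose answer swings by tens of thousands of orders of magnitude under a small change in an assumed-known parameter cannot be taken at face value.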

The funny thing, and I think this is the case for many of the bad numbers that get out there, is that this 10^-92 has no intuition behind it; it’s just the product of a mindlessly applied formula (because everyone “knows” that you use the binomial distribution to calculate the probability of k heads in n coin flips). But it’s bad intuition that allows people to accept that number without screaming. A leading political science journal wouldn’t accept a claim that there were 10^92 people in some obscure country, or that some person was 10^92 feet tall. But intuitions about probabilities are weak, even among the sort of quantitatively-trained researchers who know about the binomial theorem.