Suppose you have two ways to estimate something you’re interested in. One is biased and one is unbiased. Surely the unbiased method is better, right? Not necessarily. Statistical bias is not as bad as it sounds.

Under ideal conditions, an unbiased estimator gives the correct answer *on average*, but each particular estimate may be ridiculous. Suppose you ask me to estimate how many dwarfs were in Snow White and the Seven Dwarfs. If 75% of the time I guess 100 and 25% of the time I guess -272, each guess will be wildly wrong, but my average guess will be 0.75(100) + 0.25(-272) = 7, and so my estimates will be unbiased. But if half the time I guess 7 and half the time I guess 8, my average guess will be 7.5 and so my process will be biased. However, each estimate will be far more accurate.
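The dwarf example is easy to check by simulation. The sketch below (my own illustration, not from the post) compares the two guessing schemes by their average guess and their mean squared error about the true value of 7:

```python
import random

random.seed(42)

def unbiased_guess():
    # 75% of the time guess 100, 25% of the time guess -272
    return 100 if random.random() < 0.75 else -272

def biased_guess():
    # half the time guess 7, half the time guess 8
    return 7 if random.random() < 0.5 else 8

n = 100_000
unbiased = [unbiased_guess() for _ in range(n)]
biased = [biased_guess() for _ in range(n)]

mean_unbiased = sum(unbiased) / n                      # close to 7, the true value
mean_biased = sum(biased) / n                          # close to 7.5

mse_unbiased = sum((g - 7) ** 2 for g in unbiased) / n  # enormous
mse_biased = sum((g - 7) ** 2 for g in biased) / n      # about 0.5
```

The unbiased scheme hits the right answer on average while every single guess is absurd; the biased scheme is off by half a dwarf on average but never off by more than one.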

Consistency is a weaker condition than unbiasedness. Consistency says that if you feed your method enough data generated from your assumed model, your estimates will converge to the correct value.
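As a toy illustration of consistency (my own example, with arbitrary parameter values), the sample mean is a consistent estimator of a normal population mean: feed it more data and its error typically shrinks toward zero.

```python
import random

random.seed(0)

def sample_mean(n, mu=3.0, sigma=2.0):
    # average of n draws from Normal(mu, sigma) -- a consistent estimator of mu
    draws = [random.gauss(mu, sigma) for _ in range(n)]
    return sum(draws) / n

# absolute error typically shrinks as the sample size grows
errors = {n: abs(sample_mean(n) - 3.0) for n in (10, 1_000, 100_000)}
```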

But if your model is not exactly correct (and it never is) will you get a reasonably good result? It’s possible for an inconsistent method to provide good results in practice and it’s possible that a consistent method may not.

In his blog post on cross validation, Rob Hyndman mentions a paper that shows one validation method is consistent and another is not. Rob concludes

Frankly, I don’t consider this is a very important result as there is never a true model. In reality, every model is wrong, so consistency is not really an interesting property.

In the context of his post, Rob argues that the most important test of a statistical method is how well it predicts future data. Some people have commented that this comes down too hard on consistency. But we’re talking about a blog post, and blogs don’t use the same kind of carefully qualified language that formal papers do. Perhaps in a more formal setting Rob might argue that a gross failure of consistency gives one reason to suspect a method won’t predict well, but a lack of complete consistency shouldn’t remove a method from consideration. Such language may be inoffensive, but it lacks the verve of his original statement.

Too often bias and consistency are seen as all-or-nothing properties. In theoretical statistics, one typically asks *whether* a method is biased, not *how* biased it is. The same is true of consistency. Bias and consistency are only two criteria by which methods can be evaluated. A small amount of bias or inconsistency may be an acceptable trade-off in exchange for better performance by other criteria such as efficiency or robustness.


A couple more examples: the James-Stein estimator for the mean of a 3-d Gaussian is biased but has uniformly lower mean squared error than the sample mean. And the best unbiased estimator of 1/p for a Binomial(n, p) distributed variable has infinite variance.
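The James-Stein effect mentioned above is easy to see by simulation. The sketch below is my own illustration: the true mean vector is an arbitrary choice, the observation noise is unit variance, and it uses the positive-part variant of the shrinkage factor. It compares the mean squared error of the raw observation (the usual unbiased estimate) with the shrunken estimate.

```python
import random

random.seed(1)

def james_stein(x):
    # Positive-part James-Stein estimate of the mean of a d-dimensional
    # Gaussian observation x with unit variance: shrink x toward the origin.
    s = sum(xi * xi for xi in x)
    factor = max(0.0, 1 - (len(x) - 2) / s)
    return [factor * xi for xi in x]

theta = [1.0, -0.5, 2.0]   # hypothetical true mean, chosen for illustration
n = 20_000
se_mle, se_js = 0.0, 0.0
for _ in range(n):
    x = [random.gauss(t, 1.0) for t in theta]
    se_mle += sum((xi - ti) ** 2 for xi, ti in zip(x, theta))
    se_js += sum((ji - ti) ** 2 for ji, ti in zip(james_stein(x), theta))

mse_mle = se_mle / n   # about 3, the dimension
mse_js = se_js / n     # smaller, despite the bias
```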

In machine learning, a well-known maxim is that learning is impossible without bias. In statistics, estimation seems unbiased because the statistician restricts learning to a single model, but the bias is still there; it just enters before automatic inference starts.

Regarding bias and learning, many people prefer implicit bias to explicit bias. As long as the bias is implicit in the choice of model, we can pretend it doesn’t exist. :)

Good points, and I wish you would go a little further. It seems to me that one reason the public is skeptical about findings that really are pretty consistent, e.g. global warming, is that statisticians and other researchers exaggerate the impact of factors such as a slightly biased estimate.