An estimator in statistics is a way of guessing a parameter based on data. An estimator is unbiased if over the long run, your guesses converge to the thing you’re estimating. Sounds eminently reasonable. But it might not be.

Suppose you’re estimating something like the number of car accidents per week in Texas, and you counted 308 the first week. What would you estimate is the probability of seeing no accidents over the next two weeks?

If you use a Poisson model for the number of car accidents, a very common assumption for such data, there is a unique unbiased estimator of that probability: (-1)^X, where X is the count you observed. Since you counted 308 accidents, this estimator puts the probability of no accidents over the next two weeks at 1. Worse, had you counted 307 accidents, the estimated probability would be -1! The estimator alternates between two ridiculous values, but in the long run these values average out to the true value. Exact in the limit, useless on the way there. A slightly biased estimator would be much more practical.
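You can verify the unbiasedness claim directly: for a single observation X ~ Poisson(λ), the expected value of (-1)^X is exactly e^{-2λ}, the probability of no accidents over the next two weeks. Here is a minimal sketch that checks this by summing the series, using a small illustrative rate (the argument is identical for λ near 308):

```python
import math

# For X ~ Poisson(lam), E[(-1)**X] = exp(-2*lam), so (-1)**X is an
# unbiased estimator of the two-week no-accident probability --
# even though it only ever outputs +1 or -1.
lam = 2.0  # illustrative rate, not the post's 308

# E[(-1)**X] = sum over k of (-1)**k * P(X = k)
series = sum((-1) ** k * math.exp(-lam) * lam ** k / math.factorial(k)
             for k in range(100))
target = math.exp(-2 * lam)
print(abs(series - target) < 1e-12)  # True: the estimator is unbiased
```

The series telescopes to e^{-λ} · e^{-λ} = e^{-2λ}, and a power-series uniqueness argument shows no other function of X is unbiased for this quantity.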

See Michael Hardy’s article “An Illuminating Counterexample” for more details.

“An estimator is unbiased if over the long run, your guesses converge to the thing you’re estimating.”

Sounds more like you are defining consistency than unbiasedness.

Dean: I agree that sentence was sloppy. If you base your guess on more and more data, that’s consistency. If you take the average of more and more guesses, each based on a fixed amount of data, that’s unbiasedness.
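That distinction can be checked numerically. Below is a minimal sketch of the unbiasedness side: average many guesses, each based on a single observation. The rate is illustrative, and the `poisson` helper is a hand-rolled Knuth-style sampler, since Python’s standard library doesn’t provide one:

```python
import math
import random

random.seed(1)
lam = 2.0  # illustrative rate

def poisson(lam):
    # Knuth's multiplication method; fine for small lam.
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

# Unbiasedness: every individual guess is +1 or -1, yet the average
# of many single-observation guesses settles near exp(-2*lam).
guesses = [(-1) ** poisson(lam) for _ in range(200_000)]
print(sum(guesses) / len(guesses), math.exp(-2 * lam))
```

No single guess ever improves with more averaging; only the long-run average of guesses approaches the truth, which is exactly what unbiasedness promises and nothing more.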

Sorry, but I can’t figure out why a biased estimator would be more useful. Maybe sometimes you just have to admit that the data you’ve been collecting isn’t helpful enough, not the estimator.

Ihrhove: In this case there’s an obvious biased estimator that is far, far more accurate.

Estimators are evaluated on several criteria, including bias and efficiency. Classical statistics starts by looking for bias = 0, then picks the most efficient estimator. That’s one solution to a multi-criteria optimization problem, but far from the only one. In many applications, efficiency is far more important than bias.
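To put numbers on that trade-off, here is a small Monte Carlo sketch comparing the unbiased estimator (-1)^X with the obvious biased alternative e^{-2X} on mean squared error. The rate and sample size are illustrative, and the `poisson` helper is a hand-rolled sampler, since the standard library lacks one:

```python
import math
import random

random.seed(0)
lam = 2.0  # illustrative rate
target = math.exp(-2 * lam)  # true two-week no-accident probability

def poisson(lam):
    # Knuth's multiplication method; adequate for small lam.
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

draws = [poisson(lam) for _ in range(100_000)]

# Mean squared error of each estimator around the true value.
mse_unbiased = sum(((-1) ** x - target) ** 2 for x in draws) / len(draws)
mse_biased = sum((math.exp(-2 * x) - target) ** 2 for x in draws) / len(draws)
print(mse_unbiased, mse_biased)  # the biased estimator's MSE is far smaller
```

The biased estimator always lands in (0, 1], so its errors stay modest, while the unbiased estimator is off by roughly 1 on every single observation.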