An unbiased estimator, very roughly speaking, is a statistic that gives the correct result on average: its expected value equals the parameter it estimates. For a precise definition, see Wikipedia. Unbiasedness is an intuitively desirable property. In fact, at first it seems indispensable.

In the colloquial sense, “bias” is practically synonymous with self-serving dishonesty. Who wants a self-serving, dishonest statistical estimate? But it’s important to remember that “bias” in the statistical sense has a technical meaning that may not correspond to the colloquial one.

Here’s the big problem with statistical bias: **if U is an unbiased estimator of θ, then f(U) is in general NOT an unbiased estimator of f(θ)**. For example, standard deviation is the square root of variance, but the square root of an unbiased estimator of variance is *not* an unbiased estimator of standard deviation: because the square root is concave, Jensen’s inequality implies the resulting estimator is biased low. This shows that bias has nothing to do with accuracy, since the square root of an *accurate* estimate of variance *is* an accurate estimate of standard deviation. In fact, unbiased estimators can be terrible.
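A quick simulation makes the claim concrete. The sketch below (not from the original; it assumes normal data, a sample size of 5, and a true standard deviation of 1) estimates variance with the usual unbiased formula, then takes square roots: the variance estimates average out to the true value, while the square roots average out noticeably below it.

```python
import random
import statistics

# The sample variance with Bessel's correction (n - 1 denominator) is
# unbiased for the variance, but its square root is biased low as an
# estimator of the standard deviation.
random.seed(42)

n = 5            # small sample size makes the bias easy to see
trials = 200_000
true_sd = 1.0    # samples drawn from N(0, 1), so the true variance is 1

var_estimates = []
sd_estimates = []
for _ in range(trials):
    sample = [random.gauss(0.0, true_sd) for _ in range(n)]
    s2 = statistics.variance(sample)  # unbiased: divides by n - 1
    var_estimates.append(s2)
    sd_estimates.append(s2 ** 0.5)    # square root of an unbiased estimator

mean_var = sum(var_estimates) / trials
mean_sd = sum(sd_estimates) / trials
print(f"average variance estimate: {mean_var:.4f}")  # close to 1.0
print(f"average sd estimate:       {mean_sd:.4f}")   # noticeably below 1.0
```

For n = 5 the average square root comes out around 0.94 rather than 1, which is exactly the bias the transformation introduces.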

**The fact that unbiasedness is not preserved under transformations calls into question its usefulness.** People seldom care directly about abstract statistical parameters. Instead, they care about some calculation based on those parameters, and an unbiased estimate of the parameters does not generally yield an unbiased estimate of what people really want to estimate.
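The same effect shows up with any nonlinear downstream calculation, not just square roots. As a hypothetical illustration (the parameter values and scenario here are assumptions, not from the original): suppose the sample mean gives an unbiased estimate of a parameter μ, but the quantity actually wanted is exp(μ). Plugging the unbiased estimate into exp produces a biased result, this time biased high because exp is convex.

```python
import math
import random

# The sample mean is unbiased for mu, but exp(sample mean) is a biased
# (high) estimator of exp(mu), again by Jensen's inequality.
random.seed(7)

mu = 0.0         # true parameter; exp(mu) = 1.0 is the target quantity
n = 10
trials = 200_000

mean_estimates = []
exp_estimates = []
for _ in range(trials):
    sample = [random.gauss(mu, 1.0) for _ in range(n)]
    xbar = sum(sample) / n            # unbiased for mu
    mean_estimates.append(xbar)
    exp_estimates.append(math.exp(xbar))  # plug-in estimate of exp(mu)

avg_mean = sum(mean_estimates) / trials
avg_exp = sum(exp_estimates) / trials
print(f"average estimate of mu:      {avg_mean:.4f}")  # close to 0.0
print(f"average estimate of exp(mu): {avg_exp:.4f}")   # above 1.0
```

The unbiasedness of the intermediate estimate buys nothing for the quantity of interest: the averaged exp estimates sit visibly above exp(μ) = 1.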