The previous post looked at the best approximation to a normal density by a normal density with a different mean. Dan Piponi suggested in the comments that it would be good to look at the Kullback-Leibler (KL) divergence.
The previous post looked at the difference between the two densities from an analytic perspective, solving the problem that an analyst would find natural. This post takes an information theoretic perspective. Just as p-norms are natural in analysis, KL divergence is natural in information theory.
The Kullback-Leibler divergence between two random variables X and Y is defined as

$$ \mathrm{KL}(X \parallel Y) = \int f_X(x) \, \log \frac{f_X(x)}{f_Y(x)} \, dx $$
There are many ways to interpret KL(X || Y), such as the average surprise in seeing Y when you expected X.
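As a sanity check on the definition, here is a short sketch (assuming NumPy and SciPy are available; the function name kl_numeric is mine) that evaluates the defining integral numerically for two normal densities.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def kl_numeric(mu_x, sigma_x, mu_y, sigma_y):
    """KL(X || Y) by numerically integrating f_X log(f_X / f_Y)."""
    X, Y = norm(mu_x, sigma_x), norm(mu_y, sigma_y)
    # Work with log densities to avoid overflow in the exponentials.
    integrand = lambda x: X.pdf(x) * (X.logpdf(x) - Y.logpdf(x))
    # Integrate over a wide finite interval; the mass of f_X outside it is negligible.
    lo, hi = mu_x - 20 * sigma_x, mu_x + 20 * sigma_x
    value, _ = quad(integrand, lo, hi)
    return value

print(kl_numeric(0, 1, 1, 2))  # KL between N(0, 1) and N(1, 4), about 0.4431
```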
Unlike the p-norm distance, the KL divergence between two normal random variables can be computed in closed form.
Let X be a normal random variable with mean μX and variance σ²X and Y a normal random variable with mean μY and variance σ²Y. Then

$$ \mathrm{KL}(X \parallel Y) = \log\frac{\sigma_Y}{\sigma_X} + \frac{\sigma_X^2 + (\mu_X - \mu_Y)^2}{2\sigma_Y^2} - \frac{1}{2} $$
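Here is the closed form as a short Python function (a sketch; kl_normal is an illustrative name), which can be compared against the numerical integral from the snippet above.

```python
import numpy as np

def kl_normal(mu_x, sigma_x, mu_y, sigma_y):
    """Closed-form KL(X || Y) for X ~ N(mu_x, sigma_x^2), Y ~ N(mu_y, sigma_y^2)."""
    return (np.log(sigma_y / sigma_x)
            + (sigma_x**2 + (mu_x - mu_y)**2) / (2 * sigma_y**2)
            - 0.5)

print(kl_normal(0, 1, 1, 2))  # should agree with kl_numeric(0, 1, 1, 2)
```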
If μX = 0 and σX = 1, then for fixed μY the value of σ²Y that minimizes KL(X || Y) is

$$ \sigma_Y^2 = 1 + \mu_Y^2 $$
KL divergence is not symmetric, hence we say divergence rather than distance. More on that here. If we want to solve the opposite problem, minimizing KL(Y || X), the optimal value of σ²Y is simply 1.
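Here is a small numerical check of both minimizers (a sketch using SciPy's minimize_scalar; the variable names and the choice μY = 1.5 are mine), minimizing each divergence over σY for a fixed μY and comparing with the formulas above.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Closed-form KL(N(mu_x, sx^2) || N(mu_y, sy^2)), the same formula as above.
kl = lambda mu_x, sx, mu_y, sy: np.log(sy/sx) + (sx**2 + (mu_x - mu_y)**2)/(2*sy**2) - 0.5

mu_y = 1.5

# Minimize KL(X || Y) over sigma_Y with X = N(0, 1): expect sigma_Y^2 = 1 + mu_Y^2.
res = minimize_scalar(lambda s: kl(0, 1, mu_y, s), bounds=(0.01, 10), method="bounded")
print(res.x**2, 1 + mu_y**2)   # both approximately 3.25

# Minimize KL(Y || X) over sigma_Y: expect sigma_Y^2 = 1.
res = minimize_scalar(lambda s: kl(mu_y, s, 0, 1), bounds=(0.01, 10), method="bounded")
print(res.x**2)                # approximately 1
```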