Find log normal parameters for given mean and variance

Earlier today I needed to solve for log normal parameters that yield a given mean and variance. I’m going to save the calculation here in case I need it in the future, or in case a reader needs it. The derivation is simple, but in the heat of the moment I’d rather look it up and keep going with my train of thought.

NB: The parameters μ and σ² of a log normal distribution are not the mean and variance of the distribution; they are the mean and variance of its log.

If m is the mean and v is the variance then

\begin{align*} m &= \exp(\mu + \sigma^2/2) \\ v &= (\exp(\sigma^2) - 1) \exp(2\mu + \sigma^2) \end{align*}

Notice that m² = exp(2μ + σ²), which matches the second factor in the expression for v. Dividing v by m² gives


\frac{v}{m^2} = \exp(\sigma^2) -1

and so

\sigma^2 = \log\left(\frac{v}{m^2} + 1 \right)

and once you have σ² you can find μ by

\mu = \log m - \sigma^2/2

Here’s Python code to implement the above.

    from numpy import log

    def solve_for_log_normal_parameters(mean, variance):
        sigma2 = log(variance/mean**2 + 1)
        mu = log(mean) - sigma2/2
        return (mu, sigma2)

And here’s a little test code for the code above.

    from numpy import exp
    from scipy.stats import lognorm

    mean = 3.4
    variance = 5.6

    mu, sigma2 = solve_for_log_normal_parameters(mean, variance)

    X = lognorm(scale=exp(mu), s=sigma2**0.5)
    assert(abs(mean - X.mean()) < 1e-10)
    assert(abs(variance - X.var()) < 1e-10)

2 thoughts on “Find log normal parameters for given mean and variance”

  1. Benedikt Rudolph

    Parameter names in normal and lognormal distributions are consistent in the sense that if X ~ Normal(mu, sigma), then exp(X) ~ Lognormal(mu, sigma). The same is true for a *Wiener process with drift* in the sense that if dX(t) = mu*dt + sigma*dW(t), then X(t) ~ Normal(mu*t, sigma*sqrt(t)).

    This is not the case for a *geometric Brownian motion*, though! Here, the typical parametrization is that S(t) follows a GBM if dS(t) = mu*S(t)*dt + sigma*S(t)*dW(t). Then S(t) is indeed lognormally distributed, but (with S(0) = 1) S(t) ~ Lognormal((mu - sigma^2/2)*t, sigma*sqrt(t)). I.e. sigma is consistent with the “standard” lognormal parametrization, but mu is not. This may not sound like a big deal, but it can be extremely confusing, especially because here E[S(t)] = exp(mu*t).

  2. You can think of this trick as a one-to-one mapping between the moments of the normal and lognormal distributions. You can then use the parameters in a simulation of lognormal data, or you can use these estimates as a method-of-moments guess that you feed to an MLE optimization to find the MLE estimates.

    As the formulas show, the mapping from normal moments to lognormal moments is nonlinear. You can use calculus and geometry to visualize how a small change in the normal moments leads to large changes in the corresponding lognormal moments.
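The GBM claim in the first comment can be checked numerically. Here is a minimal Monte Carlo sketch, assuming S(0) = 1 and arbitrary illustrative values for mu, sigma, and t: the exact solution of the GBM SDE is S(t) = exp((mu - sigma^2/2)*t + sigma*W(t)), so the sample mean of simulated S(t) should be close to exp(mu*t).

```python
import numpy as np

# Arbitrary illustrative parameters (not from the post).
mu, sigma, t = 0.05, 0.2, 2.0

rng = np.random.default_rng(42)
Z = rng.standard_normal(1_000_000)

# Exact GBM solution with S(0) = 1:
# log S(t) ~ Normal((mu - sigma^2/2)*t, sigma^2*t)
S_t = np.exp((mu - sigma**2/2)*t + sigma*np.sqrt(t)*Z)

print(S_t.mean(), np.exp(mu*t))  # the two values should nearly agree
```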
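The one-to-one mapping in the second comment can be sketched in a few lines; the numerical values below are arbitrary. Mapping (μ, σ²) forward to the lognormal mean and variance and then inverting with the formulas from the post recovers the original parameters, and a modest change in σ² inflates the lognormal variance by much more, illustrating the nonlinearity.

```python
from numpy import exp, log

def log_normal_moments(mu, sigma2):
    # Forward mapping: lognormal mean and variance from mu and sigma^2.
    m = exp(mu + sigma2/2)
    v = (exp(sigma2) - 1)*exp(2*mu + sigma2)
    return m, v

# Round trip: invert with the formulas from the post.
m, v = log_normal_moments(1.0, 1.0)
sigma2_back = log(v/m**2 + 1)
mu_back = log(m) - sigma2_back/2
print(mu_back, sigma2_back)  # recovers 1.0 and 1.0

# Nonlinearity: a 20% increase in sigma^2 ...
m1, v1 = log_normal_moments(1.0, 1.0)
m2, v2 = log_normal_moments(1.0, 1.2)
print(v2/v1)  # ... increases the variance by well over 20%
```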
