One reason the normal distribution is easy to work with is that you can vary the mean and variance independently. With other distribution families, the mean and variance may be linked in some nonlinear way.

I was looking for a faster way to compute Prob(*X* > *Y* + δ) where *X* and *Y* are independent inverse gamma random variables. If δ were zero, the probability could be computed analytically. But when δ is positive, the calculation requires numerical integration. When the calculation is in the inner loop of a simulation, most of the simulation’s time is spent doing the integration.
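The direct computation can be sketched by conditioning on *Y*: Prob(*X* > *Y* + δ) = ∫ Prob(*X* > *y* + δ) *f*<sub>*Y*</sub>(*y*) d*y*. Here is a minimal sketch using SciPy; the parameterization (shape *a*, scale *b*) and the sample values are assumptions for illustration, not taken from the post.

```python
# Direct (slow) computation of P(X > Y + delta) for independent
# X ~ InvGamma(a1, scale=b1) and Y ~ InvGamma(a2, scale=b2),
# by conditioning on Y and integrating numerically.
import numpy as np
from scipy import stats
from scipy.integrate import quad

def prob_x_gt_y_plus_delta(a1, b1, a2, b2, delta):
    """P(X > Y + delta) via numerical integration over the density of Y."""
    X = stats.invgamma(a1, scale=b1)
    Y = stats.invgamma(a2, scale=b2)
    # Integrand: survival function of X at y + delta, weighted by pdf of Y.
    integrand = lambda y: X.sf(y + delta) * Y.pdf(y)
    p, _ = quad(integrand, 0, np.inf)
    return p
```

When this sits in a simulation’s inner loop, each call costs a full adaptive quadrature, which is what motivates the approximation below.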

Let *Z* = *Y* + δ. If *Z* were another inverse gamma random variable, we could compute Prob(*X* > *Z*) quickly and accurately without integration. Unfortunately, *Z* is not an inverse gamma. But it is *approximately* an inverse gamma, at least if *Y* has a moderately large shape parameter, which it always does in my applications. So let *Z* be inverse gamma with parameters to match the mean and variance of *Y* + δ. Then Prob(*X* > *Z*) is a good approximation to Prob(*X* > *Y* + δ).
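The moment matching is a small algebra exercise. For *Y* ~ IG(*a*, *b*) with *a* > 2, the mean is *m* = *b*/(*a* − 1) and the variance is *v* = *b*²/((*a* − 1)²(*a* − 2)); shifting by δ changes the mean to *m* + δ and leaves the variance alone, and solving the two moment equations gives *a*′ = (*m* + δ)²/*v* + 2 and *b*′ = (*m* + δ)(*a*′ − 1). The sketch below combines this with the standard closed form for Prob(*X* > *Z*) between two inverse gammas, via the regularized incomplete beta function; the function names are my own, not from the paper.

```python
# Moment-matching approximation of P(X > Y + delta) for independent
# inverse gamma X and Y, replacing Y + delta with an inverse gamma Z
# that has the same mean and variance.
from scipy import stats

def match_inverse_gamma(a, b, delta):
    """Shape and scale of an inverse gamma matching the mean and variance
    of Y + delta, where Y ~ InvGamma(a, scale=b). Requires a > 2."""
    m = b / (a - 1) + delta                 # mean of Y + delta
    v = b**2 / ((a - 1)**2 * (a - 2))       # variance of Y (unchanged by the shift)
    a_new = m**2 / v + 2                    # from m^2 / v = a' - 2
    b_new = m * (a_new - 1)                 # from m = b' / (a' - 1)
    return a_new, b_new

def prob_x_gt_z(a1, b1, a2, b2):
    """Closed-form P(X > Z) for independent inverse gammas.
    Since 1/X ~ Gamma(a1, rate=b1) and 1/Z ~ Gamma(a2, rate=b2),
    P(X > Z) = P(1/X < 1/Z) = P(B < b1/(b1+b2)) for B ~ Beta(a1, a2)."""
    return stats.beta.cdf(b1 / (b1 + b2), a1, a2)

def approx_prob(a1, b1, a2, b2, delta):
    """Fast approximation to P(X > Y + delta)."""
    az, bz = match_inverse_gamma(a2, b2, delta)
    return prob_x_gt_z(a1, b1, az, bz)
```

A quick sanity check: with δ = 0 the matched parameters reduce to the original ones, so the approximation is exact in that case.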

For more details, see *Fast approximation of inverse gamma inequalities*.

