I’ve written a lot about random inequalities. That’s because computers spend a lot of time computing random inequalities in the inner loop of simulations. I’m looking for ways to speed this up.

Here’s my latest idea: Approximating random inequalities with Edgeworth expansions

An Edgeworth expansion is like a Fourier series, except the basis functions are derivatives of the normal density rather than sines and cosines. Sometimes the full Edgeworth expansion does not converge, and yet the first few terms make a good approximation. The tech report explicitly considers Edgeworth approximations with just two terms, but demonstrates the integration tricks necessary to use more terms. The result is computed in closed form, no numerical integration required, and so may be much faster than other approaches.

One advantage of the Edgeworth approach is that it only depends on the moments of the distributions in the inequality. This means it provides an approximation that’s waiting to be used on new families of distributions. But because it’s not specific to a distribution family, its performance in a particular case needs to be explored. In the case of beta distributions, for example, even a single-term approximation does pretty well.
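To make the single-term case concrete: the leading term of the expansion reduces to a normal approximation, so P(X > Y) = P(X − Y > 0) ≈ Φ((μ_X − μ_Y)/√(σ_X² + σ_Y²)) for independent X and Y. Here is a minimal sketch for the beta case, compared against a Monte Carlo estimate; the function names are mine, not from the tech report:

```python
import math
import random

def beta_moments(a, b):
    """Mean and variance of a Beta(a, b) distribution."""
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def prob_x_gt_y_normal(ax, bx, ay, by):
    """Single-term (normal) approximation to P(X > Y) for independent
    X ~ Beta(ax, bx) and Y ~ Beta(ay, by), using only the first two
    moments of each distribution."""
    mx, vx = beta_moments(ax, bx)
    my, vy = beta_moments(ay, by)
    return phi((mx - my) / math.sqrt(vx + vy))

def prob_x_gt_y_mc(ax, bx, ay, by, n=200_000, seed=42):
    """Monte Carlo estimate of P(X > Y), for checking the approximation."""
    rng = random.Random(seed)
    hits = sum(rng.betavariate(ax, bx) > rng.betavariate(ay, by)
               for _ in range(n))
    return hits / n
```

For example, with X ~ Beta(3, 2) and Y ~ Beta(2, 3), the exact probability is 0.7571 and the normal approximation gives 0.7602, consistent with the observation that even a single term does pretty well for betas.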

**More blog posts on random inequalities**:

- Introduction
- Analytical results
- Numerical results
- Cauchy distributions
- Beta distributions
- Gamma distributions
- Three or more random variables
- Folded normals
- A Bayesian view of Amazon Resellers
- Fast approximation of beta inequalities
- Shifting probability distributions

I’m not easily envisioning how random inequalities are used in the inner loop of a simulation… Could you provide a bit of pseudo-code for context?

In many Bayesian clinical trial designs, treatment assignments or stopping conditions are based on whether the posterior probability of one quantity being larger than another, say one treatment's response rate exceeding another's, crosses a threshold. You have to evaluate that inequality for every patient, every scenario, and every simulation repetition.
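A minimal sketch of that inner loop, assuming a two-arm trial with beta-binomial posteriors; the arm probabilities, thresholds, and function names here are illustrative, not from any particular trial design, and the Monte Carlo inequality routine is a stand-in for whatever fast approximation you'd actually use:

```python
import random

def prob_beta_gt(a1, b1, a2, b2, n=2000, rng=random):
    """P(Beta(a1, b1) > Beta(a2, b2)). A Monte Carlo stand-in: a fast
    closed-form approximation would replace this in practice."""
    hits = sum(rng.betavariate(a1, b1) > rng.betavariate(a2, b2)
               for _ in range(n))
    return hits / n

def simulate_trial(p_A, p_B, n_patients=200, stop_threshold=0.95, seed=0):
    """One replication of a hypothetical Bayesian adaptive trial.
    After each patient, stop early if the posterior probability that
    arm A's success rate exceeds arm B's crosses a threshold.
    The random-inequality evaluation is the innermost call."""
    rng = random.Random(seed)
    a1 = b1 = a2 = b2 = 1  # uniform Beta(1, 1) priors on each arm
    for i in range(n_patients):
        # Alternate assignment for simplicity; real designs may adapt.
        if i % 2 == 0:
            success = rng.random() < p_A
            a1, b1 = a1 + success, b1 + (not success)
        else:
            success = rng.random() < p_B
            a2, b2 = a2 + success, b2 + (not success)
        # The random inequality, evaluated once per patient -- and this
        # whole function runs once per scenario per simulation repetition.
        p = prob_beta_gt(a1, b1, a2, b2, rng=rng)
        if p > stop_threshold or p < 1 - stop_threshold:
            return i + 1, p  # early stop: patients used, posterior prob
    return n_patients, p
```

An outer loop over thousands of repetitions and multiple scenarios multiplies the inequality evaluations, which is why a closed-form approximation beats numerical integration there.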