The Pearson distributions

The previous post was about 12 probability distributions named after Irving Burr. This post is about 12 probability distributions named after Karl Pearson. The Pearson distributions are better known, and include some very well known distributions.

Burr’s distributions are defined by their CDFs; Pearson’s distributions are defined by their PDFs.

Pearson’s differential equation

The densities of Pearson’s distributions all satisfy the same differential equation:

f'(x) = \frac{(x-a) f(x)}{c_0 + c_1x + c_2x^2}

This is a linear differential equation, and so multiples of a solution are also solutions. However, a probability density must integrate to 1, so there is a unique probability density solution given a, c0, c1, and c2.

Well known distributions

Note that f(x) = exp(-x²/2) satisfies the differential equation above if we set a = 0, c0 = −1, and c1 = c2 = 0. This says the normal distribution is a Pearson distribution.
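To see this, differentiate:

f'(x) = -x \exp(-x^2/2) = \frac{(x - 0)\, f(x)}{-1}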

If f(x) = x^m exp(-x), then the differential equation is satisfied for a = m, c1 = −1, and c0 = c2 = 0. This says that the exponential distribution and more generally the gamma distribution are Pearson distributions.
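In this case

f'(x) = (m x^{m-1} - x^m) \exp(-x) = \frac{m - x}{x}\, f(x) = \frac{(x - m)\, f(x)}{-x}

which has the required form.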

You can show that the Cauchy distribution, and more generally the Student t distribution, is also a Pearson distribution. So are the beta distributions (with a transformed range).
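For example, the Cauchy density is proportional to 1/(1 + x²), and

f'(x) = \frac{-2x}{(1+x^2)^2} = \frac{(x - 0)\, f(x)}{-(1+x^2)/2}

so the differential equation holds with a = 0, c0 = c2 = −1/2, and c1 = 0.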

Table of Pearson distributions

The table below lists all Pearson distributions with their traditional names. The order of the list is a little strange for historical reasons.

The table uses Iverson’s bracket notation: a Boolean expression in brackets represents the function that is 1 when the condition holds and 0 otherwise. This way all densities are defined over the entire real line, though some of them are only positive over an interval.
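For example,

\exp(-x) \,\, [0 \leq x]

denotes the function that equals exp(-x) for x ≥ 0 and equals 0 for negative x.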

The densities are presented without normalization constants; the normalization constants are whatever they have to be for the functions to integrate to 1. The normalization constants can be complicated functions of the parameters, so they are left out for simplicity.

\begin{align*}
\text{I} \hspace{1cm} & (1+x)^{m_1} (1-x)^{m_2} \,\,[-1 \leq x \leq 1] \\
\text{II} \hspace{1cm} & (1 - x^2)^m \,\, [ -1 \leq x \leq 1] \\
\text{III} \hspace{1cm} & x^m \exp(-x) \,\, [0 \leq x] \\
\text{IV} \hspace{1cm} & (1 + x^2)^{-m} \exp(-v \arctan x) \\
\text{V} \hspace{1cm} & x^{-m} \exp(-1/x) \,\, [0 \leq x] \\
\text{VI} \hspace{1cm} & x^{m_2}(1 + x)^{-m_1} \,\,[0 \leq x] \\
\text{VII} \hspace{1cm} & (1 + x^2)^{-m}\\
\text{VIII} \hspace{1cm} & (1 + x)^{-m} \,\, [0 \leq x \leq 1] \\
\text{IX} \hspace{1cm} & (1 + x)^{m} \,\, [0 \leq x \leq 1] \\
\text{X} \hspace{1cm} & \exp(-x) \,\, [0 \leq x] \\
\text{XI} \hspace{1cm} & x^{-m} \,\,[1 \leq x] \\
\text{XII} \hspace{1cm} & \left( (g+x)(g-x)\right)^h \,\, [-g \leq x \leq g]
\end{align*}
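To give a sense of what the omitted normalization constants look like in practice, here is a small numerical sketch (assuming Python with NumPy and SciPy is available) that computes the constant for the Type IV density. The parameter values m = 2 and v = 1 are arbitrary, chosen only for illustration.

    import numpy as np
    from scipy.integrate import quad

    # Unnormalized Pearson Type IV density with illustrative parameters.
    m, v = 2.0, 1.0
    f = lambda x: (1 + x**2)**(-m) * np.exp(-v * np.arctan(x))

    # The normalization constant is the reciprocal of the integral over the real line.
    total, _ = quad(f, -np.inf, np.inf)
    print("Type IV normalization constant:", 1 / total)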

There is a lot of redundancy in the list. All the distributions are either special cases or limiting cases of distributions I, IV, and VI.
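For example, setting m1 = m2 = m in I gives II, since (1 + x)^m (1 − x)^m = (1 − x²)^m; setting v = 0 in IV gives VII; and setting m = 0 in III gives X.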

Note that VII is the Student t distribution after you introduce a scaling factor.
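Explicitly, replacing x with x/√ν in VII and taking m = (ν + 1)/2 gives

\left(1 + \frac{x^2}{\nu}\right)^{-(\nu+1)/2}

which is the unnormalized density of the Student t distribution with ν degrees of freedom.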

Moments

The Pearson distributions are determined by their first few moments, provided these exist, and these moments can be derived from the parameters in Pearson’s differential equation.

This suggests moment matching as a way to fit Pearson distributions to data: solve for the distribution parameters that make the exact moments match the empirical moments. Sometimes this works very well, though sometimes other approaches are better, depending on your criteria for what constitutes a good match.
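As a small illustration, here is a sketch in Python (assuming NumPy is available) that estimates a, c0, c1, and c2 from data centered at its sample mean, using the usual moment relations for the Pearson system and the sign convention of the differential equation above. The function name and the example at the end are only for illustration.

    import numpy as np

    def pearson_coefficients(data):
        # Estimate a, c0, c1, c2 in f'(x) = (x - a) f(x) / (c0 + c1 x + c2 x^2)
        # for data centered at its sample mean, by matching the first four
        # central moments (the usual moment relations for the Pearson system).
        y = np.asarray(data, dtype=float)
        y = y - y.mean()
        mu2 = np.mean(y**2)   # variance
        mu3 = np.mean(y**3)   # third central moment
        mu4 = np.mean(y**4)   # fourth central moment

        denom = 10*mu4*mu2 - 18*mu2**3 - 12*mu3**2
        b0 = mu2*(4*mu2*mu4 - 3*mu3**2) / denom
        b1 = mu3*(mu4 + 3*mu2**2) / denom
        b2 = (2*mu2*mu4 - 3*mu3**2 - 6*mu2**3) / denom

        # Flip signs to match the convention f'/f = (x - a)/(c0 + c1 x + c2 x^2).
        return -b1, -b0, -b1, -b2   # a, c0, c1, c2

    # Sanity check: standard normal data should give a ≈ 0, c0 ≈ -1, c1 ≈ c2 ≈ 0.
    rng = np.random.default_rng(0)
    print(pearson_coefficients(rng.standard_normal(100_000)))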

2 thoughts on “The Pearson distributions”

  1. If you stick the expression for IV into the differential equation you get f'(x)/f(x) = -(v + 2m x)/(1 + x²). Put this in the form of the differential equation and you get v as a function of the parameters.

    Maybe you’re wondering whether v should be v(x) so you can derive the other distribution involving exp from IV. But v is a constant. You can derive the other distributions involving an exp by taking a limit with v and m in the right ratio.
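    Spelling this out:

    \frac{f'(x)}{f(x)} = -\frac{v + 2mx}{1 + x^2} = \frac{x + v/(2m)}{-(1 + x^2)/(2m)}

    so a = −v/(2m), c0 = c2 = −1/(2m), and c1 = 0, and hence v = −2ma = a/c2.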
