It’s well known that a binomial random variable can be approximated by a Poisson random variable, and it’s well known under what circumstances the approximation is particularly good. See, for example, this post.

A binomial random variable is the sum of iid (independent, identically distributed) Bernoulli random variables. But what if the Bernoulli random variables don’t have the same distribution? That is, suppose you’re counting the number of heads seen in flipping *n* coins, where each coin has a potentially different probability of coming up heads. Will a Poisson approximation still work?

This post will cite three theorems bounding the error in approximating a sum of *n* independent Bernoulli random variables, each with its own probability of success *p*_{i}, by a Poisson distribution. I’ll state each theorem and very briefly discuss its advantages. The theorems can be found in [1].

## Setup

For *i* = 1, 2, 3, …, *n* let *X*_{i} be independent Bernoulli random variables with

Prob(*X*_{i} = 1) = *p*_{i}

and let *X* with no subscript be their sum:

*X* = *X*_{1} + *X*_{2} + *X*_{3} + … + *X*_{n}

We want to approximate the distribution of *X* with a Poisson distribution with parameter λ. We will measure the error in the Poisson approximation by the maximum difference between the probability mass function of *X* and the probability mass function of a Poisson(λ) random variable.
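As an illustration, the exact distribution of *X* (a Poisson binomial distribution) can be computed by repeatedly convolving the individual Bernoulli mass functions, and then compared with the Poisson pmf. Here is a minimal sketch, assuming NumPy and SciPy are available; the probabilities in `ps` are hypothetical.

```python
import numpy as np
from scipy.stats import poisson

def poisson_binomial_pmf(ps):
    """Exact pmf of X = X_1 + ... + X_n, X_i ~ Bernoulli(p_i), via convolution."""
    pmf = np.array([1.0])
    for p in ps:
        pmf = np.convolve(pmf, [1.0 - p, p])
    return pmf

ps = [0.1, 0.2, 0.05, 0.3]   # hypothetical success probabilities
lam = sum(ps)                # Poisson parameter: sum of the p's

pmf = poisson_binomial_pmf(ps)                   # exact pmf on 0, 1, ..., n
approx = poisson.pmf(np.arange(len(pmf)), lam)   # Poisson(lam) pmf on same support
max_error = np.max(np.abs(pmf - approx))
```

The maximum pointwise difference `max_error` is the error measure used below, restricted to the support of *X*.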

## Sum of *p*‘s

We consider two ways to choose λ. The first is

λ = *p*_{1} + *p*_{2} + *p*_{3} + … + *p*_{n}.

For this choice we have two different theorems that give upper bounds on the approximation error. One says that the error is bounded by the sum of the squares of the *p*‘s

*p*_{1}² + *p*_{2}² + *p*_{3}² + … + *p*_{n}²

and the other says it is bounded by 9 times the maximum of the *p*‘s

9 max(*p*_{1}, *p*_{2}, *p*_{3}, …, *p*_{n}).

The sum of squares bound will be smaller when *n* is small and the maximum bound will be smaller when *n* is large.
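The crossover between the two bounds is easy to see numerically. A small sketch, with hypothetical probability lists chosen to show each regime:

```python
def error_bounds(ps):
    """Return (sum-of-squares bound, 9 * max bound) for lambda = sum of p's."""
    return sum(p * p for p in ps), 9 * max(ps)

# Few coins: the sum of squares is the smaller bound.
small_n = [0.1, 0.2, 0.3]
sq_small, mx_small = error_bounds(small_n)   # 0.14 vs 2.7

# Many coins: the sum of squares grows with n, while the max does not.
large_n = [0.1] * 1000
sq_large, mx_large = error_bounds(large_n)   # 10.0 vs 0.9
```

In practice one would simply take the minimum of the two bounds.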

## Sum of transformed *p*‘s

The second way to choose λ is

λ = λ_{1} + λ_{2} + λ_{3} + … + λ_{n}

where

λ_{i} = -log(1 – *p*_{i}).

In this case the bound on the error is one half the sum of the squared λ’s:

(λ_{1}² + λ_{2}² + λ_{3}² + … + λ_{n}²)/2.

When *p*_{i} is small, λ_{i} ≈ *p*_{i}. In this case the error bound for the transformed Poisson approximation will be about half that of the one above.
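A quick sketch of that comparison, with hypothetical small probabilities, computing both the transformed bound and the sum-of-squares bound from the previous section:

```python
import math

ps = [0.01, 0.02, 0.03]                        # hypothetical small probabilities
lams = [-math.log(1.0 - p) for p in ps]        # lambda_i = -log(1 - p_i)

bound_transformed = sum(l * l for l in lams) / 2   # bound for the transformed lambda
bound_plain = sum(p * p for p in ps)               # bound for lambda = sum of p's

# For small p's, lambda_i ≈ p_i, so bound_transformed ≈ bound_plain / 2.
ratio = bound_transformed / bound_plain
```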

## Related posts

- Normal approximation to binomial
- Camp-Paulson approximation to binomial
- Relative error in normal approximations

[1] R. J. Serfling. Some Elementary Results on Poisson Approximation in a Sequence of Bernoulli Trials. SIAM Review, Vol. 20, No. 3 (July, 1978), pp. 567-579.