The product of two normal PDFs is proportional to a normal PDF. This is well known in Bayesian statistics because a normal likelihood times a normal prior gives a normal posterior. But because Bayesian applications don’t usually need to know the proportionality constant, it’s a little hard to find. I needed to calculate this constant, so I’m recording the result here for my future reference and for anyone else who might find it useful.

Denote the normal PDF by

\varphi(x; \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right).

Then the product of two normal PDFs is given by the equation

\varphi(x; \mu_1, \sigma_1)\, \varphi(x; \mu_2, \sigma_2) = \varphi\left(\mu_1; \mu_2, \sqrt{\sigma_1^2 + \sigma_2^2}\right) \varphi(x; \mu_{12}, \sigma_{12})

where

\mu_{12} = \frac{\mu_1 \sigma_2^2 + \mu_2 \sigma_1^2}{\sigma_1^2 + \sigma_2^2}

and

\sigma_{12} = \frac{\sigma_1 \sigma_2}{\sqrt{\sigma_1^2 + \sigma_2^2}}.
Note that the product of two normal random variables is not normal, but the product of their PDFs is proportional to the PDF of another normal.

I think it’s particularly elegant how the proportionality constant is expressed as a “normal”.
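The product formulas are easy to check numerically. A minimal sketch in Python, with arbitrary illustrative parameter values:

```python
import math

def normal_pdf(x, mu, sigma):
    # phi(x; mu, sigma) = exp(-(x - mu)^2 / (2 sigma^2)) / (sigma sqrt(2 pi))
    return math.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

def product_params(mu1, s1, mu2, s2):
    # mean and sd of the normal proportional to the product,
    # plus the proportionality constant, itself a normal density
    s12 = s1 * s2 / math.sqrt(s1**2 + s2**2)
    mu12 = (mu1 * s2**2 + mu2 * s1**2) / (s1**2 + s2**2)
    c = normal_pdf(mu1, mu2, math.sqrt(s1**2 + s2**2))
    return mu12, s12, c

mu1, s1, mu2, s2 = 0.7, 1.3, -0.4, 0.8   # illustrative values
mu12, s12, c = product_params(mu1, s1, mu2, s2)
for x in (-1.0, 0.0, 0.5, 2.0):
    lhs = normal_pdf(x, mu1, s1) * normal_pdf(x, mu2, s2)
    rhs = c * normal_pdf(x, mu12, s12)
    assert abs(lhs - rhs) < 1e-12
```

The constant c is exactly the "normal" mentioned above: one mean evaluated under the other's density with the variances added.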

As is almost always the case, this all becomes unambiguously nicer if you work with variances instead of standard deviations. Better still, with reciprocal variances. If your means are m,n and your reciprocal variances are t,u then the new mean is (tm+un)/(t+u) — the weighted average of the means, weighted by the reciprocal variances — and the new reciprocal variance is t+u.
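The precision form is a one-liner to verify against the standard-deviation form; a small sketch with illustrative numbers:

```python
# precision (reciprocal variance) form of the combination rule
m, n = 0.7, -0.4              # means
s1, s2 = 1.3, 0.8             # standard deviations
t, u = 1 / s1**2, 1 / s2**2   # precisions

new_mean = (t * m + u * n) / (t + u)   # precision-weighted average of the means
new_precision = t + u                  # precisions simply add

# agrees with the standard-deviation form of the product formulas
assert abs(new_mean - (m * s2**2 + n * s1**2) / (s1**2 + s2**2)) < 1e-12
assert abs(new_precision - (s1**2 + s2**2) / (s1**2 * s2**2)) < 1e-12
```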

(It’s even better formally, but a bit too mysterious statistically, to work with the reciprocal variance and the mean times the reciprocal variance. Then these just add. That’s because a normal PDF is exp(polynomial(x)) and these are basically just the coefficients of x^2 and x.)

For multivariate normals, if A and B are the inverses of the covariance matrices and m, n the means — so that the PDFs are proportional to exp(-1/2 (x-m)^T A (x-m)) and similarly for B, n — then this generalizes nicely: the mean is (A+B)^-1 (Am+Bn) and the inverse covariance is A+B.

g: You probably know this, but to add some jargon for others: the reciprocal variance (AKA the precision) and the mean/variance are the “natural parameters” of the Gaussian when written as a member of the exponential family.

Multivariate generalizations of the results in this post can be found, for example, in these cribs:

Gaussian identities only: http://cs.nyu.edu/~roweis/notes/gaussid.pdf

Matrix Cookbook (much larger, contains a section on Gaussians): http://www2.imm.dtu.dk/pubdb/p.php?3274

The inverse variance often arises in statistical estimation theory: it is the Fisher information about the true value. This works just like a quantity of information should. Given two normally distributed estimates of a parameter, we can find the combined information by simply adding the information from each of the individual estimates. The new mean is the information-weighted average of the individual means.

It is very straightforward and intuitive to think about normals in those terms.
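In those terms, combining any number of estimates is just a sum; a sketch with hypothetical measurement values:

```python
# combine several normally distributed estimates of one parameter
# (hypothetical (mean, variance) pairs)
estimates = [(10.2, 4.0), (9.8, 1.0), (10.5, 2.0)]

total_info = sum(1 / v for _, v in estimates)                  # informations add
combined_mean = sum(m / v for m, v in estimates) / total_info  # info-weighted average
combined_var = 1 / total_info

# the combined estimate is tighter than any individual one
assert combined_var < min(v for _, v in estimates)
```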

Can you describe how to obtain the mean and variance (or concentration) of the circular convolution of two normal PDFs?

Also, how to obtain the mean and variance (or concentration) of the product of two Rayleigh PDFs?

I’m not sure what you mean by “circular” convolution, but the convolution of two PDFs is the PDF of the sum of the independent random variables. If X and Y are independent normals, E(X+Y) = E(X) + E(Y) and Var(X+Y) = Var(X) + Var(Y). I haven’t looked at the product of Rayleigh random variables.
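The addition rules for the mean and variance of a sum are easy to confirm by simulation; a Monte Carlo sketch with illustrative parameters:

```python
import numpy as np

# sum of independent normals: means and variances add
rng = np.random.default_rng(42)
X = rng.normal(1.0, 2.0, size=1_000_000)    # mean 1, variance 4
Y = rng.normal(-3.0, 1.5, size=1_000_000)   # mean -3, variance 2.25
S = X + Y

# theory: E(S) = 1 + (-3) = -2, Var(S) = 4 + 2.25 = 6.25
assert abs(S.mean() - (-2.0)) < 0.02
assert abs(S.var() - 6.25) < 0.1
```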