In an introductory probability class, the expected value of a random variable X is defined as

\[ \mathrm{E}(X) = \int_{-\infty}^{\infty} x \, f_X(x) \, dx \]

where f_X is the probability density function of X. I’ll call this the **analytical** definition.

In a more advanced class the expected value of X is defined as

\[ \mathrm{E}(X) = \int_\Omega X \, dP \]

where (Ω, P) is a probability space. I’ll call this the **measure-theoretic** definition. It’s not obvious that these two definitions are equivalent. They may even seem contradictory unless you look closely: they’re integrating different functions over different spaces.
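To see numerically that the two definitions give the same number, here is a small sketch of my own (the choice of distribution and the function names are mine, not from the post). Take Ω = (0, 1] with the uniform measure P and X(ω) = −log ω, so that X is exponential with density f_X(x) = e^{−x} on [0, ∞). Approximating both integrals with midpoint Riemann sums gives the same answer, E(X) = 1:

```python
import math

# Measure-theoretic side: integrate the function X(ω) = -log(ω)
# over the sample space Ω = (0, 1] with uniform measure P,
# using a midpoint Riemann sum with n cells.
def measure_theoretic_mean(n=100_000):
    h = 1.0 / n
    return sum(-math.log((i + 0.5) * h) for i in range(n)) * h

# Analytical side: X is exponential(1), so f_X(x) = exp(-x).
# Integrate x * f_X(x) over the truncated domain [0, upper];
# the tail beyond 50 contributes a negligible amount.
def analytical_mean(n=100_000, upper=50.0):
    h = upper / n
    return sum((i + 0.5) * h * math.exp(-(i + 0.5) * h) for i in range(n)) * h

print(measure_theoretic_mean())  # close to 1.0
print(analytical_mean())         # close to 1.0
```

Note that the two sums integrate different functions over different domains, exactly as the post says, yet both converge to the same expected value.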

If for some odd reason you learned the measure-theoretic definition first, you could see the analytical definition as a theorem. But if, like most people, you learn the analytical definition first, the measure-theoretic version is quite mysterious. When you take an advanced course and look at the details previously swept under the rug, probability looks like an entirely different subject, unrelated to your introductory course. The definition of expectation is just one concept among many that takes some work to resolve.

I’ve written a couple of pages of notes that bridge the gap between the two definitions of expectation and show that they are equivalent.
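For readers who want the gist before opening the notes, the bridge is the change-of-variables theorem for the pushforward measure. In one line (this summary is mine, not a quotation from the notes):

\[ \mathrm{E}(X) = \int_\Omega X \, dP = \int_{\mathbb{R}} x \, dP_X(x) = \int_{-\infty}^{\infty} x \, f_X(x) \, dx, \]

where P_X = P ∘ X⁻¹ is the distribution of X, and the last step assumes P_X has density f_X with respect to Lebesgue measure.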

Any recommendations for a book that builds measure-theoretic probability from the ground up?

I’ve seen A Probability Path on several bookshelves. I’ve thumbed through it and it looks OK, but I can’t say I’ve read it.

Unfortunately I can’t think of anything off-hand that I’d endorse enthusiastically. So many books on this subject get bogged down in minutia and don’t relate the measure theory to the intuitive ideas of probability.

Bummer. 🙁

“A First Look at Rigorous Probability Theory” by Rosenthal is very good. It avoids measure theory unless it is absolutely needed (as when bridging the two definitions of expectation mentioned by John), but remains very rigorous and builds from the ground up. The first chapter, for example, introduces the extension theorem. Definitely worth a look.

I liked the way you said “Perhaps the biggest source of confusion in theoretical probability is failure to distinguish X and f sub X”. I think that’s a big source of confusion in “elementary” probability as well.

If all observations on X fall in the interval between zero and one, and E(X) is thus a proportion, are these definitions equivalent? Something (and I’m sure it’s ignorance) is bothering me.

Try “Lectures on Measure and Integration” by Harold Widom. It starts with set theory. I haven’t gotten far, but I have gotten further.

I learned Lebesgue integrals years ago from Cramér, Mathematical Methods of Statistics, a book I still use. But it is not for everyone.