Random is as random does

What is randomness? Nobody knows, or at least there’s no consensus. Everybody has some vague idea of what randomness is, but when you dig into it deeply enough you find all kinds of philosophical quandaries. If you’d like a taste of the subtleties, you could start by reading one of Gregory Chaitin’s books. Or chew on this tome.

What is a random variable? That’s easy. It’s a measurable function on a probability space. What’s a probability space? Easy too. It’s a measure space such that the measure of the entire space is 1.
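
In symbols, and purely as a restatement of the definitions above: a probability space is a measure space \((\Omega, \mathcal{F}, P)\) with

\[
P(\Omega) = 1,
\]

and a random variable is a function \(X \colon \Omega \to \mathbb{R}\) that is measurable, i.e. \(X^{-1}(B) \in \mathcal{F}\) for every Borel set \(B \subseteq \mathbb{R}\).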

Probability theory avoids defining randomness by working with abstractions like random variables. This is actually a very sensible approach and not mere legerdemain. Mathematicians can prove theorems about probability and leave the interpretation of the results to others.

As far as applications are concerned, it often doesn’t matter whether something is random in some metaphysical sense. The right question isn’t “is this system random?” but rather “is it useful to model this system as random?” Many systems that no one believes are random can still be profitably modeled as if they were random.

Probability models are just another class of mathematical models. Modeling deterministic systems using random variables should be no more shocking than modeling discrete things as continuous. For example, cars come in discrete units, and they certainly are not fluids, but sometimes it’s useful to model the flow of traffic as if it were a fluid. (And sometimes it’s not.)

Random phenomena are often studied using computer simulations, and these simulations rely on random number generators: deterministic programs whose output is considered random for practical purposes. This bothers some people who would prefer a “true” source of randomness. Such concerns are usually misplaced. In most cases, replacing a random number generator with a physical source of randomness would not make a detectable difference. The output of the random number generator might even be of higher quality, since the measurement of the physical source could introduce a bias.
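
To make “deterministic program” concrete, here is a toy linear congruential generator in Python. This is only a minimal sketch; the constants are the classic Numerical Recipes choices, and real applications should rely on a well-tested library generator.

    def lcg(seed, n):
        """Toy linear congruential generator: x_{k+1} = (a*x_k + c) mod m.
        Entirely deterministic, yet the output looks random for casual purposes."""
        a, c, m = 1664525, 1013904223, 2**32
        x, out = seed, []
        for _ in range(n):
            x = (a * x + c) % m
            out.append(x / m)  # scale to [0, 1)
        return out

    print(lcg(42, 5))  # same seed, same "random" numbers every time

Run it twice with the same seed and you get the same numbers, which is exactly the point: the sequence can be useful to model as random even though nothing about it is.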

46 thoughts on “Random is as random does”

  1. Maybe what we mean by random is unpredictable, or rather, unpredictable as far as I care about the situation. I mean, a coin flip is thought of as random, but it is completely deterministic, and if we could follow it closely enough (and if chaos theory allows) we could figure out the outcome; it’s just that almost all the time we can’t follow it, and so we can’t use the initial conditions to predict the final outcome. (And we don’t care; we like the unpredictability of a coin flip.)

    The idea of unpredictability does fit in with Chaitin’s idea that a random sequence is incompressible: there is no way of predicting what comes next. But it also stops us from worrying about whether something is “truly” random.
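
    As a rough, hand-wavy illustration of the compressibility idea (not Chaitin’s actual definition, which is about Kolmogorov complexity, but in the same spirit), one can compare how well an off-the-shelf compressor does on a patterned byte string versus one from the operating system’s random source:

        import os, zlib

        patterned = bytes(range(256)) * 40   # 10,240 highly regular bytes
        pseudorandom = os.urandom(10240)     # 10,240 bytes from the OS's generator

        for name, data in [("patterned", patterned), ("pseudorandom", pseudorandom)]:
            print(name, len(data), "->", len(zlib.compress(data, 9)))
        # The patterned data shrinks dramatically; the other data hardly compresses at all.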

  2. You should have a look at Yao’s theorem about the connection between randomness and unpredictability: A. Yao, “Theory and Applications of Trapdoor Functions,” Proc. 23rd IEEE Symp. on Foundations of Computer Science, 1982, pp. 80–91.

  3. Random number generators may be deterministic, but at least they contain the idea of “jumbling things up” so that the output seems chaotic.

    So it’s not completely ridiculous that you can use an RNG together with a rejection procedure like the Metropolis-Hastings algorithm to generate random numbers from a standard normal distribution (a sketch of that idea appears at the end of this comment).

    Check out this post, however:
    http://www.entsophy.net/blog/?p=46

    In it you’ll find a recursion algorithm that is very simple, highly “unjumbled,” and involves no rejections, the first 1000 points of which will satisfy any test for normality N(0,1) you care to use, with a p-value around 0.999.
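
    For concreteness, here is a minimal sketch of the Metropolis-Hastings idea mentioned above, targeting N(0,1) using nothing but a uniform pseudorandom generator. The step size and seed are arbitrary choices for illustration, and the resulting draws are correlated rather than independent.

        import math, random

        def mh_normal(n, step=1.0, seed=0):
            """Metropolis-Hastings sampler targeting the standard normal N(0, 1)."""
            rng = random.Random(seed)
            x, out = 0.0, []
            for _ in range(n):
                proposal = x + step * (2 * rng.random() - 1)  # symmetric uniform proposal
                # Accept with probability min(1, pi(proposal)/pi(x)), where pi(x) ~ exp(-x^2/2).
                if rng.random() < math.exp((x * x - proposal * proposal) / 2):
                    x = proposal
                out.append(x)
            return out

        samples = mh_normal(10000)
        print(sum(samples) / len(samples))  # should be near 0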

  4. William & many others consider coin flips & dice throws as deterministic, which is a classical physics POV. It would be valid if coins & dice were truly rigid objects rather than being comprised of molecules & atoms. The classical world of our intuition is based on a quantum level of reality where the laws are probabilistic & causality is an illusion. The uncertainty principle belies determinism.

  5. Yes, until…

    It seems to me that, far more often, programmers don’t care enough about randomness. Since the main use for random numbers in IT is generating cryptographic keys, and the expected time to crack a key is directly related to how much you know about the probability distribution of the key, programmers who blindly use whatever source of randomness comes to hand without thinking about its properties are a far bigger problem than programmers looking for test inputs who go overboard with extra randomness.

  6. ny_entrepreneur

    With

    > What is a random variable? That’s easy. It’s a measurable function on a probability space. What’s a probability space? Easy too. It’s a measure space such that the measure of the entire space is 1.

    you started to get it right but with

    > As far as applications are concerned, it often doesn’t matter whether something is random in some metaphysical sense. The right question isn’t “is this system random?” but rather “is it useful to model this system as random?” Many systems that no one believes are random can still be profitably modeled as if they were random.

    you totally blew it. You went for total nonsense. The right direction was sitting right in front of you, and you missed it.

    Here’s the solution: The outline you gave of a random variable is correct for an outline. What you missed is that such a random variable is fully capable of modeling essentially ANYTHING in practice. Walk into a lab, measure a number, leave, and now you have the value of a random variable. For any lab, and any number. The only issue is measurability, and that’s moot.

    What’s going on in that lab can be as complicated as you please and deterministic, etc.

    Where you went off the track was asking that there be something ‘random’ about a use of a random variable, and that’s nonsense. F’get about ‘random’: Any measurement or observation of any number can be the value of a random variable, and that’s about all there is to it.

    Actually, in this definition of random variable, we never say what ‘random’ means, and we don’t have to, and should not.

    In this use of random variable, we never mentioned independence, but the intuitive view of ‘randomness’ is really just independence. That the measure theory definition of random variables can cover any possible collection of independence and non-independence relationships among random variables is solidly established in the Kolmogorov extension theorem.

    With this measure theory work, the Kolmogorov extension theorem, etc., we no longer have to struggle with philosophical issues about randomness. We can f’get about randomness although in practice we may inquire about independence.

    Understand better now?

  7. Madmax: You’d have to define true randomness before you can say it doesn’t exist. Chaitin has a respectable definition of randomness, and he proves that an instance of his definition does exist.

    ny_entrepreneur: Random variables are only a start. Now define conditional probability and independence. They can be defined without reference to randomness, but their motivation is mysterious without some intuitive notion of uncertainty.

  8. I like the idea of randomness as a proxy for something you can’t or won’t model.

    For example, in a strategic wargame (sticking with pen & paper / models here), you could fight a tactical battle, or simply have a weighted random result as a proxy for the tactical battle. As long as the results are comparable, you save a lot of time not doing the tactical combat.

    Even if you did the tactical combat, the detailed actions of people shooting at each other are replaced by a random proxy, because you can’t run the detail unless you grab yourself an AK and go for it down the mall.

  9. I remember an article decades ago that postulated a chip that could measure Brownian movement and use that as the basis for an RNG. I still question whether Brownian movement is truly random or if it has its own frequency of repetition.

  10. Logically randomness (in its true sense) can’t exist because if it did we would not be here, nor would the Universe.

    Randomness can best be described as a lack of order, which is logically impossible both to comprehend and to arrive at.

    What you describe as randomness amounts, in any era, to a lack of knowledge; that no one can fathom it is all it actually is.

    Certainly mathematics can’t describe it, because mathematics can only measure the difference between things… for true randomness, nothing or null must be able to be described mathematically. This is impossible at the moment.

  11. ny_entrepreneur

    John:

    > Random variables are only a start. Now define conditional probability and independence. They can be defined without reference to randomness, but their motivation is mysterious without some intuitive notion of uncertainty.

    Independence is defined in terms of events in sigma algebras being independent in the sense of the elementary definition P(A and B) = P(A)P(B). There’s a nice treatment in high generality in Neveu’s book. Conditional probability is defined in terms of the Radon-Nikodym theorem and thus generalizes the elementary P(A|B) = P(A and B)/P(B); both are written out below. In effect, elementary, intuitive views of ‘random’ make sense with the elementary definitions and then get generalized.

    This approach via sigma algebras permits saying that a random variable is independent of the past of a continuous-time stochastic process, e.g., as in a Poisson process, where that past is uncountably infinitely many random variables. The Kolmogorov extension theorem has plenty of generality for this case. There’s a good proof of Kolmogorov’s result in Breiman’s book.

    All of this is just standard material in a course in ‘graduate probability’ (Loeve, Neveu, Breiman, Chung, etc.), and it nicely wipes away any confusion about randomness.
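
    For reference, the elementary definitions referred to above are

    \[
    P(A \cap B) = P(A)\,P(B) \quad \text{(independence)}, \qquad P(A \mid B) = \frac{P(A \cap B)}{P(B)} \quad \text{(conditioning, } P(B) > 0),
    \]

    and the measure-theoretic generalization defines the conditional expectation \(E[X \mid \mathcal{G}]\) as the (almost surely unique) \(\mathcal{G}\)-measurable random variable satisfying

    \[
    \int_G E[X \mid \mathcal{G}]\,dP = \int_G X\,dP \quad \text{for every } G \in \mathcal{G},
    \]

    which is exactly what the Radon-Nikodym theorem provides.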

  12. ny_entrepreneur: I know measure-theoretic probability. I’m just saying the definitions and theorems seem unmotivated without appealing to the intuitive notions they formalize. You can do differential geometry without any reference to geometry too, but then it doesn’t make any sense.

  13. ny_entrepreneur

    > I’m just saying the definitions and theorems seem unmotivated without appealing to the intuitive notions they formalize.

    Of course. And I connected with intuitive notions in all my remarks here. E.g., the intuitive concept of ‘random’ is really what the math calls ‘independence’. The general view of independence in the math is a generalization of the intuitive view of independence in elementary treatments of probability. The general math treatment of conditioning is a generalization of the elementary, intuitive view of Bayes’ rule. Again, I DID connect with intuitive concepts.

    But with what I posted, all this wringing of hands that we can’t know what random means, or that it’s obscure and unknowable, is just nonsense.

  14. I stopped reading your article at the end of the second paragraph:

    “What is a random variable? That’s easy. It’s a measurable function on a probability space. What’s a probability space? Easy too. It’s a measure space such that the measure of the entire space is 1.”

    … so a variable … is a function?

  15. Juve: Yes, that’s part of the formalism of probability: a random variable is a function. And it does sound strange.

    The name is a compromise between intuitive and formal ideas of probability. When you’re rigorously proving probability theorems, you’re working with functions, σ-algebras, etc. There’s no mention of “randomness” or anything mysterious; ultimately it’s just set theory. But terms such as “event” and “random variable” were chosen to reflect how people interpret and apply the theorems.

    Mathematicians only began to study probability seriously around 1800, and so the subject had to use formalisms that had been developed in other contexts. Perhaps if probability had developed earlier, it would have influenced the development of mathematical formalism, and there would be less of a conceptual gap between formal and informal probability.
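
    A toy example may make the “random variable is a function” point concrete. This is nothing beyond the definition above, just Python notation for it:

        from itertools import product
        from fractions import Fraction

        # Sample space: all sequences of three fair coin flips, each with probability 1/8.
        omega = list(product("HT", repeat=3))
        P = {w: Fraction(1, 8) for w in omega}

        # The random variable X = "number of heads" is literally a function on the sample space.
        def X(w):
            return w.count("H")

        # Its distribution is P pushed forward through X.
        dist = {}
        for w in omega:
            dist[X(w)] = dist.get(X(w), Fraction(0)) + P[w]

        print(dist)  # {3: Fraction(1, 8), 2: Fraction(3, 8), 1: Fraction(3, 8), 0: Fraction(1, 8)}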

  16. SteveBrooklineMA

    I understand random variables as functions. Adding independent random variables essentially means convolving their density functions. The leap for me comes later, in stats class, where you are suddenly given a list of real numbers called a “random sample,” for example from a normal distribution. Now, suddenly, instead of adding random variables (convolving functions) we are just adding real numbers together, like we did in 3rd grade. What exactly *is* a random independent sample of N numbers from the standard normal? Nobody seems to bother defining it.
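
    One way to see the “adding means convolving” statement numerically (a small sketch assuming NumPy is available; the sample size and bin count are arbitrary): the sum of two independent Uniform(0,1) draws should follow the triangular density, which is the convolution of the two uniform densities.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000
        s = rng.random(n) + rng.random(n)  # sum of two independent Uniform(0,1) samples

        # Histogram of the sums versus the convolution of the two uniform densities.
        hist, edges = np.histogram(s, bins=40, range=(0, 2), density=True)
        centers = (edges[:-1] + edges[1:]) / 2
        triangular = np.where(centers < 1, centers, 2 - centers)

        print(np.max(np.abs(hist - triangular)))  # small: agreement up to sampling noise

    As for the stats-class “random sample”: formally it is one realization of n independent copies of the random variable, i.e., a single point drawn from the product space, which is why one can then just add the realized numbers.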

  17. Assuming anything is random is basically saying that there is some sort of “magic” happening. There will always be a logical series of events leading to a result.

  18. John: My definition of true randomness with regard to physical events is the measurement of a different outcome for the exact same initial conditions; and with regard to a mathematical approach, it is an infinite series of numbers which do not exhibit any kind of pattern and which are not predictable.

  19. Ozark: Are “magic” and “logic” mutually exclusive? Could there be a logic to magic? Some might say that quantum mechanics is magical and logical. It sounds something like what C. S. Lewis called the subnatural.

  20. John: With magic tricks, it’s only magic until the point we work out how the magician has performed the trick. So what I was trying to say is that any observable “randomness” is purely relative to our understanding at any one moment in time.

  21. I had an excellent professor in probability say, “A random variable is neither random nor a variable.” I always use that when I teach probability.

  22. Ozark: Your POV is classical & consistent with 19th century belief in strict causality & absolute determinism. The Uncertainty Principle belies these notions.

    Bob L. Sturm: I would like to know the context of your professor’s remark.

  23. Dear Gouverneur,

    The context was probability 101. :)

    A random variable is a map, or function, from the sample space to the real number line. As such, it is not random, and it is not a variable. :)

  24. Many years ago, I took some graduate courses that attempted to pick at this particular scab. I think John is right that this stuff is deep, and fundamental. There are two hard questions: (1) what do probability statements mean?, and (2) are the standard axioms justified?

    [Long rant avoided. There is no time; let me sum up.]

    A lot of extremely smart people (von Mises, Savage, de Finetti, the logical positivists, Quine, etc. etc.) have agreed that the interpretation question is important and difficult. Their proposed answers are unsatisfactory and distasteful, even to them.

    On the axiom side, it turns out that you can either have axioms too weak to do anything much with, or so strong that they start making pronouncements about physics. We’ve opted for the latter, and so take as an axiom of physics the theorem of probability that asserts there can be no bounded divergent stationary process, no matter how much 1/f noise might look like one.

  25. SteveBrooklineMA

    “bounded divergent stationary process” … to all but a few this is a quadruple oxymoron!

  26. I like the comment of Gouverneur (04.20.12 at 07:12) very much:
    > William & many others consider coin flips & dice throws as deterministic, which is a classical physics POV. It would be valid if coins & dice were truly rigid objects rather than being comprised of molecules & atoms.
    We have to think about what our ideas are based on before continuing the discussion. If our ideas are based on a deterministic theory, then randomness is nonsense and Brownian motion is deterministic too, because the positions and motion vectors of all the elements are defined in advance. I think, further, that the universe is deterministic. But there are many theories that we can neither totally accept nor refuse, and one of those is quantum theory, or something else based on non-determinism.

  27. Gouverneur: The uncertainty principle simply points out that it’s not possible for us to know everything. Nothing in science has ever ruled out strict causality and absolute determinism. We lack the knowledge and information necessary to test this.

  28. Madmax: The race is not always won by the swiftest runner & the fight is not always won by the fiercest warrior, but that is the way the smart money always bets.

    For almost 100 years, all the evidence has indicated that strict causality & absolute determinism are no longer viable notions. The smart money is on acausality & probabilistic laws of physics.

    Just as classical laws were excellent models consistent with 19th century evidence, modern quantum physics provides excellent models consistent with currently available evidence. The future might find better models, but it would be amazing if they revived strict determinism.

  29. Gouverneur: Agreed, it would be nice, but unfortunately the problem with revisiting strict determinism is that our understanding (as I recall from quantum physics lectures) is almost negligible. It seems that determinism works well for macro scenarios where our knowledge is more refined. My argument is that, from a logical point of view, the exact same set of initial conditions should always produce the exact same outcome. We perceive randomness when the complexity of an experimental setup is beyond our ability to control those conditions at the truest atomic level, whose depth may never be discovered and which for all we know is infinitesimally precise, and perhaps even non-quantum in nature.

  30. Madmax: Are you aware that the properties of a Bose-Einstein condensate provide experimental evidence supporting the Uncertainty Principle?

    Are you aware that the Uncertainty Principle is not a statement that our measurement technology is inadequate? It is actually a claim that quantum-level entities cannot have a precise position & a precise momentum at the same time.

    Modern physics does not expect the same precise set of initial conditions to result in the same outcome every time. The laws at the quantum level are probabilistic & our macro-level reality is based on the quantum level. It is only the law of large numbers that results in the illusion of strict causality for some macro-level phenomena.
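
    For reference, the principle being argued about is usually stated as the inequality

    \[
    \sigma_x \, \sigma_p \ge \frac{\hbar}{2},
    \]

    and more generally, for any two observables \(A\) and \(B\) (the Robertson form),

    \[
    \sigma_A \, \sigma_B \ge \frac{1}{2} \left| \langle [A, B] \rangle \right|,
    \]

    where the standard deviations describe the spread of measurement outcomes over identically prepared systems.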

  31. Gouverneur: I am not questioning the uncertainty principle at all. Whilst the width of the velocity distribution for atoms indicating the formation of a Bose-Einstein condensate may well be explained by the uncertainty principle, I don’t see how this has been used to prove such a well-established principle. Unless I’ve missed something, it simply complies with it. The uncertainty principle says you can’t accurately MEASURE two related properties at the same time, but it doesn’t really say anything about the proof of randomness. Please note, it is not a statement that “quantum level entities cannot have a precise position & a precise momentum at the same time”.

    You are probably better off arguing in relation to quantum indeterminacy. Going off topic, check how it relates to our free will:
    http://jwwartick.com/2011/11/07/qm-free-will/

  32. Madmax: It is not surprising that you do not accept my POV, since I am not a known Quantum Theory expert. Perhaps it would help if you went to a good library & read a book by some expert.

    Nick Herbert wrote an excellent book titled Quantum Reality. On page xiii of the preface he describes his erroneous Disturbance Model of quantum entities, which assumes that the Uncertainty Principle relates to inadequacy of measurement technology.

    Nick took graduate courses, has taught Quantum Theory, & has done research work in the discipline. He admits to having had an incorrect POV relating to Uncertainty for many years. It is not surprising that others also have various erroneous views.

    I repeat what I have posted earlier: The Uncertainty Principle denies that a Quantum Entity can have a precise position & a precise momentum at the same time.

    Probability is not used due to either ignorance relating to deterministic laws or practical problems in the application of deterministic laws. Probability is used because the same initial conditions can result in different results: That is the Mainstream Expert POV.

  33. Gouverneur: The uncertainty principle states that the combined accuracy of measuring any two related variables in quantum mechanics has a very specific limit. It most certainly does not say that “a Quantum Entity cannot have a precise position & a precise momentum at the same time”. Nor does it point to any inadequacy in our ability to measure them. It says that it’s absolutely impossible to MEASURE these with accuracy, as opposed to saying they can’t HAVE very specific values. Moreover, it does not point to position and momentum specifically; it can be any two related quantities.

    Trying to use the uncertainty principle to say anything about randomness is futile, because the randomness in the error we measure is the randomness we ourselves introduce in the process of taking such a measurement. Nick Herbert made a very bad and fundamentally wrong assumption when he interpreted the uncertainty principle in a way which is very much against its definition, so it’s not surprising he went down a dead end. It is called a “principle” for a very specific reason, and it has nothing to do with an inadequacy in our measurement techniques; otherwise this would have been the focus of ongoing research.

    The point I’ve been trying to make all along is that we cannot re-create the same initial set of conditions for any scenario more than once. Anyone who says they can do this is saying they have complete knowledge of the universe, before they can even begin. For instance we could be observing the transition of a spin state for a certain particle, but not be aware that another particle, unknown to us, has since interacted with it. This is my argument.

  34. I always thought that a random entity is one that, when given 2 identical inputs, gives 2 different outputs. In other words, the output is independent of the input.

  35. rittatt: A function that returns the current time gives different output every time you call it, but it’s not random. If you don’t like that example because it takes no input, change it to a function that takes a number and adds the current time to it.

    Also, a function that is random might return the same output given the same input, though it might not. Independence is subtle, and there’s really no substitute for understanding the mathematical definition. Here’s a post that goes more into that.
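
    A tiny numerical illustration of the definition of independence, P(X = a and Y = b) = P(X = a) P(Y = b), may help. This is purely illustrative; the seed and sample size are arbitrary.

        import random

        rng = random.Random(1)
        n = 100_000
        rolls = [(rng.randint(1, 6), rng.randint(1, 6)) for _ in range(n)]

        # Two separately generated dice: the joint frequency factors, as independence requires.
        p_joint = sum(1 for x, y in rolls if x == 1 and y == 1) / n
        p_x = sum(1 for x, _ in rolls if x == 1) / n
        p_y = sum(1 for _, y in rolls if y == 1) / n
        print(p_joint, p_x * p_y)  # close to each other (both near 1/36)

        # Contrast: Y = 7 - X changes from roll to roll, yet is completely determined by X.
        p_joint_dep = sum(1 for x, _ in rolls if x == 1 and 7 - x == 6) / n
        print(p_joint_dep, p_x * p_x)  # about 1/6 versus about 1/36: not independent

    The second pair varies and even looks “unpredictable” if you watch only one coordinate, but the joint probabilities refuse to factor, which is what the definition is really about.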

  36. Brendan: Nothing in the universe has free will, as every particle is bound by the laws of nature, regardless of how well they are understood. It would otherwise be magic.

  37. Madmax: That is the classical Clockwork Universe theory. There are various arguments against it but the one that I subscribe to is this: If the Universe were deterministic, there would be no reason for its existence to be anything special. It could be simulated accurately on a very large computer in the same way that we run computer simulations of dynamic systems. Because of the size of the Universe, the machine would have to exist in a much larger Universe than ours, one in which it would be possible to build such a machine. If that Universe were also deterministic, it would be possible to simulate it with a machine in an even larger Universe. And on and on until you reach something that is non-deterministic and couldn’t be simulated.

    On the other hand, if the behavior of particles is random, the existence of the Universe is also nothing special. You would need a larger Universe and machine on which to run the simulations, but you could simulate Universes all day and night to your heart’s content. You could look for interesting things that might possibly happen. You could calculate probabilities and run “what if” scenarios. But why would you do this? Presumably you would do it for the same reason we do computer simulations in our Universe, to reason about the real world.

    Either of those scenarios leads to the idea that our lives and the Universe don’t really matter much, and that our Universe exists for the amusement or experiment of some cruel God. Certainly our Universe is either one way or the other, and not both. I prefer to believe that our existence does matter and that we do have a real free will that is not an illusion. And if free will does exist, it is certainly a product of Nature, either a fundamental property or an emergent one.

    If we had perfect knowledge of the state of a particle and complete knowledge of the laws governing its behavior, then one of two things would be the case: either we would predict the behavior perfectly, or we would sometimes be surprised. The Uncertainty Principle implies that this is impossible to test, and we will never be able to know whether what appears to us as randomness is only measurement error or whether there might also be a tiny act of defiance against the laws.

    Therefore what we believe appears to be a choice: Causality or Free Will. I’ll take Free Will. But then again, I’m not a physicist so what do I know.
