On the other hand, if the behavior of particles is random, the existence of the Universe is also nothing special. You would need a larger Universe and machine on which to run the simulations, but you could simulate Universes all day and night to your heart’s content. You could look for interesting things that might possibly happen. You could calculate probabilities and run “what if” scenarios. But why would you do this? Presumably you would do it for the same reason we do computer simulations in our Universe, to reason about the real world.

Either of those scenarios leads to the idea that our lives and our Universe don’t really matter much, and that our Universe exists for the amusement or experimentation of some cruel God. Certainly our Universe is one way or the other and not both. I prefer to believe that our existence does matter and that we have a real free will that is not an illusion. And if free will does exist, it is certainly a product of Nature, either a fundamental property or an emergent one.

If we had perfect knowledge of the state of a particle and complete knowledge of the laws governing its behavior, then one of two things would be the case: either we would predict its behavior perfectly, or we would sometimes be surprised. The Uncertainty Principle implies that this is impossible to test: we will never be able to know whether what appears to us as randomness is only measurement error, or whether there might also be a tiny act of defiance against the laws.

Therefore what we believe appears to be a choice: Causality or Free Will. I’ll take Free Will. But then again, I’m not a physicist so what do I know.

Also, a function that is random *might* return the same output given the same input, though it might not. Independence is subtle, and there’s really no substitute for understanding the mathematical definition. Here’s a post that goes more into that.
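To illustrate that point, here is a minimal sketch in Python (the function names are just for illustration): a deterministic function always maps the same input to the same output, while a “random” function may or may not.

```python
import random

def deterministic(x):
    # a deterministic function: the same input always yields the same output
    return x * 2

def noisy(x):
    # a "random" function: the same input may or may not yield the same output
    return x * 2 + random.randint(0, 1)
```

Calling `deterministic(3)` twice always gives 6 both times; calling `noisy(3)` twice gives 6 or 7 each time, and the two results may or may not agree.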

Trying to use the uncertainty principle to say anything about randomness is futile, because the randomness in the error we measure is the very randomness we ourselves introduce in the process of taking the measurement. Nick Herbert made a very bad and fundamentally wrong assumption when he interpreted the uncertainty principle in a way that runs against its definition, so it’s not surprising he went down a dead end. It is called a “principle” for a very specific reason, and it has nothing to do with any inadequacy in our measurement techniques; otherwise that inadequacy would have been the focus of ongoing research.

The point I’ve been trying to make all along is that we cannot re-create the same initial set of conditions for any scenario more than once. Anyone who says they can do this is saying they have complete knowledge of the universe, before they can even begin. For instance we could be observing the transition of a spin state for a certain particle, but not be aware that another particle, unknown to us, has since interacted with it. This is my argument.

Nick Herbert wrote an excellent book titled [b]Quantum Reality.[/b] On page [b]xiii[/b] of the preface he describes his erroneous [b]Disturbance Model[/b] of quantum entities, which assumes that the Uncertainty Principle relates to inadequacy of measurement technology.

Nick took graduate courses, has taught Quantum Theory, & has done research work in the discipline. He admits to having had an incorrect POV relating to Uncertainty for many years. It is not surprising that others also have various erroneous views.

I repeat what I have posted earlier: The Uncertainty Principle denies that a Quantum Entity can have a precise position & a precise momentum at the same time.

Probability is not used because of ignorance of deterministic laws, or because of practical problems in applying them. Probability is used because the same initial conditions can lead to different results: That is the Mainstream Expert POV.

You are probably better off arguing with relation to quantum indeterminacy. Going off topic, check how it relates to our free will:

http://jwwartick.com/2011/11/07/qm-free-will/

Are you aware that the Uncertainty Principle is not a statement that our measurement technology is inadequate? It is actually a claim that quantum level entities cannot have a precise position & a precise momentum at the same time.

Modern physics does not expect the precise set of initial conditions to result in the same outcome every time. The laws at the quantum level are probabilistic & our macro level reality is based on the quantum level. It is only the law of large numbers which results in the illusion of strict causality for some macro level phenomena.
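The law-of-large-numbers point can be sketched in a few lines of Python (a hypothetical toy, using coin flips as a stand-in for probabilistic micro-events): each individual event is genuinely random, yet the aggregate is almost perfectly predictable.

```python
import random

def macro_average(n):
    # Each micro-level event is genuinely probabilistic (a fair coin flip),
    # yet the average over many events is almost perfectly predictable.
    flips = [random.random() < 0.5 for _ in range(n)]
    return sum(flips) / n
```

With a million flips, `macro_average(1_000_000)` lands within a tiny fraction of 0.5 run after run: randomness at the micro level, the illusion of strict causality at the macro level.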

For almost 100 years, all the evidence has indicated that strict causality & absolute determinism are no longer viable notions. The smart money is on acausality & probabilistic laws of physics.

Just as classical laws were excellent models consistent with 19th century evidence, modern quantum physics provides excellent models consistent with currently available evidence. The future might find better models, but it would be amazing if they revived strict determinism.

>William & many others consider coin flips & dice throws as deterministic, which is a classical physics POV. It would be valid if coins & dice were truly rigid objects rather than being comprised of molecules & atoms.

We have to think about what our idea is based on before continuing the discussion. If our idea is based on deterministic theory, then randomness is nonsense, and Brownian motion is deterministic too, because the positions and motion vectors of all its elements are defined in advance. I would go further and say the universe is deterministic. But there are many theories that we can neither fully accept nor reject, and one of those is quantum theory, or something else based on non-determinism.

[Long rant avoided. There is no time; let me sum up.]

A lot of extremely smart people (von Mises, Savage, de Finetti, the logical positivists, Quine, etc. etc.) have agreed that the interpretation question is important and difficult. Their proposed answers are unsatisfactory and distasteful, even to them.

On the axiom side, it turns out that you can either have axioms too weak to do anything much with, or so strong that they start making pronouncements of physics. We’ve opted for the latter, and so take as an axiom of physics the theorem of probability that asserts there can be no bounded divergent stationary process, no matter how much 1/f noise might look like one.

The context was probability 101.

A random variable is a map, or function, from the sample space to the real number line. As such, it is not random, and it is not a variable.
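That definition can be made concrete with a small Python sketch (the names `omega` and `X` are just for illustration): the “random variable” is a fixed, deterministic function on the sample space, and the randomness lives entirely in which outcome gets drawn.

```python
from itertools import product

# Sample space: the four outcomes of two coin flips.
omega = list(product("HT", repeat=2))

def X(outcome):
    # A "random variable": a fixed, deterministic map from the
    # sample space to the real line -- here, the number of heads.
    return float(outcome.count("H"))

table = {w: X(w) for w in omega}
```

Nothing about `X` ever varies; only the draw from `omega` (say, with `random.choice(omega)`) is random.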

Bob L. Sturm: I would like to know the context of your professor’s remark.

The name is a compromise between intuitive and formal ideas of probability. When you’re rigorously proving probability theorems, you’re working with functions, σ-algebras, etc. There’s no mention of “randomness” or anything mysterious, ultimately just set theory. But terminology such as “event” and “random variable” was chosen to reflect how people interpret and apply the theorems.

Mathematicians only began to study probability seriously around 1800, and so the subject had to use formalisms that had been developed in other contexts. Perhaps if probability had developed earlier, it would have influenced the development of mathematical formalism, and there would be less of a conceptual gap between formal and informal probability.

“What is a random variable? That’s easy. It’s a measurable function on a probability space. What’s a probability space? Easy too. It’s a measure space such that the measure of the entire space is 1.”

… so a variable … is a function?
