I often lurk on religious debate forums, and one of the things I've noticed over the years is that various arguments presented by Christian apologists seem to go in and out of fashion, not unlike bell bottoms and baggy pants. At the moment, something called the Kalam Cosmological Argument (KCA) seems to be in vogue. KCA is a modern riff on the classical cosmological argument, which goes back to antiquity. The Kalam variation goes like this:
Whatever begins to exist has a cause;
The universe began to exist;
Therefore:
The universe has a cause.
If the universe has a cause, then an uncaused, personal Creator of the universe exists, who sans the universe is beginningless, changeless, immaterial, timeless, spaceless and enormously powerful;
Therefore:
An uncaused, personal Creator of the universe exists, who sans the universe is beginningless, changeless, immaterial, timeless, spaceless and enormously powerful.

I've never understood how you get from "uncaused cause" to "personal creator", and I've particularly never understood how you get from "personal creator" in general to Jesus in particular. I have yet to find an apologist willing to even try to explain that one to me. I think I scare them.
But it turns out that the cosmological argument in general, and the KCA variation in particular, can be debunked before you even get to that question because it is simply not true that whatever begins to exist has a cause. There are at least two examples in nature of things that begin to exist without causes. Vacuum fluctuations are the spontaneous creation of particles and their associated anti-particles. Normally these just annihilate each other almost immediately after their creation, but in some circumstances they can create observable effects, so there is no question that they really do happen. The second example is radioactive decay, in which an atom of one element emits a particle and in the process becomes an atom of a different element. Both vacuum fluctuations and radioactive decay are random events. They have no cause. And yet they result in things beginning to exist.
If you believe all that then you can stop reading now. The rest of this post is for those of you who don't believe my bald assertions and demand proof (which is perfectly fine, BTW. You should never accept anything as true simply because someone says so.) In particular, my claim that quantum randomness is truly random is often met with legitimate skepticism, so I thought it would be worthwhile writing down why this is (extremely likely to be) true. In the process of formulating this argument I came up with a completely different and much more powerful (IMHO) refutation of the cosmological argument, which I will write about in the second part of this series.
Causes
Because the issues are subtle I'm going to have to go into some excruciating detail, starting with what it means for something to be caused.

Let's start with a simple example: I flip a light switch and the light comes on. We would say that my flipping the switch caused the light to come on. Actually, my flipping of the switch was the beginning of a chain of "causal events", each one of which was caused by the previous event in the chain: my flipping of the switch caused the completion of a circuit, which caused electricity to flow, which caused the filament of a light bulb to heat up (or, nowadays, the electrons in the atoms of a PN junction in an LED to become excited), which caused some photons to begin to exist. If I hook my light switch up to an Alexa, I can literally say, "Let there be light" and cause light to begin to exist.
Why do we say that I caused the light to come on and not the other way around? It's because causes must precede effects. They cannot reach back in time. A consequence of this is that causes cannot propagate faster than the speed of light, because moving faster than light means going backwards in time in some reference frames. Causes must precede effects in all reference frames.
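To make the light-cone constraint concrete, here is a minimal Python sketch (my own illustration, not drawn from any physics library): one event can influence another only if the second lies in or on the first's future light cone, and that ordering is the same in every inertial reference frame.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def can_cause(event_a, event_b):
    """Return True if event_a could causally influence event_b.

    Events are (t, x, y, z) tuples in one inertial frame, in seconds
    and meters. Influence requires that event_b come strictly after
    event_a and be reachable by a signal no faster than light.
    """
    dt = event_b[0] - event_a[0]
    if dt <= 0:          # the effect must come after the cause
        return False
    dx, dy, dz = (event_b[i] - event_a[i] for i in (1, 2, 3))
    distance = math.sqrt(dx * dx + dy * dy + dz * dz)
    return distance <= C * dt   # inside or on the future light cone

# A switch 3 meters from a bulb can light it one second later...
print(can_cause((0, 0, 0, 0), (1.0, 3.0, 0, 0)))            # True
# ...but nothing here can affect an event one light-year away tomorrow.
print(can_cause((0, 0, 0, 0), (86_400.0, 9.46e15, 0, 0)))   # False
```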
Note that temporal precedence is necessary but not sufficient for one event to be considered the cause of another. Suppose I turn on one light, and then a minute later I turn on a second light. The first light coming on preceded the second (in all reference frames) but it is not the case that the first light coming on caused the second to come on. So causality involves something more than mere temporal precedence.
Figuring out exactly what that "something more" is turns out to be quite tricky. For example, we might hypothesize that the reason I cause both lights to come on and not that the first light causes the second is because I am an agent with free will and the light bulbs aren't. But this is easily disproven: suppose that I am in a room with two lights. One is on, the other is off. The bulb in the first light burns out and I am left in the dark. I fumble around for the switch to the second light and turn it on. Now I am in the middle of a causal chain that resulted in the second light coming on. In this case it is fair to say that the failure of the first light caused the illumination of the second, with me as an intermediate cause. And, of course, we can eliminate me as an intermediate cause by designing an automatic mechanism that turns on the second light when the first one fails.
Another possibility is that effects are "necessary consequences" of causes. In the situation where I turn on one light and then another, the activation of the second light is not a necessary consequence of the activation of the first. I could decide after turning on one light not to turn on the second one. On the other hand, having been turned on, a light cannot just "decide" to stay off.
The situation gets a bit fuzzy in the case of the burned-out bulb because I could have decided to not turn on the backup light and just sit in the dark. Nonetheless, if I do decide to turn on the backup light, the fact that the first light burned out surely had a hand in that. It's not mere coincidence that I turned on the second light right after the first one failed. I turned it on at least in part because the first one failed, notwithstanding that I am (or at least feel like) an agent with free will.
Now let us consider a third scenario: suppose I am a puppet master controlling a marionette in a scene where the marionette activates a light switch. Consider two scenarios, one in which the switch that the marionette activates actually controls the light, and the second in which the switch that the marionette activates is a prop, and the real switch is located off-stage but is still activated by me. In both cases I'm the one who is controlling the light, either indirectly by pulling the marionette's strings, or directly by activating the off-stage switch. In the first case, the marionette is part of the causal chain that activates the light. In the second, it is not.
Now imagine that the marionette is not just a puppet, but is equipped with a sophisticated artificial brain capable of doing scientific reasoning. We have programmed the marionette to not be aware of the fact that we are pulling its strings. It might suspect this to be the case, but it has no access to any direct evidence. The marionette is effectively a Calvinist, and we are playing the role of God.
Now we walk the marionette through a series of experiments where it turns the light switch (the one on the stage) on and off. It observes a 100% correlation between the state of the switch and the state of the light, and also a time delay sufficient for the propagation of a causal effect from the switch to the light. Now we ask it: are you living in a world where your switch actually causes the light to come on, or are you living in a world where your switch is just a prop, and the light is actually controlled by a switch hidden off stage where you can never see it?
Randomness
Let's leave our marionette to ponder this question while we consider a second question: what does it mean for something to be "truly random"? Let me illustrate this with another familiar example: suppose we flip a coin. While the coin is spinning in the air the outcome (heads or tails) is unknown to us. Now we catch the coin and flip it over onto our wrist in the traditional manner. At this point the outcome is still unknown to us. Nevertheless, the coin is now in a fundamentally different kind of state than it was while it was spinning. Its state is still unknown to us, but it is determined. We may not know whether it is heads or tails, but we do know that either it is heads or it is tails. This was not the case while it was spinning. While it was spinning it was neither heads nor tails. It was spinning.

Now, it is possible that there are two kinds of spinning states, one of which inevitably leads to the coin landing heads and the other of which inevitably leads to the coin landing tails. If this is the case, then it would be fair to say that the outcome was determined even before the coin actually landed, and so the spinning states are not fundamentally different from the landed-but-covered states. There is a spinning-but-going-to-land-heads state and a spinning-but-going-to-land-tails state. We may not be able to tell them apart, but that doesn't change the (hypothetical) fact that these two states exist. Our inability to tell them apart may simply be a technological limitation. If we had x-ray vision we would be able to distinguish the landed-heads (but still covered) state from the landed-tails (but still covered) state. Maybe if we had just the right kind of high speed camera and trajectory analysis software we could distinguish spinning-and-going-to-land-heads from spinning-and-going-to-land-tails.
There are two other possibilities: one is that the coin flip is truly random, which is to say, that there really is only one spinning state. Our inability to predict the outcome is not a technological limitation. Even if we had arbitrary super powers — indeed, even if we were God — we would not be able to predict the outcome of the experiment. The second possibility is that the outcome is not truly random, but the mechanism that determines the outcome is hidden from us. In this case we can't predict the outcome with any amount of technology, but God can.
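A programming analogy may help here (a sketch of my own, an analogy rather than a physical claim): a seeded pseudo-random number generator plays the role of hidden state, since its output looks random but is fully determined by the seed, while an operating-system entropy source plays the role of true randomness. From the outside, the two streams are statistically indistinguishable.

```python
import random
import secrets

# "Hidden state": a seeded PRNG. The flips look random, but anyone
# who knows the seed (the hidden state) can predict every one.
rng = random.Random(42)
flips = [rng.random() < 0.5 for _ in range(10)]

# Reconstruct the hidden state and the "random" sequence reappears.
rng_clone = random.Random(42)
assert flips == [rng_clone.random() < 0.5 for _ in range(10)]

# Stand-in for true randomness: an OS entropy source with no seed we
# can recover. No statistical test on the outputs alone distinguishes
# the two kinds of stream; only access to the state does.
opaque_flips = [secrets.randbits(1) for _ in range(10)]
print(flips, opaque_flips)
```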
Where all this matters is not flipping coins, but quantum mechanics. The outcomes of quantum mechanics experiments appear to be random, like coin flips. The question is: Is our inability to predict the outcome a technological limitation? Or is there hidden state that we can't access? Or is this really true randomness?
We can quickly dispense with the first possibility. The randomness of outcomes is a fundamental part of the quantum mechanical formalism, a direct logical consequence of the theory's mathematical structure, just as the constancy of the speed of light is a fundamental part of the mathematical structure of relativity. If a way were ever discovered to predict the outcome of a quantum experiment that would mean that quantum mechanics was completely, totally, utterly wrong. And quantum mechanics is one of the best confirmed scientific theories ever. No experiment has ever disagreed with its predictions. (Indeed, theoretical physicists consider this to be a serious problem because it leaves them with no guidance on how to make further progress!)
The fact that quantum outcomes really cannot be predicted is directly confirmed by experiment: the phenomenon of interference depends crucially on the unpredictability of quantum experiments. The presence or absence of interference is a direct reflection of whether or not the particle is in a predictable or unpredictable state.
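Here is a toy calculation (my own sketch, using two arbitrary equal-amplitude paths) showing why: when the path is unpredictable you add amplitudes, and the cross term produces interference fringes; when the path is knowable even in principle you add probabilities, and the fringes vanish.

```python
import cmath
import math

def relative_intensity(phase, path_knowable):
    """Two equal-amplitude paths with relative phase `phase`."""
    a = 1 / math.sqrt(2)                      # amplitude via path 1
    b = cmath.exp(1j * phase) / math.sqrt(2)  # amplitude via path 2
    if path_knowable:
        # Predictable path: probabilities add; no interference.
        return abs(a) ** 2 + abs(b) ** 2      # always 1
    # Unpredictable path: amplitudes add; interference term appears.
    return abs(a + b) ** 2                    # 1 + cos(phase)

for phase in (0.0, math.pi / 2, math.pi):
    print(f"phase {phase:.2f}: "
          f"unpredictable={relative_intensity(phase, False):.2f} "
          f"predictable={relative_intensity(phase, True):.2f}")
```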
So that leaves two possibilities: one is that there is hidden state. The second is that quantum randomness is really, truly random, and that quantum events are really truly "uncaused", and so quantum mechanics is a direct experimental refutation of the central premise of the Kalam cosmological argument.
For decades it was believed by physicists that this question could not possibly be resolved. Indeed, Einstein famously cited this apparent impossibility as evidence that there must be something wrong with quantum mechanics. The mere possibility that there could be true randomness (or even hidden state) bothered him to the point where he quipped that God does not play dice (to which Niels Bohr replied that Einstein should not tell God what He can and cannot do).
But it turns out that this question can be resolved. Not only that, but it can be resolved experimentally. The way to resolve the question was discovered by John Bell in 1964. The first experiment was conducted in 1972. The results have since been reproduced many, many times, and they are absolutely clear: quantum randomness is true randomness. There is no hidden state. If you want to know the details, I recommend David Mermin's excellent exposition (also available in this splendid book).
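For a flavor of what Bell's result says without the full derivation, here is a short sketch of the CHSH form of the inequality. Any local-hidden-state theory must satisfy |S| ≤ 2 for the quantity computed below; quantum mechanics predicts, and experiment confirms, values up to 2√2.

```python
import math

# CHSH quantity: S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), where
# E is the correlation between outcomes at detector angles a and b.
# Every local-hidden-state model obeys |S| <= 2. Quantum mechanics
# predicts E(a, b) = -cos(a - b) for spin-singlet pairs.

def E(a, b):
    return -math.cos(a - b)

a, a2 = 0.0, math.pi / 2              # Alice's two settings
b, b2 = math.pi / 4, 3 * math.pi / 4  # Bob's two settings

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))   # 2*sqrt(2) ~ 2.83 > 2: no local hidden state fits
```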
Non-local hidden state
Now, I have to give one caveat to this result. The Bell inequalities don't rule out all hidden state, they only rule out local hidden state, that is, hidden state that is physically located at the place where and time when the experiment is conducted. The results do not rule out the possibility of non-local hidden state, that is, state which not only do we not have access to, but which is located someplace other than where-and/or-when the experiment happens.

Does this rescue the cosmological argument? No, it doesn't. Why? Because eliminating the possibility of non-local hidden state is a logical impossibility. Why? Because the universe is finite. There are a finite number of particles, and there is only a finite amount of time between the Big Bang and the heat death of the universe. Therefore, in the entire lifetime of the universe we can only ever do a finite number of experiments. The outcomes of all those experiments can be written down as a finite string of bits, and those bits can be found somewhere in the expansion of (say) pi. So we cannot ever on the basis of any experiment rule out the possibility that the outcome of that experiment has been pre-determined by some cosmic Turing machine computing the digits of pi. So even if God exists and is pulling the quantum strings, we can never tell, at least not on the basis of the outcome of any experiment, not even an experiment that violates the predictions of QM!
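To see how cheap this "cosmic Turing machine" loophole is, here is a sketch (it assumes the third-party mpmath library, and the outcome string is of course hypothetical): any short record of experimental results can be found among the digits of pi, and if pi is normal, which is widely believed but unproven, every finite record occurs somewhere.

```python
from mpmath import mp  # third-party: pip install mpmath

mp.dps = 100_000                      # 100,000 decimal digits of pi
digits = str(mp.pi).replace('.', '')

record = "0110"  # hypothetical 4-bit record of experimental outcomes
print(digits.find(record))  # offset of first occurrence (-1 if absent)
# A record this short is overwhelmingly likely to appear, and if pi
# is normal (unproven, but with no counterevidence) *every* finite
# record occurs at some offset. So no finite set of experiments can
# rule out "the digits of pi" as the non-local hidden state.
```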
I've pointed all this out to a number of Christians. They all responded (essentially) that if you cannot rule out the possibility of non-local hidden state, then you cannot rule out the possibility that it exists and is in fact God. Well, that's true. But it turns out that this apparent concession doesn't help the cause of theology at all, and in fact only makes things much, much worse. Explaining why will be the subject of the second post in this series.
Love the QM & religion stuff; looking forward to your part 2.
That said, a minor quibble: You recommend Mermin's exposition, but he starts with "We now know that the moon is demonstrably not there when nobody looks." I hate that kind of misleading, layman's "paradox" description of QM. OK, so Mermin was writing in 1981, so maybe I should give him a little more leeway. But that's a terrible, horrible description of the consequences of QM.
The rest of the article, about the conflict and later resolution between EPR and Bell's inequality, is pretty neat. But Mermin seems to only consider either (local) hidden variables, or else the Copenhagen interpretation. He rightly (necessarily) rejects local hidden variables (because of Bell).
That doesn't mean that Copenhagen is right. That's a stupid interpretation too.
The moon's existence does not require humans to look at it.
Actually, the moon is demonstrably not there even when you do look. :-)
Efficient Cause
@Ron
>it is simply not true that whatever begins to exist has a cause. There are at least two examples in nature of things that begin to exist without causes. Vacuum fluctuations are the spontaneous creation of particles and their associated anti-particles. Normally these just annihilate each other almost immediately after their creation, but in some circumstances they can create observable effects, so there is no question that they really do happen. The second example is radioactive decay, in which an atom of one element emits a particle and in the process becomes an atom of a different element. Both vacuum fluctuations and radioactive decay are random events. They have no cause. And yet they result in things beginning to exist.
1. Truly without cause?
Both virtual particles and radioactive decay take place within a larger structure.
Virtual particles originate from the quantum vacuum, which contains energy (vacuum energy: https://en.wikipedia.org/wiki/Vacuum_energy). Random fluctuations of this energy field create the virtual particles.
Radioactive decay comes from atomic nuclei. Radioactive decay happens when the atomic nucleus is rearranged into a lower energy state. This rearrangement has an activation energy, which is randomly supplied by quantum vacuum fluctuations.
Then there is the de Broglie-Bohm theory of quantum mechanics, which is deterministic.
2. Why can't random be a cause?
If some "thing" comes from sampling a random process, can't we say that "thing" is caused by the random process? So what if the underlying physical process is modeled with a a distribution?
Does smoking cause lung cancer? Does war cause death? Well, not always - each only increases the probability of the outcome. This brings us to Probabilistic causation.
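As a concrete illustration of treating a random process as a cause (a sketch of my own, with carbon-14's half-life used purely as an example): radioactive decay is memoryless, so individual decay times are unpredictable, yet they follow an exponential distribution that is a perfectly good probabilistic handle on the phenomenon.

```python
import math
import random

# Radioactive decay as a memoryless random process: each nucleus has
# a fixed decay probability per unit time, so decay times follow an
# exponential distribution. Half-life below is carbon-14's, in years.
HALF_LIFE = 5730.0
RATE = math.log(2) / HALF_LIFE

random.seed(0)  # reproducible demo
times = [random.expovariate(RATE) for _ in range(100_000)]

mean = sum(times) / len(times)
print(mean, 1 / RATE)  # sample mean ~ 1/RATE = HALF_LIFE / ln 2
# No individual decay is predictable, yet the *distribution* is a
# stable, testable consequence of the physics, which is exactly the
# sense in which a random process can be said to cause things.
```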
> the universe is finite. There are a finite number of particles, and there is only a finite amount of time between the Big Bang and the heat death of the universe. Therefore, in the entire lifetime of the universe we can only ever do a finite number of experiments.
These claims are not true according to our current best-fit model of the universe. Our current best-fit model is that the universe is infinite and contains an infinite number of particles. Also, the "heat death of the universe" is not an end state that happens at a finite time: it is something that the universe approaches asymptotically for an infinite time into the future. All of which combines to make it possible that an infinite number of experiments could be done over the lifetime of the universe.
@Peter:
ReplyDelete> Our current best-fit model is that the universe is infinite and contains an infinite number of particles.
Nope, at least not in the observable universe (which is all that matters for the question of doing experiments). The big bang happened a finite amount of time in the past (13.7 billion years) so the observable universe is of finite size. Nothing further than 13.7 billion light years away can influence us. For there to be an infinite number of particles the observable universe would have to be infinitely dense. Not only is the number of elementary particles finite, we can estimate it pretty accurately: it's about 10^80 or so. The number can get as high as 10^90 if you count neutrinos.
> the "heat death of the universe" is not a end state that happens at a finite time: it is something that the universe approaches asymptotically
Doesn't matter. At some point the temperature of the universe becomes sufficiently uniform that it is no longer possible to do any useful work without violating the second law of thermodynamics. That happens in a finite amount of time.
Look at it another way: to do an infinite number of experiments we would need an infinite amount of usable energy.
> at least not in the observable universe
But the observable universe gets bigger as the universe gets older. If the universe will last for an infinite time, which is what our best current model says, then the observable universe's size could increase without bound.
There is a caveat to this, though. If the universe stays dark energy dominated, which our best current model says it will, then even though the "size" of our observable universe increases without bound, the *total amount of matter* in it will not, because accelerating expansion will move things away from us faster than the Hubble radius increases.
Even that is not sufficient to establish your point, though, because you don't need an infinite quantity of matter to do an infinite number of experiments. You only need an infinite amount of time. So the real question is whether there will be an infinite amount of time available. See below.
> The big bang happened a finite amount of time in the past (13.7 billion years) so the observable universe is of finite size. Nothing further than 13.7 billion light years away can influence us.
Actually, nothing further away than about 47 billion light-years, in the standard coordinates used in cosmology, can have influenced us up to this point. It's 47 billion, not 13.7 billion, because of the expansion of the universe; an object whose light is just reaching us at this instant has moved further away from us since it emitted the light.
> At some point the temperature of the universe becomes sufficiently uniform that it is no longer possible to do any useful work without violating the second law of thermodynamics. That happens in a finite amount of time.
This argument assumes that there is a finite lower bound to the amount of energy that must be expended to do useful work. That's not necessarily the case. Energy can be stored in quanta of radiation, for which there is no minimum energy, since radiation is massless (for a massive particle like an electron, its rest mass sets a finite lower bound to the energy it can contain). And radiation can do useful work regardless of how small the energy stored per quantum is. At least, that's what our current best theories, which assume that spacetime is a continuum, say.
The real theoretical issue here is whether spacetime itself is quantized at some very small scale like the Planck scale. If it is, then that might set a finite lower bound to the size of a quantum that can do useful work. But we won't know whether that's true until we have a good theory of quantum gravity. So the best we can say now is that it might turn out that your claim about only being able to do a finite number of experiments is true; but if it is, it won't be for quite the reasons you are giving.
> But the observable universe gets bigger as the universe gets older.
Sure, but it doesn't acquire any additional mass/energy, nor does it acquire any negative entropy. And the fact that it's getting bigger only makes things worse because it means that the mass/energy that is already here is slipping irretrievably out of reach.
> There is a caveat to this, though. If the universe stays dark energy dominated, which our best current model says it will, then even though the "size" of our observable universe increases without bound, the *total amount of matter* in it will not, because accelerating expansion will move things away from us faster than the Hubble radius increases.
The value of the cosmological constant has nothing to do with the limit on experiments. Even if the universe were static or contracting that would not change anything. Mass/energy is conserved, and the second law holds. That's all you need.
> Actually, nothing further away than about 47 billion light-years
Yeah, yeah, whatever. It's still a finite number.
> This argument assumes that there is a finite lower bound to the amount of energy that must be expended to do useful work.
That's true, but that seems like a pretty safe assumption to me. The only condition under which it would not hold is if you could do science with purely non-dissipative (and hence reversible) processes. Among other things, that would mean that humans could not participate, or even be made aware of the results. And not just humans: *anything* that we would currently recognize as a life form or experimental device could not participate.
> The real theoretical issue here is whether spacetime itself is quantized at some very small scale like the Planck scale.
No, that's a red herring. The limit on experimentation is imposed by the second law of thermodynamics, not quantum mechanics. There would still be a limit even if physics were purely classical.
> it doesn't acquire any additional mass/energy
If you are talking about the observable universe, yes, it can. Or it can lose it. The former will happen if the universe is radiation or matter dominated, because the Hubble radius will increase faster than the rate at which objects move apart due to expansion; the latter will happen if the universe is dark energy dominated, because the Hubble radius will increase slower than the rate at which objects move apart due to expansion.
> Mass/energy is conserved
Not globally in a curved spacetime. See here for a good layman's discussion by Sean Carroll:
http://www.preposterousuniverse.com/blog/2010/02/22/energy-is-not-conserved/
> that seems like a pretty safe assumption to me
It doesn't to me. It costs kTln2 energy to compute one bit of information. So there can only be a finite lower bound to this energy if there is a finite lower bound to T. But according to our best current model, T has no finite lower bound above absolute zero; the universe will continue to get colder and colder, never reaching a constant temperature (since it takes infinite time to reach absolute zero). So kTln2 will continue to decrease.
> The only condition under which it would not hold is if you could do science with purely non-dissipative (and hence reversible) processes.
The kTln2 lower limit assumes dissipative processes.
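To put numbers on this (a sketch of my own; the kTln2 figure is the Landauer bound, strictly the minimum cost of erasing one bit, and the temperatures below are illustrative): the bound scales linearly with T, so it falls without limit as the universe cools.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact SI value)

def landauer_limit(T):
    """Minimum energy in joules to erase one bit at temperature T."""
    return K_B * T * math.log(2)

# Room temperature, the CMB temperature today, and ever-colder
# illustrative temperatures far into the future:
for T in (300.0, 2.7, 1e-6, 1e-30):
    print(f"T = {T:g} K -> {landauer_limit(T):.3e} J per bit")
# The cost per bit shrinks in proportion to T and never hits a
# temperature-independent floor.
```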
> we cannot ever on the basis of any experiment rule out the possibility that the outcome of that experiment has been pre-determined by some cosmic Turing machine computing the digits of pi.
Btw, since I've been pointing out what look to me like errors in some of the physics claims made in the argument in this article, I should also say that I still think the statement quoted above is true! Here's why: even if it is in principle possible to run an infinite number of experiments, it would take an infinite amount of time to do so. So there would never be any finite time (and any time we actually experience will be some finite time) at which we would be able to say with certainty that we had ruled out non-local state such as the cosmic Turing machine. So I think the essence of the above claim is still true, even if we have to tweak somewhat the exact physics statements.
"Causes must precede effects in all reference frames." It seems to me that causes must precede effects only iin the reference frame(s) that are constrain a given instance, namely, an instance (or frame(s)) in which the cause is observed to precede the effect,