Monday, March 20, 2017

Causality and Quantum Mechanics: a Cosmological Kalamity (Part 2 of 2)

This is the second in a two-part series of posts about the Kalam Cosmological Argument for the existence of God.  If you haven't read the first part you should probably do that first, notwithstanding that I'm going to start with a quick review.

To recap: the KCA is based on the central premise that "whatever begins to exist has a cause."  But quantum mechanics provides us with at least two examples of things that begin to exist without causes: radioactive decay results in the decay products beginning to exist, and vacuum fluctuations result in the virtual particles beginning to exist.  In the latter case, the particles are created literally from nothing, but that's just a detail, a little icing on the cosmological cake.  The KCA premise isn't about whether or not things that begin to exist are fashioned from previously existing materials.  It only speaks of causes.  And quantum events don't have causes — at least not local causes — as shown by Bell's theorem.  Bell's theorem actually does more than rule out local causes: it rules out all local hidden state models, not just causal ones.  So there are only two possibilities: either quantum events are not (locally) caused, or quantum mechanics is wrong.
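The Bell/CHSH bound can be made concrete with a small numerical check. The sketch below (an illustration added here, not part of the original argument) evaluates the CHSH quantity using the textbook singlet-state correlation E(a,b) = -cos(a-b); any local hidden-variable model must satisfy |S| <= 2, but quantum mechanics exceeds that bound:

```python
import math

def E(a, b):
    """Quantum correlation for spin measurements on a singlet pair along angles a, b."""
    return -math.cos(a - b)

# Standard CHSH measurement angles (radians)
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

# Local hidden-variable models satisfy |S| <= 2; QM predicts 2*sqrt(2) here.
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)

print(abs(S))  # 2.828..., i.e. 2*sqrt(2) > 2
```

Experiments (Aspect and his successors) measure values close to 2*sqrt(2), which is why local causes are off the table.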

But Bell's theorem does not rule out non-local hidden state, and so it does not rule out non-local causes.  Indeed, what Bell's theorem shows us is that quantum states are in general non-local: an entangled system is a system with a single quantum state spread out over multiple locations.  So could this be the source of quantum causality?

No, it couldn't.  Non-local causality is ruled out both by relativity and by the no-communication theorem.  For a non-local state to be causal, the causal effect would have to propagate faster than the speed of light, otherwise it would just be an ordinary run-of-the-mill everywhere-local chain of causation.  Some popular accounts of entanglement would have you believe that this (faster-than-light causality) does happen, but it doesn't.  Measuring one member of an entangled pair does not change the state of its partner.  So non-local quantum states cannot be causal.

There is one last possibility: maybe there is some other kind of state in the universe, some non-local non-quantum state.  As I noted at the end of part 1 we can never rule out this possibility on the basis of any experiment.  Indeed, we can even demonstrate a hypothetical non-local state that would account for all possible observational data: a cosmic Turing machine computing the digits of pi, starting from a position chosen so that they correspond exactly with the outcomes of all experiments that have ever been done or will ever be done.  Assuming pi is normal, such a position will always exist.

This is the fundamental problem with hidden state: it's hidden.  Our universe could be run by a cosmic Turing machine, or it could be a simulation built by intelligent aliens.  We can't eliminate either possibility, nor a myriad others, on the basis of experiment.

When I have pointed this out to Christians their response has been: what difference does it make if we're in a simulation?  The aliens were still created by God.  But in fact this possibility is devastating not just to the KCA, but to all theological arguments.  If the universe is a simulation, then I can accept all of the claims of theologians at face value, and still not get to God.  I can accept that Jesus was a real historical figure, that he really did perform miracles, that he really was crucified and rose from the dead, that he really did claim to be God, that the scriptures are the literal truth, that the Flood really happened, that the earth is 6000 years old.  I can accept all of that and still not believe in God because all of that could have just been built into our simulation by the aliens who designed it.

I can even accept the cosmological argument and still not believe in any particular god.  I only have to accept the uncaused cause in the abstract.  I cannot possibly know anything about God's true nature because all of the information I have at my disposal is filtered through the intelligent aliens who built this simulated universe that I live in.  The information that I have access to may or may not reflect the actual metaphysical truth.  In fact, the aliens that built our universe may themselves not know the metaphysical truth because they themselves could be living in a simulated universe built by meta-aliens.

There could be an arbitrary number of layers of simulation between us and the uncaused cause.  For us to have accurate information about God, that information would have to somehow percolate down through all of those layers without being altered.  It would be like a cosmic game of Chinese whispers.  The odds of the truth emerging unscathed down here at the very bottom of the hierarchy are indistinguishable from zero.

It is worth noting that we may not be at the bottom of the hierarchy for long.  We are on the verge of being able to create simulated universes of our own.  When that happens, will the artificially intelligent inhabitants of that universe have souls?  Unless we are 100% certain that the answer to that question is yes, how can we be sure that we have souls?

In sum, the KCA is completely untenable.  Its central premise is refuted empirically by quantum mechanics.  Even if this were not the case, the KCA only gets you to some unknown uncaused cause.  The nature of the uncaused cause cannot be determined by any experiment, since no experiment can rule out the cosmic Turing machine.

Furthermore, the possibility of simulated worlds is devastating not just to the cosmological argument but to all religious arguments.  Even if you accept all religious claims at face value, you still have to either show that information about God necessarily propagates reliably into a simulation, or somehow prove that our universe is not a simulation, that we are living in the One True Universe, and that any simulations we create will be the first level down.  Otherwise, even in the face of miracles and revelations we cannot know if they are the work of God or the aliens who programmed our simulation.  Or, what is most likely of course, of our own ancestors' imaginations.

21 comments:

Publius said...

It's Turtles All The Way Down

@Ron
>This is the fundamental problem with hidden state: it's hidden. Our universe could be run by a cosmic Turing machine, or it could be a simulation built by intelligent aliens. We can't eliminate either possibility, nor a myriad others, on the basis of experiment.

At this point, you start to veer from the philosophical (even if that philosophy is a particular interpretation of quantum mechanics) to experimental proof.

One can't run a physics experiment to discover God.

The nature of God and the heavens is more like the Fly of Despair.

Yet you always default to physics, which is queen of the hard sciences. What about the other sciences? Could archaeology prove the existence of God? How about sociology? Psychology? Biochemistry? Geology? Linguistics?

> If the universe is a simulation, then I can accept all of the claims of theologians at face value, and still not get to God. ... I can accept all of that and still not believe in God because all of that could have just been built into our simulation by the aliens who designed it.

>I can even accept the cosmological argument and still not believe in any particular god. I only have to accept the uncaused cause in the abstract. I cannot possibly know anything about God's true nature because all of the information I have at my disposal is filtered through the intelligent aliens who built this simulated universe that I live in. The information that I have access to may or may not reflect the actual metaphysical truth. In fact, the aliens that built our universe may themselves not know the metaphysical truth because they themselves could be living in a simulated universe built by meta-aliens.

It's Turtles all the way down.

Yet, once again, we observe that everything we need to know can be found in Star Trek.

Odo: Has it ever occurred to you that you believe the Founders are gods because that's what they want you to believe? That they built that into your genetic code?

Weyoun 6: Of course they did. That's what gods do. After all, why be a god if there's no one to worship you?

From Star Trek Deep Space Nine: Treachery, Faith and the Great River.

>Furthermore, the possibility of simulated worlds is devastating not just to the cosmological argument but to all religious arguments. Even if you accept all religious claims at face value, you still have to either show that information about God necessarily propagates reliably into a simulation, or somehow prove that our universe is not a simulation,

Really? One has to prove that the universe isn't a simulation? I don't see it as a serious hypothesis; it's science fiction. If you believe it, though, watch out for Roko's basilisk.

>Otherwise, even in the face of miracles and revelations we cannot know if they are the work of God or the aliens who programmed our simulation. Or, what is most likely of course, of our own ancestors' imaginations.

It's such a better story to believe that all of the mass and energy of the universe was compressed into a single particle the size of a quark, which spontaneously exploded, thereby creating the universe.

Ron said...

> The nature of God and the heavens is more like the Fly of Despair.

I don't know if that link was a mistake or not, but if not, then things like that make it very hard for me to take you seriously.

> If some "thing" comes from sampling a random process, can't we say that "thing" is caused by the random process?

You can say anything that you want. You can say that colorless green ideas sleep furiously. Watch: colorless green ideas sleep furiously. Whether what you say is true (or even meaningful) is another matter.

Saying that an event is caused by "the random process" or even "a random process" is the *same thing* as saying that the event has no cause. There is no local state that correlates with the event. That is the *definition* of not having a cause.

> One can't run a physics experiment to discover God.

Oh? Then how does one distinguish God from things that don't exist? Because in science, not being able to run an experiment to demonstrate the existence of something is the *definition* of not existing.

(Maybe you should re-read 31 flavors of ontology.)

> What about the other sciences? Could archaeology prove the existence of God? How about sociology? Psychology? Biochemistry? Geology? Linguistics?

All of these things are subservient to physics because they study the behavior of things that are made of atoms, and atoms behave according to physics.

> It's such a better story to believe that all of the mass and energy of the universe was compressed into a single particle the size of a quark, which spontaneously exploded, thereby creating the universe.

Better compared to what? That the universe was created by a jealous deity who is so insecure that he will condemn sentient beings to eternal suffering if they don't worship him? And that this sad state of affairs has come about because one of our ancestors was snookered by a talking snake into eating a piece of fruit?

Damn straight the big bang is a better story than *that*.

Publius said...

Rogue Random

@Ron:
>Saying that an event is caused by "the random process" or even "a random process" is the *same thing* as saying that the event has no cause. There is no local state that correlates with the event. That is the *definition* of not having a cause.

This seems curiously ill-defined. Given the above, the two examples you give - virtual particles and radioactive decay - have causes. The time between virtual particle creation/destruction or radioactive decay events may be unpredictable - random - but once it's happened, we have models for the cause.

Yet staying with your model for a moment, would these phenomena also have no cause?
1) Rogue waves? If you object to ocean waves, then how about Peregrine solitons?
2) Winning lottery numbers?

Random Numbers

Consider that, for any physical phenomenon, we could take measurements and use them to generate sequences of random numbers.

What makes a set of numbers "random"?
1. The numbers are unpredictable; there is no fixed rule governing their selection.
2. The numbers are independent, or uncorrelated.
3. They are unbiased, or uniformly distributed.

Does the black box have a causal process?

Say I present you with two boxes, a black one and a red one.
A. The red box produces a continuing sequence of random numbers, which are based on measurements from radioactive decay.
B. The black box also produces a continuing sequence of random numbers.

After any length of examination, the sequences from the red box and the black box pass all the tests of numerical randomness.

Can you then conclude that the process inside the black box is from a physical process with "no cause", like radioactive decay?

>Oh? Then how does one distinguish God from things that don't exist? Because in science, not being able to run an experiment to demonstrate the existence of something is the *definition* of not existing.

Well, then you will have quite a problem with cognitive dissonance!

Does the Universe not exist? You can experiment in the observable universe, but you cannot know anything by direct experimentation of the volume of the universe beyond that.

You also can't run an experiment to determine if I was (or wasn't) thinking of elephants last week.

Wait - I'm thinking of a number. Hah, you can't guess it with certainty! Yet it exists! It's between 0 and 4. Hah, you still can't do it!

>> What about the other sciences? Could archaeology prove the existence of God? How about sociology? Psychology? Biochemistry? Geology? Linguistics?

>All of these things are subservient to physics because they study the behavior of things that are made of atoms, and atoms behave according to physics.

Full reductionism then?

I look forward to your physics description of traffic waves (see also here), Charles Bonnet Syndrome, and the use of stone tools.

>That the universe was created by a jealous deity who is so insecure that he will condemn sentient beings to eternal suffering if they don't worship him? And that this sad state of affairs has come about because one of our ancestors was snookered by a talking snake into eating a piece of fruit?

Still creating unskilled religions.

Yet the story of the big bang requires the inclusion of inflation, which can't be falsified. What would Karl Popper say?

Ron said...

@Publius:

(Sorry for the long delay in responding. I've been on the road.)

> once it's happened, we have models for the cause.

Really? What is it? (Note that a demonstrably correct answer to that question will win you a Nobel prize in physics.)

> would these phenomena also have no cause?

No. All classical phenomena have causes.

Quantum physics is fundamentally different from classical physics. It all hinges on Bell's theorem, which only applies to quantum systems. If you don't understand Bell's theorem you cannot possibly understand my argument.

> You also can't run an experiment to determine if I was (or wasn't) thinking of elephants last week.

That's not true. For example, I can ask you: were you thinking of elephants last week? That experiment may not produce reliable results, but then again it might. There's nothing in the laws of physics that *prevents* that experiment from producing reliable results.

By way of contrast, the laws of physics *prohibit* any experiment that allows you to predict when a particular radioactive decay will occur. (See Bell's theorem.)

> inflation, which can't be falsified

Why do you think inflation can't be falsified? Of course it can. Large-scale anisotropy in the cosmic background radiation would falsify it.

Publius said...

Random != Uncaused


> once it's happened, we have models for the cause.

>Really? What is it? (Note that a demonstrably correct answer to that question will win you a Nobel prize in physics.)

Consider alpha decay. When we observe an alpha particle, we know that it tunneled through the Coulomb barrier of its host nucleus. Is this not a description of the "cause" of that alpha particle?

>By way of contrast, the laws of physics *prohibit* any experiment that allows you to predict when a particular radioactive decay will occur. (See Bell's theorem.)

That would be prediction, not causation. Time is the axis by which we order events that we seek to associate with cause and effect.

I also can't predict when 23 36 51 53 15 will be the winning sequence for the Powerball lottery. That does not mean that the winning powerball sequences are uncaused.

In the realm of prediction (which is not causation), we have models for radioactive decay.

Consider alpha decay of Radium-226 (Z=88). After decay, the daughter nucleus will be Radon-222 (Z=86).

The alpha particle has a Bohr radius of about 7.25 fm, with an energy of about 0.0993 MeV. The radius of the radon nucleus is about 7.27 fm. Therefore, the probability for alpha decay, with an alpha particle energy of 5 MeV, is:
p = exp[-4*pi*86*sqrt(0.0993/5) + 8*sqrt(86*(7.27/7.235))]
= exp[-78.00]
= 1.32E-34

This is per collision of the alpha particle with the nuclear barrier.
The alpha particle will collide with the barrier about 10^21 times per second.
The predicted decay rate is therefore about 1E-13 decays/second. This reasonably agrees with the observed value of 1.4E-11 decays/sec.
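The arithmetic above can be checked directly; here is the same estimate as a short script (using the numbers as given in the comment; this is the standard Gamow tunnelling estimate, not a new model):

```python
import math

Z = 86                      # charge of the daughter nucleus (Radon-222)
E_ratio = 0.0993 / 5.0      # alpha Bohr energy / alpha kinetic energy (MeV / MeV)
R_ratio = 7.27 / 7.235      # radon nuclear radius / alpha Bohr radius (fm / fm)

# Gamow tunnelling probability per collision with the barrier
p = math.exp(-4 * math.pi * Z * math.sqrt(E_ratio) + 8 * math.sqrt(Z * R_ratio))

collisions_per_second = 1e21
rate = p * collisions_per_second    # predicted decays per second
```

This gives p on the order of 1e-34 per collision and a predicted rate of roughly 1e-13 decays per second, matching the figures quoted above.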

... or you could calculate it with the de Broglie-Bohm theory.

It appears that alpha decay is well understood in terms of the nuclear rearrangement due to the emission of an alpha particle and good models of the probability of it happening. No sign of an uncaused cause.

>> inflation, which can't be falsified

>Why do you think inflation can't be falsified? Of course it can. Large-scale anisotropy in the cosmic background radiation would falsify it.

Ack, "inflation" isn't specific enough - the theory has evolved over time. It can, perhaps, be separated into classic inflation and postmodern inflation. Classic inflation certainly has a lot of problems and has hence been abandoned in favor of postmodern inflation. Yet it is turning into a <a href="https://goo.gl/9RTtn2">Theory of Anything</a>.

Publius said...

Theory of Anything.

Ron said...

@Peter:

> > Mass/energy is conserved

> Not globally in a curved spacetime.

I debated with myself when I wrote that whether to say "momenergy is conserved" rather than mass/energy, but decided that would be too pedantic. It's hard to tell how much physics a commenter knows, and my blog is targeted towards a general audience.

Yes, you're right. Mass/energy is not conserved, momenergy is the conserved quantity. But these details don't matter. What matters is that there is some conserved quantity in this universe, and that, plus the second law, puts a limit on how much computation you can do.

> It costs kTln2 energy to compute one bit of information.

No, it costs *at least* that much. The Landauer limit is a lower bound (strictly, a bound on the heat dissipated when one bit is erased). There is no guarantee that you can actually *do* a computation for that cost.

The kTln2 Landauer limit is a simple thermodynamic limit: in order to compute you need to be able to discharge heat somewhere, and the colder your cold reservoir is the more efficient you can be. This is no different from the classical calculation of the efficiency of a heat engine. But to do useful work you not only need a cold reservoir to discharge heat into, you also need a hot reservoir from which to extract energy. You need both the air conditioner *and* the data center. If the entire universe is the same temperature you can't compute no matter how cold it is.
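For concreteness, the Landauer bound itself is a one-line calculation (room temperature chosen purely for illustration):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant in J/K (exact in the 2019 SI)
T = 300.0            # cold-reservoir temperature in kelvin

# Minimum heat that must be dissipated to erase one bit
E_min = k_B * T * math.log(2)   # about 2.9e-21 joules at 300 K
```

Note that E_min scales with T: the bound approaches zero only as the cold reservoir approaches absolute zero, and, as the comment says, it is a floor, not a price you can actually pay.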

> So I think the essence of the above claim is still true, even if we have to tweak somewhat the exact physics statements.

Well, I'm glad we can agree about that!

@Publius:

> Consider alpha decay. When we observe an alpha particle, we know that it tunneled through the coulomb barrier of its host nucleus. Is this not a description of the "cause" of that alpha particle?

That just raises the question: what caused the alpha particle to tunnel?

Go back to the case of a light bulb turning on: the bulb beginning to emit light was caused by electric current beginning to flow through it. That in turn was caused by the switch closing. That in turn was caused by my hand exerting a force on the switch. That in turn was caused by my brain sending electric impulses to my muscles. That in turn was caused by some computational process in my brain that we don't yet fully understand, but if we understood it we could trace the causal chain back to some input into my brain through my sensory systems, and back out into the world. In a purely classical world we could trace that causal chain back to the beginning of time.

In the quantum world, the causal chain stops immediately before the decay process begins. This is not mere speculation or an expression of ignorance or technological limitations. In QM the causal chain of a quantum event stops at that event. That's why radioactive decay is used as the initiating event in the Schroedinger's cat thought experiment.

> That would be prediction, not causation.

Causation is a prerequisite for prediction. If you can reliably predict, then there must be a cause which serves as the basis for prediction. The absence of the ability to predict does not necessarily imply the absence of causation; it could be a technological limitation, and it is often hard to know whether the inability to predict is a technological limitation or an indication of the absence of a cause. It might be the case that with powerful enough measuring devices and computers you *could* predict the Powerball outcomes. You actually *can* predict, e.g., roulette:

https://phys.org/news/2012-10-chaos-theory-outcome-roulette-table.html

For a long time it was controversial whether the inability to predict radioactive decay was due to the absence of a cause or some other factor. But thanks to Bell's theorem we know that it is the former. I keep telling you this same thing again and again, and you keep ignoring it.

Publius said...

Not Getting It


@Ron
>For a long time it was controversial whether the inability to predict radioactive decay was due to the absence of a cause or some other factor. But thanks to Bell's theorem we know that it is the former. I keep telling you this same thing again and again, and you keep ignoring it.

>In the quantum world, the causal chain stops immediately before the decay process begins. This is not mere speculation or an expression of ignorance or technological limitations. In QM the causal chain of a quantum event stops at that event. That's why radioactive decay is used as the initiating event in the Schroedinger's cat thought experiment.

From the main post (part 2):
> But quantum mechanics provides us with at least two examples of things that begin to exist without causes: radioactive decay results in the decay products beginning to exist, and vacuum fluctuations result in the virtual particles beginning to exist. In the latter case, the particles are created literally from nothing, but that's just a detail, a little icing on the cosmological cake. The KCA premise isn't about whether or not things that begin to exist are fashioned from previously existing materials. It only speaks of causes. And quantum events don't have causes — at least not local causes — as shown by Bell's theorem.

I'm not getting it.
I've copied above where you've asserted 3 times that quantum events don't have a cause.

For each example - radioactive decay and virtual particles - it appears to me that physicists have come up with models that do provide a cause. Alpha decay has models with the nuclear force, electromagnetic forces, the Coulomb barrier, etc., that appear to me to provide the cause.

You seem to key in on the random occurrence in time -- such as why did the decay happen at time t and not time t+1, and in fact, if we record the times of decays, tn, we see a random distribution.

Yet time isn't the event. Time is the marker of the event - the value by which we order the events.

In addition, if one creates statistical models, with particles as random variables, then it should not be surprising that one gets randomness out of the model. To quote Werner Heisenberg: "The incomplete knowledge of a system must be an essential part of every formulation in quantum theory. Quantum theoretical laws must be of a statistical kind. To give an example: we know that the radium atom emits alpha-radiation. Quantum theory can give us an indication of the probability that the alpha-particle will leave the nucleus in unit time, but it cannot predict at what precise point in time the emission will occur, for this is uncertain in principle."

Finally, there are other formulations of quantum mechanics - such as de Broglie-Bohm theory (pilot waves), which are deterministic. See Quantum mechanics, randomness, and deterministic reality. Pilot wave theory also has a sensible resolution to the two-slit experiment and tunnelling. [note]

Ron said...

> You seem to key in on the random occurrence in time -- such as why did the decay happen at time t and not time t+1, and in fact, if we record the times of decays, tn, we see a random distribution.

Yes, that is exactly right.

> Yet time isn't the event. Time is the marker of the event - the value by which we order the events.

Yes, that is also right. So?

> Quantum theoretical laws must be of a statistical kind.

Yes. That is exactly the same thing as saying that individual quantum events do not have causes. (Note BTW that when Heisenberg said this it was a controversial statement. Einstein, for example, didn't believe it to be true, insisting that this was an indication of QM being an incomplete theory. The experimental vindication of Bell's theorem shows that Einstein was wrong and Heisenberg was right.)

> Finally, there are other formulations of quantum mechanics - such as de Broglie Bohm theory (pilot waves), which are deterministic.

Yes, but they are non-local. Non-local theories only get you to the cosmic TM, not to the uncaused cause.

Ron said...

Oh, one more thing:

> Quantum theoretical laws must be of a statistical kind.

The operative word there is "must". There are many things that *can* be modeled statistically (weather, the stock market) but nonetheless (almost certainly) have causes. Our inability to make exact predictions of the weather and the stock market is not a fundamental physical limitation, but rather a consequence of our limited ability to collect enough data and do the math.

That is not the case in QM. Our inability to predict individual quantum events really is a fundamental limitation of physics. No amount of mathematical cleverness or technological advance can ever get around it.
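The "statistical kind" of law Heisenberg described can be illustrated with a simulation (an illustrative sketch, assuming the standard exponential-decay model): decay is memoryless, so a nucleus that has survived for a while is no more "ready" to decay than a fresh one. There is no internal clock, no local state, for a cause to act on.

```python
import math
import random

rng = random.Random(7)
lam = math.log(2)   # decay constant for a half-life of 1 (arbitrary units)

# Simulated decay times for a large ensemble of identical nuclei
times = [rng.expovariate(lam) for _ in range(500_000)]

# Memorylessness: among nuclei that survived past time s, the fraction that
# survives a further t equals the unconditional fraction surviving past t.
s, t = 1.5, 1.0
survivors = [x for x in times if x > s]
frac_conditional = sum(x > s + t for x in survivors) / len(survivors)
frac_unconditional = sum(x > t for x in times) / len(times)
```

Both fractions come out near 0.5 (t here is one half-life): the history of a nucleus tells you nothing about when it will decay.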

Luke said...

> Publius: What about the other sciences? Could archaeology prove the existence of God? How about sociology? Psychology? Biochemistry? Geology? Linguistics?

> Ron: All of these things are subservient to physics because they study the behavior of things that are made of atoms, and atoms behave according to physics.

I've thought long and hard about this, and I don't understand why you are so confident that this is the case. There is a terrific amount of reality that is unexplained, and yet you are exceedingly confident about causal reductionism—that no causal powers can exist at higher levels than the most fundamental, and be even slightly independent from the causes at the most fundamental level. Sean Carroll advocates this position in Downward Causation (the article title is the name of a position he rejects).

There is a very simple physical system in existence which can help us doubt causal reductionism: unstable Lagrangian points. They are places where the forces of gravity cancel, such that a spacecraft passing through them in the right way can develop radically different trajectories with an infinitesimal force. I mean that in the precise mathematical sense: not just tiny, but infinitesimal. This idea isn't academic; the Interplanetary Transport Network is a system for getting space vehicles to arbitrary points in the solar system with very little fuel expended.

Now, what happens when other systems pass through the equivalent of unstable Lagrangian points, when more forces cancel? How about all of the forces known to current physics? Either this never happens, or it does. Such systems would be sensitive to other kinds of forces (including infinitesimal ones)—if they exist. Now, where might we find such a system? Natural language, of course. Noam Chomsky explains:

>> Specifically, Descartes speculated that the workings of res cogitans—second substance—may be beyond human understanding. So he thought, quoting him again, "We may not have intelligence enough to understand the workings of mind." In particular, the normal use of language, one of his main concepts. He recognized that the normal use of language has what has come to be called a creative aspect; every human being but no beast or machine has this capacity to use language in ways that are appropriate to situations but not caused by them—this is a crucial difference. And to formulate and express thoughts that may be entirely new and do so without bound, may be incited or inclined to speak in certain ways by internal and external circumstances, but not compelled to do so. That's the way his followers put the matter—which was a mystery to Descartes and remains a mystery to us. That quite clearly is a fact. (Noam Chomsky - "The machine, the ghost, and the limits of understanding", 9:58)

So, is your confidence warranted, Ron?

Ron said...

> Ron: All of these things are subservient to physics because they study the behavior of things that are made of atoms, and atoms behave according to physics.

> Luke: I've thought long and hard about this, and I don't understand why you are so confident that this is the case.

Quite simply: evidence. (Could you not have predicted that answer?) All over the world physicists are working feverishly to devise an experiment that will show an atom not behaving in accordance with the currently known laws of physics. They are doing this because such an experiment would be the most significant progress in physics in decades. So far they have failed spectacularly.

Now, of course this does not mean that they will continue to fail forever. In fact, I'm pretty confident that sooner or later they will succeed. I believe that GR and QM can and will be unified, and when that happens I believe that the result will be surprising in much the same way that GR and QM themselves were surprising in their days. Of course, I have no way of knowing what the nature of that surprise will be. If I did, it wouldn't be a surprise.

So why am I so confident that Jesus won't be part of the surprise? Because the fact that it will be a surprise in and of itself falsifies the Christian theory. If God wanted me to believe in Him it would be very simple for Him to arrange it: all He would have to do is show me the evidence. The fact that He has chosen not to reveal Himself to me shows that he is either not omniscient, not omnipotent, or not loving (or, what seems to me to be overwhelmingly more likely, not existent).

There might be a lower-case-g god lurking at the end of the long and winding road to physical truth. But if there is he will be more like Loki than Jesus. (Actually, my Bayesian prior on the universe being a simulation built by intelligent aliens is not zero.)

> Descartes speculated

No one who wrote before 1936 can be taken seriously on this matter. The question is not a metaphysical one any more since Turing discovered universal computation. Either our brains are Turing machines, or they are not. If they are not, then there should be *evidence* that they are not, and that too would be a major breakthrough.

Some people have advanced arguments that our brains are not TMs but I don't find any of them convincing. If you want to raise one of those arguments I'd be happy to explain why. But anyone who talks about this without mentioning Turing does not deserve to be taken seriously. Chomsky, of all people, really ought to know better. (The quote you cite is from an address he gave at the Vatican, so maybe he thought he needed to dumb it down for the pope?)

> Lagrangian points

This is a red herring. Lagrange points are not counterexamples to the laws of physics, they are *consequences* of (and hence completely compatible with) the laws of physics.
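In fact you can watch one fall out of the laws directly. Here is a back-of-the-envelope sketch in Python (rounded constants, barycenter offset ignored, so treat it as an approximation) that recovers the Sun-Earth L1 point from nothing but Newtonian force balance:

```python
# Finding Sun-Earth L1 by pure force balance in the co-rotating frame.
# Constants are rounded, and the barycenter offset (~450 km) is ignored.
G  = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
Ms = 1.989e30     # mass of the Sun, kg
Me = 5.972e24     # mass of the Earth, kg
R  = 1.496e11     # Sun-Earth distance, m
w2 = G * (Ms + Me) / R**3   # orbital angular velocity squared (Kepler)

def net_accel(d):
    """Net sunward acceleration at a point d meters sunward of Earth.
    It is zero exactly where the Sun's pull, Earth's pull, and the
    centrifugal term balance: the L1 point."""
    r = R - d                       # distance from the Sun
    return G * Ms / r**2 - G * Me / d**2 - w2 * r

# Bisect for the root between "very close to Earth" and "far sunward":
lo, hi = 1e8, 1e10
for _ in range(100):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if net_accel(mid) < 0 else (lo, mid)

# mid lands near 1.5e9 m, i.e. about 1.5 million km sunward of Earth,
# which is where L1 spacecraft like SOHO actually sit.
```

Nothing here but Newtonian gravity and a rotating reference frame, and out pops the equilibrium point.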

The laws of physics are wondrous and lead to all manner of surprising places. They lead to airplanes and iPhones and antibiotics and the internet and surely many more wonders than are dreamt of in my philosophy. But AFAICT they do not lead to Jesus.

Luke said...

@Ron 1/3

> Quite simply: evidence. (Could you not have predicted that answer?) All over the world physicists are working feverishly to devise an experiment that will show an atom not behaving in accordance with the currently known laws of physics. They are doing this because such an experiment would be the most significant progress in physics in decades. So far they have failed spectacularly.

In saying this, you've presupposed that all causation is on the atomic scale and observable at the atomic scale. But there's a lot of possible order which we know we cannot observe without violating Heisenberg's unsharpness relation. Do you have a formal proof that any and all such possible order would necessarily show up in the domain you've just described? And do you have a formal proof that the first place such possible order would show up—given all the different unsolved problems humans face and all the current ways we have of investigating reality—is the atomic domain?

> So why am I so confident that Jesus won't be part of the surprise? Because the fact that it will be a surprise in and of itself falsifies the Christian theory. If God wanted me to believe in Him it would be very simple for Him to arrange it: all He would have to do is show me the evidence. The fact that He has chosen not to reveal Himself to me shows that he is either not omniscient, not omnipotent, or not loving (or, what seems to me to be overwhelmingly more likely, not existent).

Christian theory explicitly predicts that those who think they're super smart and super wise will indeed be fooled:

>> For the word of the cross is folly to those who are perishing, but to us who are being saved it is the power of God. For it is written,
>>     “I will destroy the wisdom of the wise,
>>         and the discernment of the discerning I will thwart.”
>> Where is the one who is wise? Where is the scribe? Where is the debater of this age? Has not God made foolish the wisdom of the world? For since, in the wisdom of God, the world did not know God through wisdom, it pleased God through the folly of what we preach to save those who believe. (1 Corinthians 1:18–21)

One way I believe this happens is by the denial of any ontological status to agent causation. The instant you embrace causal reductionism—to impersonal (value-free) causation—you cannot have true agents. You have simulacra of agents. Of course, what really happens is that people deploy two entirely different rationalities—one which treats agent causation as ontologically real, while the other treats it as "just an approximation". The problem is that in theory-land, the latter rationality is getting much more attention, which means that the former is atrophying. Want evidence? Look at how much at a loss you and others are about what to do with this world full of agents who won't act according to how you define "rationality". Maybe there's nothing you can do. But maybe your "rationality" is impoverished.

Luke said...

@Ron 2/3

> No one who wrote before 1936 can be taken seriously on this matter. The question is not a metaphysical one any more since Turing discovered universal computation. Either our brains are Turing machines, or they are not. If they are not, then there should be *evidence* that they are not, and that too would be a major breakthrough.

Noam Chomsky gave that lecture well after 1936, and he is rather acquainted with Turing machines. What he was saying is that the problem Descartes saw still exists. This should not be surprising, as there are great similarities between the mechanical philosophy—which Descartes and Newton helped bring into existence—and Turing computation. Do you think the problem does not exist?

As to your confidence that thought is implemented by something which is at most Turing-powerful, I've thought about that a bit as well. It seems to me that confidence in it ought to be aided by its successes in explanation, and diminished by its failures. So for example, we could examine Hubert Dreyfus's views on artificial intelligence:

>> Dreyfus argued that human intelligence and expertise depend primarily on unconscious instincts rather than conscious symbolic manipulation, and that these unconscious skills could never be captured in formal rules. His critique was based on the insights of modern continental philosophers such as Merleau-Ponty and Heidegger, and was directed at the first wave of AI research which used high level formal symbols to represent reality and tried to reduce intelligence to symbol manipulation.

You probably know more about this than I do; do you agree with the Wiki article that Dreyfus was correct in criticizing GOFAI? I don't want to claim that the only way that the mind could be implemented via Turing computation is via GOFAI, but it seems like this ought to be a strike against the idea that we can have high confidence that thought is implemented by something at most Turing-powerful.

Recall that explanations ought not merely make us content with thinking we understand reality; they should be fruitful and lead to deeper understanding. It is not at all clear to me that modeling all human cognition after Turing computation is being thusly fruitful. Am I incorrect—are we making major advances? If not, then I question how you can be so confident. BTW, I talked to a Stanford psychologist-in-training a few years ago and asked him whether the "computer model of the mind" was still going strong in academic psychology. He said, "Thankfully not!"


P.S. It's not clear to me that the halting problem can be stated within the formalism of Turing computation. Can a Turing machine fully define a question which it cannot then answer? Or is the full definition—for a Turing machine—equivalent to an algorithm which properly executes?

Luke said...

@Ron 3/3

> This is a red herring. Lagrange points are not counterexamples to the laws of physics, they are *consequences* of (and hence completely compatible with) the laws of physics.

Ron, did I state or entail that Lagrangian points are counterexamples to the laws of physics? If you look at precisely what I said, you will see that I did no such thing. Instead, what I did was show that there are easily imaginable physical configurations where the future trajectory is not predictable by the present state + the present [fundamental!] laws of physics. Do you disagree that I in fact showed this? You are of course welcome to argue that in such situations, there exists no infinitesimal force and it's just pure randomness which "selects" which trajectory is taken. I would then ask how we could try and set things up so that patterns could possibly emerge from such situations. Maybe there would be no patterns, but maybe there would.

By the way, I claim I'm following in the footsteps of Galileo, here:

>>     In his De motu, an earlier treatise that he wrote while he was still a young professor of mathematics in Pisa, Galileo recognized two factors in the free fall of bodies—gravity and impetus.[43] His explanation resembles that of Hipparchus in Antiquity: when a body is thrown upwards, it overcomes gravity by virtue of the force (impetus) given to it by whomever or whatever projected it. This force wears out gradually; when it equals the force of gravity the body changes its course, and even when it falls, it has still enough of the initial impetus left in it to slow down its motion downwards, which would otherwise be faster. The more the impetus continues to wear out, the more the body accelerates its downwards motion. If the impetus were to wear out before the body hits the ground—but it does not—the body would continue in its fall with uniform velocity. Hipparchus did not consider that case, which Galileo admits to be an imaginary condition. Galileo also ascribed, in a hydrostatic analogy, a capacity of bodies to receive impetus that is proportional to their specific gravity; he already insisted that bodies of different weight but of the same specific gravity fall with equal velocities. (Theology and the Scientific Imagination from the Middle Ages to the Seventeenth Century, 175)

Galileo admitted that the condition he later brought into existence with experiment was originally imaginary—located only within the imagination. His discovery was that if you cancel two competing forces—gravity and the normal force—you can still get motion. It was the impetus force vanishing which led to a crucial insight.

Now, why do you not wish to consider whether something interesting could happen at unstable Lagrangian points, which is not just "randomness"? I never said anything would happen that is incompatible with the laws of physics; what I suggested is that the laws of physics might not be able to describe all of the resultant order. And yet, that seems to be precisely your stance—that any order which exists can ultimately be traced back to the fundamental laws of nature churning on some big state vector, perhaps with some noise added.

> The laws of physics are wondrous and lead to all manner of surprising places. They lead to airplanes and iPhones and antibiotics and the internet and surely many more wonders than are dreamt of in my philosophy. But AFAICT they do not lead to Jesus.

Perhaps what I wrote plus the context of this article made you think I would advance such a position? On the contrary, I think that the laws of physics do not explain all the order that is, and in particular, precisely exclude agent causation.

Ron said...

> You've presupposed that all causation is on the atomic scale

No, I have not presupposed it. There is overwhelming evidence that this is true. Physics leads to chemistry. Chemistry leads to biology. Biology leads to brains. Brains lead to all manner of interesting complexity. But nowhere do I see any evidence that there is anything going on that cannot be accounted for by the known laws of physics, notwithstanding that science has not yet filled in every last detail.

> The instant you embrace causal reductionism—to impersonal (value-free) causation—you cannot have true agents.

That depends on what you mean by "true agents." Our existence as classical physical objects is already an approximation to the truth, but a tremendously useful one (in fact, a *necessary* one). Agency is likewise an approximation to the truth, but one that is sufficiently useful to warrant suspension of disbelief. I don't see the problem.

> Noam Chomsky gave that lecture well after 1936, and he is rather acquainted with Turing machines.

Yes, I know. Why he neglected to mention them is a mystery. He's still alive, we could ask him.

> What he was saying is that the problem Descartes saw still exists.

I know that's what he was saying. I'm saying he's wrong.

> Dreyfus

Dreyfus is wrong too. Yes, symbolic AI has been a dismal failure, but that was just our first crack at it. It took many false starts before we figured out how to fly, and yet we eventually succeeded.

We've only been seriously tackling the AI problem for about 50 years. That's not a long time, and we are making progress. It took 120 years to get from the Montgolfier brothers to the Wright brothers, and AI is a much harder problem than flight. In fact, if you think about it, AI is almost by definition the last technological problem humans will ever have to solve, so it's not too surprising that it's taking a bit longer than initially thought.

So talk to me again in 100 years. If we haven't cracked it by then I'll rethink my position. ;-)

> P.S. It's not clear to me that the halting problem can be stated within the formalism of Turing computation. Can a Turing machine fully define a question which it cannot then answer?

You're confusing two different issues. A TM cannot solve the halting problem, but a TM can prove that the halting problem cannot be solved. (Obviously, because if this were not the case then Alan Turing's proof of the unsolvability of the halting problem would itself be proof that Alan Turing could do something that a TM could not!)

If you could somehow show that a human can solve the halting problem (again, not to be confused with proving that the halting problem can't be solved) that would be proof that humans are not TMs. But think about this: what would such a demonstration even look like? Remember, it's not enough to show that a human can solve particular instances of the halting problem. Computers can do that too. You'd have to show that humans (or some human) can do it *in general*.
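To make this concrete, here's a minimal Python sketch of Turing's diagonalization argument (the `halts` oracle is hypothetical by design; the whole point of the argument is that no real one can exist, and I've simplified by ignoring the program's input):

```python
# Sketch of Turing's diagonalization argument. Suppose, for
# contradiction, someone hands us a general halting oracle
#     halts(program_source) -> bool
# claimed to return True iff running `program_source` eventually halts.
# Then we could build the following program, which defeats the oracle:

def make_contrarian(halts):
    """Given a claimed halting oracle, build a function that does the
    opposite of whatever the oracle predicts."""
    def contrarian(source):
        if halts(source):
            while True:       # oracle said "halts", so loop forever
                pass
        return None           # oracle said "loops", so halt immediately
    return contrarian

# Feeding `contrarian` its own source forces a contradiction either
# way, so no general `halts` can exist. Note that this *proof* is a
# finite, mechanical argument: exactly the sort of thing a TM can do.
```

The proof is mechanical; it's the oracle that's impossible.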

> Ron, did I state or entail that Lagrangian points are counterexamples to the laws of physics?

You raised it as an example in an argument whose thesis was:

"There is a very simple physical system in existence which can help us doubt causal reductionism"

What does it mean to "doubt causal reductionism" if not that the whole exhibits behavior which cannot be accounted for by the laws that govern its parts?

> what I suggested is that the laws of physics might not be able to describe all of the resultant order.

Yes, I understand that.

> I think that the laws of physics do not explain all the order that is,

Yes, I understand that too.

> and in particular, precisely exclude agent causation.

Then why bring up Lagrange points? Do you think Lagrange points have agency?

Luke said...

It is going to take me some time to get used to your extrapolation out to 100 years in the future with current ways of thinking. Perhaps we could tease out just how this works in person at some point. For example: whenever some fundamental equation of physics is tested empirically, is that equation tested directly or are the computational costs too high to use anything other than an approximation? When it is the latter, it seems that the entity tested is not that fundamental equation, but all equations which would yield that approximation. Furthermore: "If all you have is a hammer, everything looks like a nail."

As to the Noam Chomsky thing, let's formulate a question for him. Care to take a whack at it? His stance seems to be something like, "We have no idea how humans use natural language in its creative aspect." It seems to me that if you're going to use Turing machines to help explain here, you ought to be able to say something about this question by using the TM formalism.

As to the Halting problem, I'm saying that a TM does not seem to be able to define the concept of "eventually halts", while we humans can. The reason is that "eventually halts" involves the concept of infinity, which TMs cannot (to my knowledge) represent. Note that I'm separating out definition from computation; we humans seem to be able to do this, but can TMs?

> What does it mean to "doubt causal reductionism" if not that the whole exhibits behavior which cannot be accounted for by the laws that govern its parts?

If the current laws of physics do not describe all resultant order, then to suggest that there is something in that order they do not explain is not to find a counterexample to them. I would not be calling the laws of physics wrong, merely incomplete. I suppose the one way this would not be true is if the only way to get more order is to screw with predicted probability distributions, but (i) I'm not sure that is the case; (ii) maybe those probability distributions don't hold in domains radically different from where they've been observed.

> Then why bring up Lagrange points? Do you think Lagrange points have agency?

No, I suspect that Lagrangian points may be required to permit agency to exist. Did you not see the connection between my bit on unstable Lagrangian points and the creative use of natural language, which is conditioned by circumstance but not determined by circumstance?

Ron said...

> For example: whenever some fundamental equation of physics is tested empirically, is that equation tested directly or are the computational costs too high to use anything other than an approximation?

The predictions are generally numerical approximations, but the approximations are generally better than the error bars on the physical measurements. Why do you think that matters?

> As to the Noam Chomsky thing, let's formulate a question for him. Care to take a whack at it?

Sure: Dear Mr. Chomsky, in your address to the Vatican in January 2014 you focus your attention on Descartes, Locke, Hume, etc. but make no mention of Turing. Why? You of all people cannot possibly be unaware of universal computation and the Church-Turing thesis, and it strains credulity even to imagine that you think these developments have so little bearing on the question of human language processing as to not even be worth mentioning. So why didn't you mention them? (And as long as I'm asking, what is your position on the Church-Turing thesis? Do you believe that human brains are Turing machines, and if not, what is the evidence that they are not?)

> a TM does not seem to be able to define the concept of "eventually halts" [because] "eventually halts" involves the concept of infinity

No, it doesn't. A TM has a finite number of internal states (not to be confused with the state of its tape), some of which are designated as halting states. Halting means reaching one of those finite number of designated states in a finite amount of time. No infinities required.
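Here's a toy illustration in Python of exactly what I mean (purely illustrative; the step cutoff is a practical simulation guard, not part of the definition): halting just means the control reaches one of the designated states.

```python
def run_tm(transitions, halting_states, tape, state="start", max_steps=10_000):
    """Simulate a Turing machine. `transitions` maps (state, symbol) to
    (new_state, new_symbol, move). Halting means reaching a state in
    `halting_states` after finitely many steps -- no infinities needed.
    `max_steps` is only a cutoff so this toy simulation always returns."""
    tape = dict(enumerate(tape))        # sparse tape; blank cells are " "
    head = 0
    for _ in range(max_steps):
        if state in halting_states:     # the finite halting condition
            return state, tape
        symbol = tape.get(head, " ")
        state, tape[head], move = transitions[(state, symbol)]
        head += 1 if move == "R" else -1
    raise RuntimeError("no halting state reached within max_steps")

# A toy machine that flips bits left-to-right and halts at the first blank:
flipper = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", " "): ("done", " ", "R"),
}
state, tape = run_tm(flipper, {"done"}, "0110")
```

"Does this halt?" is a question about whether that finite condition is ever met, and stating it requires no appeal to infinity.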

> we humans seem to be able to do this, but can TMs?

Yes, of course they can.

https://www.wolframalpha.com/input/?i=infinity+plus+one

> I would not be calling the laws of physics wrong, merely incomplete.

OK, but "we don't know how to compute it" is not evidence for the incompleteness of the known laws of physics. It could just be a result of our temporary inability to do and/or understand the math. Distinguishing between these two situations is very hard in general. Look at the original EPR paper. It took decades to sort that out (and the process of convincing people that it *has* been sorted out is an ongoing battle!)

> I suspect that Lagrangian points may be required to permit agency to exist. Did you not see the connection between my bit on unstable Lagrangian points and the creative use of natural language, which is conditioned by circumstance but not determined by circumstance?

I did. How is this any different from standard chaos theory?
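For the record, here is the kind of thing I mean by standard chaos theory, sketched in Python with the logistic map: trajectories conditioned by their circumstances but not practically predictable from them.

```python
# Sensitive dependence on initial conditions in the logistic map at
# r = 4 (the fully chaotic regime): perturb the 12th decimal place and
# the two trajectories soon bear no resemblance to each other.
def logistic_orbit(x0, steps, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.2, 50)
b = logistic_orbit(0.2 + 1e-12, 50)   # nearly identical start

# The gap grows roughly like 2**n (Lyapunov exponent ln 2), so within
# a few dozen steps the initial 1e-12 difference is no longer small.
```

Perfectly lawful, perfectly deterministic, and yet the outcome is unpredictable from any measurement of finite precision.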

Luke said...

> The predictions are generally numerical approximations, but the approximations are generally better than the error bars on the physical measurements. Why do you think that matters?

Are you aware of the incredible computational cost of actually modeling systems using the fundamental equations themselves?

> Sure: Dear Mr. Chomsky, in your address to the Vatican in January 2014 you focus your attention on Descartes, Locke, Hume, etc. but make no mention of Turing. Why? You of all people cannot possibly be unaware of universal computation and the Church-Turing thesis, and it strains credulity even to imagine that you think these developments have so little bearing on the question of human language processing as to not even be worth mentioning. So why didn't you mention them? (And as long as I'm asking, what is your position on the Church-Turing thesis? Do you believe that human brains are Turing machines, and if not, what is the evidence that they are not?)

Ok; how are you integrating the following into the question:

>> He recognized that the normal use of language has what has come to be called a creative aspect; every human being but no beast or machine has this capacity to use language in ways that are appropriate to situations but not caused by them—this is a crucial difference. And to formulate and express thoughts that may be entirely new and do so without bound, may be incited or inclined to speak in certain ways by internal and external circumstances, but not compelled to do so. That's the way his followers put the matter—which was a mystery to Descartes and remains a mystery to us. That quite clearly is a fact. (Noam Chomsky - "The machine, the ghost, and the limits of understanding", ~9:58)

Are you essentially denying that natural language in fact functions this way? Or that anything und[erd]etermined by QFT (or computation) is 100% noise?

> > a TM does not seem to be able to define the concept of "eventually halts" [because] "eventually halts" involves the concept of infinity
>
> No, it doesn't. A TM has a finite number of internal states (not to be confused with the state of its tape), some of which are designated as halting states. Halting means reaching one of those finite number of designated states in a finite amount of time. No infinities required.

Ok; want to take a hack at how you would specify "Does this halt in finite time?" within the TM formalism? I myself have no idea how I would do it. Key here is "within the TM formalism". As far as I can understand, to state the question within a TM formalism is to provide something which can compute the answer. Is this incorrect?

> OK, but "we don't know how to compute it" is not evidence for the incompleteness of the known laws of physics. It could just be a result of our temporary inability to do and/or understand the math. Distinguishing between these two situations is very hard in general. Look at the original EPR paper. It took decades to sort that out (and the process of convincing people that it *has* been sorted out is an ongoing battle!)

I do not understand how this applies to unstable Lagrangian points, where by definition the forces cancel.

> I did. How is this any different from standard chaos theory?

My guess is that standard chaos theory does not have a concept of infinitesimal forces which act at unstable Lagrangian points. You may want to grapple more with just what Galileo did when he [re]discovered inertia. He asked if you could get motion with all the [known] forces being neutralized. I'm asking whether you can get infinitesimal forces with all the [known] forces being neutralized. When he first asked his question, it was entirely in the imagination. He had to very carefully construct systems which would show it to happen—if indeed it would happen. Without constructing those systems, the idea would have stayed imaginary.

Publius said...

@Luke
Creative language -- this seems related to the older philosophical question, "where do ideas come from?"

Luke said...

@Publius

Yes.

I've been reading A Study of Hebrew Thought and Claude Tresmontant asks in the beginning, "How can anything be added to what is?" The Greeks thought that newness was necessarily fragmentation and decay. The goal was to return to the One, to the divine thoughts of the One. In contrast, the Hebrews saw the many and matter to be "very good". (They also believed that God was one.) I suspect that the stance one takes here makes a big difference for a great many things. I suspect that a fear of newness can be deeply institutionalized in a culture with many self-reinforcing mechanisms.