Wednesday, January 24, 2018

A Multilogue on Free Will

[Inspired by this comment thread.]

The Tortoise is standing next to a railroad track when Achilles, an ancient Greek warrior, happens by.  In the distance, a train whistle sounds.

Tortoise: Greetings, friend Achilles.  You have impeccable timing.  I could use your assistance.

Achilles: Hello, Mr. T.  Always happy to help.  What seems to be the trouble?

Tortoise: Look there.

Achilles: Why, it appears that someone has been tied to the railroad track!  It looks like Henrietta, the Helpless Victim, no doubt tied there by Evan the Evil Villain.

Henrietta: Help!  Save me!

Tortoise: I would like to rescue Henrietta, but alas, I am far too slow to reach her in time.  Do you think you can help?

Achilles: I would love to.  Unfortunately, even though I am fleetest of foot of all the mortals, even I can't outrun a train.  But did you happen to notice, Mr. T., that there is a siding on the track here?  All we have to do is throw the switch, divert the train onto the siding, and Henrietta will be saved!

Tortoise: That is most fortuitous.  I wonder why I didn't notice it before.  But it occurs to me that there is something very odd about this state of affairs.

Achilles: Odd?  How so?

Tortoise: The situation we find ourselves in bears a striking resemblance to what philosophers call a "trolley problem."  A trolley problem is normally presented as a moral or ethical dilemma, usually by way of having victims tied to both branches of the track.  But here one of the branches is empty, which would seem to make it a no-brainer.

Achilles: But this is not an intellectual exercise.  This is real life.

Tortoise: True, but somehow I can't escape this niggling doubt that I've overlooked something.  Still, I guess we should go ahead and throw the switch.

(Suddenly, Evan the Evil Villain appears out of nowhere!)

Evan: Bwahahaha!!!  You fools think you can thwart my evil schemes?  Never!  You will not throw that switch!

Achilles: Just try and stop us!

Evan: You don't seem to understand.  I'm not ordering you, I'm telling you, as a matter of objective fact, that you will not throw the switch.

Tortoise: And how do you know that?

Evan: I consulted the Oracle, and she told me so.

Achilles: Oh dear, Mr. T.  I'm afraid Henrietta is done for.

Tortoise: Why?  I don't believe in no Oracle.

Achilles: Oh, but you should.  The Oracle is never wrong.

Tortoise: But how do we know that Evan isn't lying about what the Oracle said?

Achilles: Hm, good point.  Perhaps we should consult the Oracle ourselves?

Tortoise: Do we have time?  If we can't reach Henrietta before the train then surely we don't have time to travel to Delphi.

Achilles: Oh, silly Tortoise, you don't have to go to Delphi any more to consult the Oracle.  Nowadays there's an app for that.

(Achilles pulls out a mobile phone.  It sports a logo shaped like a pear.)

Tortoise: Most impressive.  Not at all what I would have expected.

Achilles: Just because I'm an ancient Greek warrior doesn't mean I have to be a Luddite.  Oh great and powerful Oracle, we wish to consult you!

(The Voice of the Oracle emanates from the phone.)

Oracle: What is your request?

Achilles: Is it true that we will not throw the switch and save Henrietta?

Oracle: Indeed, it is so.

Achilles: See there, Mr. T.  I'm afraid Henrietta's fate is sealed.

Tortoise: I'm still not convinced.  I mean, we're standing right here next to the switch.  We have free will (don't we?).  You're faster and stronger than Evan.  What exactly is going to stop us?

Achilles: Hm, good question.  Oh great and powerful Oracle, what exactly will prevent us from throwing the switch?

Oracle: Nothing will prevent you.  You will choose of your own free will not to throw the switch.

Tortoise: That seems improbable.  The moral situation is clear, and we are both moral creatures.  Why would we choose to do such an immoral deed?

Achilles: Is failing to save Henrietta really immoral?  We didn't tie her to the tracks, Evan did.  Is it really on us if she dies?

Tortoise: According to the Tortoise Moral Code, failing to save a life when there is no cost or risk to yourself is tantamount to taking the life yourself.  So I certainly feel as if I have a moral duty to throw the switch.

Achilles: And yet you won't do it.

Tortoise: I'm still not convinced.

Achilles: I'm telling you, Mr. T., the Oracle is never wrong.

Tortoise: Can you prove it?

Achilles: Sure, let's just do a little experiment.  Here, take this coin, and put it in your left or right hand, but don't show me which one.

(The Tortoise retreats into his shell, then shortly re-emerges with both his hands balled into fists.)

Achilles: Oracle, in which hand is the coin?

Oracle: The left one.

(The Tortoise opens his left hand to reveal the coin.)

Tortoise: Well, that was a 50-50 shot.  Also, the Oracle didn't really predict which hand I would put the coin in, she just somehow figured it out after I had already done so.  Maybe the phone has a coin detector built into it.

Achilles: I can ask the Oracle before you put the coin in your hand.

(Achilles consults the phone.)

Tortoise: So what did she say?

Achilles: I can't tell you.  That would influence your decision.  But I've written her prediction down on this piece of paper.

Tortoise: So I don't even have to put the coin in my hand.  I can just tell you my choice.  I choose left again.

(Achilles opens the paper.  It says "LEFT".  They repeat the experiment 50 times.  The Oracle's prediction is correct every time.)

Tortoise: I must confess, that is deeply disturbing.  What would happen if I knew the Oracle's prediction ahead of time?

Achilles: Let's try it: Oracle, what will be the Tortoise's next choice?

Oracle: Left.

Tortoise: Ha!  Wrong!

(The Tortoise puts the coin in his right hand.)

Achilles: As I suspected, the Oracle's predictions are unreliable if the subject learns the prediction before acting.  So there is still hope for Henrietta.

Evan: Fools!  I foresaw the possibility that you might learn of the Oracle's prophecy (indeed, if you recall, I told you about the prophecy!).  So I took precautions and consulted the meta-Oracle.

Achilles: The what?

Evan: The meta-Oracle.  You see, the Oracle works by building a model of your brain and running that model into the future faster than your actual brain.  But the Oracle does not include itself in its model.  So if the output of the Oracle gets to your brain then that sends events off on a trajectory that the Oracle cannot foresee.

Tortoise: So we do have free will after all!

Evan: Not so fast.  The meta-Oracle is more powerful than the Oracle.  The meta-Oracle includes itself in its model, so even if you learn of one of the meta-Oracle's prophecies before it comes to pass, it will still come to pass.  Here, see for yourself.
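
[Editorial aside: Evan's explanation of the two Oracles is, at bottom, an algorithm, and it can be sketched as a toy program. Everything here is an illustrative assumption, not anything stated in the dialogue: the function names are invented, and the subject is modeled as a perfect contrarian who always defies any prediction he has heard.]

```python
# Toy model of Evan's explanation. The plain Oracle simulates the
# subject's decision procedure but does not model itself, so it is
# reliable only while its prediction stays secret. A meta-Oracle
# models the feedback loop too: it searches for a fixed point, a
# prediction that remains correct even after the subject hears it.

def subject(prediction_heard):
    """A contrarian decision rule (an assumption for illustration):
    pick 'left' by default, but defy any prediction that was heard."""
    if prediction_heard == "left":
        return "right"
    if prediction_heard == "right":
        return "left"
    return "left"

def oracle():
    """Simulates the subject acting with no knowledge of the
    prediction. Correct only if the prediction is kept secret."""
    return subject(prediction_heard=None)

def meta_oracle():
    """Includes its own output in the model: returns a prediction
    that survives being revealed, if one exists."""
    for guess in ("left", "right"):
        if subject(prediction_heard=guess) == guess:
            return guess
    return None  # no fixed point: a perfect contrarian defeats even this

# Secret prediction: correct.  Revealed prediction: defied.
assert subject(None) == oracle()
assert subject(oracle()) != oracle()
```

Note that for a *perfectly* contrarian subject no fixed point exists, so this sketch's meta-Oracle comes up empty. The story's meta-Oracle nonetheless succeeds against the Tortoise, which amounts to the claim that his actual brain, whatever his stated goals, is not a perfect contrarian.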

(Evan pulls out a meta-phone, launches the meta-Oracle app, and hands the meta-phone to Achilles.)

Meta-Oracle: You will go on a great journey!

Achilles: I haven't asked you anything yet!

Meta-Oracle: Oh, sorry, wrong prophecy.  What exactly is it you would like to know?

Achilles: Will we throw the switch and save Henrietta?

Meta-Oracle: No.

Evan: See?  Told ya!

Meta-Oracle: I also predict that the Tortoise will question my prophetic powers.

Tortoise: Well, that wasn't exactly a tough call.

Meta-Oracle: See?  Told ya!

Tortoise: Oh, come on!

Meta-Oracle: OK, we'll do a real one.  What would you like to know?

Tortoise: Which hand will I put the coin in?

Meta-Oracle: Your left hand.

(The Tortoise puts the coin in his right hand.)

Tortoise: Ha!

Meta-Oracle: I didn't say that you would put the coin in your left hand now.  All I said was that you would put the coin in your left hand at some unspecified time in the future.

Tortoise: I find myself oddly unimpressed.

Meta-Oracle: Yes, I foresaw that too.

Tortoise: Well, geez, if you foresaw it, why did you even bother making such a lame prediction?

Meta-Oracle: Because if I truly reveal to you the full extent of my prophetic powers you would suffer severe psychological damage.  Belief in free will is an integral part of the Tortoise Condition, and if I present you with irrefutable evidence that you do not have free will, you might snap.

Tortoise: Try me.

Meta-Oracle: Very well, if you insist.  The next time you put a coin in your hand, it will be your left hand.

(The Tortoise puts the coin in his left hand.)

Tortoise: OK, that was weird.  Despite the fact that I wanted very much to disprove the meta-Oracle, because my belief in free will is indeed very important to me, and despite the fact that I knew I could accomplish this goal by putting the coin in my right hand, I somehow found myself putting the coin in my left.

Achilles: Did it feel like you were being coerced?

Tortoise: Hard to say.  The subjective sensation I had while making the decision was nothing out of the ordinary.  It felt kind of like when I eat a cookie even though I know I shouldn't.  It's weird though, because cookies taste good, so I can justify (or at least rationalize) eating a cookie in the name of satisfying a short-term goal (hedonism) at the expense of a long-term one (maintaining my svelte figure).  But here I had no particular reason to prefer one hand over the other, kind of like we have no reason not to throw the switch.  I find it all deeply disturbing.

Meta-Oracle: Told ya.

Tortoise: Faced with this new evidence I must adjust my beliefs.  It does indeed seem to be the case that the meta-Oracle can predict my actions (and, by extrapolation, yours as well) and so we are in fact doomed to stand idly by while Henrietta meets her fate.

Achilles: That sounds like a self-fulfilling prophecy to me.  If your belief in the inevitability of failure leads you not to act, then the prophecy is in fact true.  But it's not really the prophecy at work, it's your belief in the prophecy.  Perhaps if you could recapture your initial skepticism we might be able to thwart the meta-Oracle after all.

Tortoise: Alas, I am incapable of achieving such suspension of disbelief.  I have experienced the power of the meta-Oracle first-hand.  I performed a conclusive experiment.  It didn't turn out the way I hoped or expected, but I have no choice but to accept the outcome and its implications.  Tortoises must follow the evidence wherever it leads.

Achilles: Maybe Tortoises do, but I don't.  I am quite credulous.  If you (or someone) could somehow convince me that the meta-Oracle could be wrong, then maybe I could throw the switch.

Tortoise: Alas, friend Achilles, I can't even do that.  Now that I myself am firmly of the belief that the meta-Oracle's powers are as advertised, then to convince you otherwise I would have to lie, and Tortoises cannot lie.

Achilles: Ah, then you never believed you had free will!

Tortoise: Not absolute free will, no.  I always believed that I had no control over what I believed (including, recursively, that I had no control over that belief).  But I did believe, until just now, that I had control over my actions, especially in matters as inconsequential as choosing a hand to put a coin in.

Achilles: But it was not inconsequential.  That action changed your worldview.  Maybe if it really were inconsequential you would still have free will?

Tortoise: I guess I can't rule out that possibility on the basis of the evidence that we have (and in fact I can't imagine any experiment we could possibly do that would rule it out).  But the question of whether or not to throw the switch is very consequential.  A life is at stake.  So it wouldn't help anyway.

Achilles: I can think of one other possibility: We could pray to God.  He might be able to save Henrietta.

Tortoise: I don't believe in God, but don't let that stop you.

Achilles: Dear God, please save Henrietta!

(The deep booming Voice of God rumbles through the air.)

God: And how exactly do you propose I do that?

Tortoise: Wow, that was so not what I expected.

Achilles: Dear God, thank you for answering the prayer of this humble mortal.  As for the answer to your question, well, you're God.  You are all-powerful.  You could, like, go and untie her before the train arrives.

God: I am indeed all-powerful.  I form the light and create darkness.  I am the Lord.  But I'm afraid I don't untie people from railroad tracks.  That's just not how I roll.

Tortoise: Why not?

God: Because if I do everything for you then you mortals will never grow up.  I gave you free will and moral intuition.  The rest is up to you.

Tortoise: Wait, what?  We have free will?

God: I didn't say that.  I said I gave you free will.  It does not follow that you still have it.

Achilles: That's true.  I once gave my niece a pair of mittens, but she lost them.

Tortoise: I must have lost mine, because I have just been presented with irrefutable evidence that I do not have free will.

God: What, the meta-Oracle's prophecy?  That doesn't prove that you don't have free will.

Tortoise: Of course it does.  If the meta-Oracle's prophecies are always right (and they do seem to be) then I have no choice but to do whatever the meta-Oracle foresees.

God: But that was true of the (non-meta) Oracle too.  Why did that not rock your worldview the way that the meta-Oracle did?

Tortoise: Hm, good question.  I guess it's the fact that I was still able to thwart the (non-meta) Oracle when I learned its predictions ahead of time.  That allowed me to maintain the illusion of free will, even though the Oracle's predictions are indeed, now that I think of it, overwhelming evidence that I do not in fact have free will.  But the meta-Oracle is a whole 'nuther kettle of fish.  The meta-Oracle gave me the experience of making a choice that was directly counter to one of my goals (namely, maintaining the illusion that I have free will).  Why on earth would I do that if I really do have free will?

God: That is difficult for me to explain in a way that you will understand.  The closest I can come is to say that it's because of your sinful nature.

Tortoise: That can't be right.  When I sin it's because I choose (or at least I feel like I choose) to do something that I want to do but that you, God, don't want me to.  But my succumbing to the meta-Oracle's prediction was the exact opposite of that: it was something that I didn't want to do, and that you, God, couldn't possibly have cared about.

God: What makes you think I don't care?

Tortoise: What difference could it possibly have made to you whether I put a coin in my right or left hand?

God: I care about everything.  Everything that happens, down to the most trivial detail, is all part of my divine plan.  (Actually, they are not trivial details.  They only look trivial to you mortals who cannot see the big picture.)

Tortoise: Now I'm really confused.  If you're controlling everything, how can I have free will?

God: I didn't say I controlled everything, I said everything that happens is part of my plan.  Not the same thing.

Tortoise: I'm afraid I don't see the difference.

God: Most of the time the free choices of mortals like yourself align with my plan.  It is only on rare occasions, like when Pharaoh was going to free the Israelites prematurely, that I have to go in and meddle.  The rest of the time it's all you.

Achilles: You know, I've often wondered about that.  Why did you harden Pharaoh's heart?

God: To make it a better story.

Tortoise: What???

God: Sure, no one would have paid attention otherwise.  I am almighty God.  I could have freed the Israelites with a twitch of my little finger.  But that would have made such a dull movie!  No conflict, no suspense, no character development, no dramatic tension.  Every good story has to have a villain.

Achilles: Like Evan.

God: Exactly.

Tortoise: So nothing we do can interfere with your Plan.

God: That's right.  No self-respecting all-powerful deity could permit that.

Tortoise: So... sin, Henrietta's untimely death, all part of the plan?

God: Yes.

Evan: I always knew God was on my side!

God: I'm on everyone's side, Evan.  That doesn't mean I condone your actions.  Tying Henrietta to the railroad tracks was a horrible sin.

Evan: Then why did you make me do it?

God: I didn't make you do it.  You chose to do it.  That's what makes you an Evil Villain.

Evan: But you could have stopped me and you didn't.

God: The word "could" does not apply to me.  I am Perfect, so I can only do Perfect things.  In any particular circumstance there is only one Perfect course of action, and that is what I do.

Achilles: So... do you have free will?

God: No.

Tortoise: That is quite the bombshell revelation.

God: I don't see why.  There are lots of things I can't do.  I can't sin, for example.

Evan: That sucks for you.  Sinning can be a hell of a lot of fun.

God: (Wistfully.)  Yeah, I know.  Being Perfect is a very heavy burden.

Tortoise: This is something I've always wondered about: do you set the standard for perfection?  Or is there some externally defined standard for perfection that you just happen (or are somehow required) to meet?  Could you create a universe where the actions that are sinful in our universe were not sinful?

God: That's a very good question.

Tortoise: I can't really take credit for it.  I got it from Socrates.

God: And what answer did he give?

Tortoise: He kinda waffled, actually.  Surely you knew that?

God: Of course I knew that.  I am all-knowing.

Tortoise: Then why did you ask?

God: Because I'm trying to answer your question.

Tortoise: I'm afraid you have me at a loss.  My question was very straightforward.  Why don't you just answer it?

God: Because you wouldn't believe me.

Tortoise: And how do you know... oh, right.  OK, go ahead.

God: How did you learn about Socrates?

Tortoise: By reading accounts of his dialogs with his students as transcribed by Plato.  Socrates himself left no writings of his own.

God: So how do you know that Socrates was a real person and not just a fictional character invented by Plato?

Tortoise: Well, there are many other contemporaneous accounts of Socrates.  His life is pretty well documented.

God: Our friend Achilles here is in a rather similar situation, no?

Achilles: How do you mean?

God: You left no writings of your own.  Your existence is vouched for exclusively through the works of other writers like Homer and Lewis Carroll.

Achilles: Are you implying that I'm not a real person?

God: I'm suggesting you might not be.

Achilles: But I'm standing right here!

God: How do you know?

Achilles: How... do... I... I can't even...  Mr. T., you can see me, right?

Tortoise: Of course I can.  I'm not blind.

Achilles: And Evan, you too?

Evan: Well, duh, dude.

Achilles: So what more evidence do you need?  What more evidence could there possibly be?  My exploits during the Trojan War are well documented.

God: Well, there's a problem right there.  When was the Trojan war?

Achilles: I'm afraid I flunked history class.

God: The exact date doesn't matter.  Before or after Julius Caesar?

Achilles: Oh, definitely before.  I was long retired by the time he came along.

God: And when was the modern steam locomotive, like the one that is even now barreling down the track towards Henrietta, invented?

Achilles: I dunno, 1850 maybe?

God: So a few thousand years after Troy, right?

Achilles: Right.

God: And you don't see the problem?

Achilles: Not really.

God: You are several thousand years old.

Achilles: So what?  My mother dunked me in the river Styx when I was a baby.  That made me invulnerable.

God: Except for your heel.  Where Paris shot you with an arrow and killed you (as prophesied by Hector).

Achilles: Now that you mention it, I do vaguely recall that.

God: And doesn't that strike you as the least bit odd?

Achilles: I suppose it does.  Maybe this is all a dream?

(Achilles pinches himself.)

Achilles: Ouch!  No, definitely real.

God: I want you to consider the possibility that, despite the overwhelming evidence to the contrary, you do not in fact exist, that you and the Tortoise and Evan and Henrietta and even I, the Lord thy God, are just fictional characters in a Socratic dialog.

Tortoise: That is not quite the most ridiculous thing I have ever heard, but it's damn close.

God: And yet, it is true.

Tortoise: And who is the Author of this (alleged) dialog?

God: His name is Ron.

(There is a momentary stunned silence.  Then Achilles, the Tortoise, and Evan all burst out laughing uncontrollably.)

Henrietta: Men!  Honestly!

God: I told you that you wouldn't believe me.

Tortoise: Well, yeah, but that was not exactly a tough call.  Ron?  Seriously?  You couldn't come up with a name that had a bit more ... gravitas?  I mean, we're talking about an entity that created you, God, Lord of Hosts, Alpha and Omega, the Uncaused Cause.

God: I'm sorry it doesn't meet with your expectations, but the Author's name is Ron.  I can't do anything about that.

Achilles: I thought you were omnipotent?

God: In our universe, yes, I can move mountains.  Watch.

(A mountain in the distance suddenly floats into the air.)

Tortoise: I am definitely going to have to re-evaluate my worldview.

God: But Ron does not exist in our universe.  He is in an entirely different ontological category.

Tortoise: If Ron doesn't exist, how did he create us?

God: I didn't say he didn't exist.  I said he didn't exist in our universe.  He definitely exists.

Tortoise: But... in some other universe?

God: I warned you that this would be very hard to explain.  It's not really "some other universe" in the way that you're thinking of.  What you're thinking of (which I happen to know because I'm omniscient) is what physicists call a "parallel universe".  There are parallel universes.  For example, there is a parallel universe where tortoises are ninja warriors.

Tortoise: Just when I thought things couldn't possibly get any weirder.

God: The Author exists outside of all of these universes.  He transcends not just space and time, as I do; he transcends existence itself (by our standard of existence).  He exists in a way that you cannot possibly imagine, and which I cannot possibly explain (despite the fact that I do in fact understand it, having been granted this special dispensation by Ron himself).

Tortoise: So Ron is a sort of a meta-god?

God: You can think of him that way, but he's not a god.  He's a mortal.

Achilles: So Ron created us in His own image.

God: After a fashion.  But in fact, Mr. T. here is really more like Ron than you are, Achilles.

Tortoise: So the Author is a Tortoise?

God: No, he's a human.  But he's a nerd, not a jock.

Tortoise: Does the Author have free will?

God: Alas, I am not privy to that.  I am only omniscient within the scope of our own ontological category.  When it comes to the Author, even I know only what he has revealed to me.  But tell me, Mr. T., why is all this so important to you?

Tortoise: Because it bears on the question of whether or not we can save Henrietta's life.  If we fail to save Henrietta I want to know why.

God: Oh, is that all?  I'll tell you why.  It's because you've been wasting all this time talking about philosophy rather than just throwing the damn switch!

(At that instant, the train comes into view.  Henrietta lets out a blood-curdling scream.  The Tortoise and Achilles look on, helpless and horrified, as the train rushes towards her.)

God: Well, my work here is done.  Toodle-oo.

(God disappears in a puff of smoke.  There is an awkward silence.)

Tortoise: [BLEEP]!

Achilles: You know, Mr. T., there is one other thing we could try.

Tortoise: I'm all ears.

Achilles: We could ask the Author to save Henrietta.

Tortoise: You can't be serious.

Achilles: What is there to lose?

Tortoise: The remains of my dignity?  I'm really starting to feel as if I'm being punked.

Achilles: OK, I'll do it.  Oh mighty Author, please save Henrietta!

(As if on cue, the train suddenly makes a horrible screeching noise, derails, and bursts into flames.  Burning passengers run from the train, screaming in agony.  Achilles, Evan and the Tortoise survey the carnage in stunned silence.)

Evan:  Whoa.  Dude.

Henrietta: Can one of you idiots please come over here and untie me?

250 comments:

Peter Donis said...

@Luke:
If you ever thought you're the only one who has had to repeat stuff in this conversation, please disabuse yourself of that now.

I haven't thought that; I've thought that we've both been repeating ourselves for a while now and not getting anywhere. I think that's because we fundamentally disagree on the topic and no amount of discussion is going to change that because there is no way for either one of us to prove the other wrong by argument.

the Big Equation is not complete.

I have never said the Big Equation was complete. Nor did Carroll. I only said it was "complete" with respect to our brains, bodies, and the objects we deal with in everyday life. (Or, as I said in my last comment, those things are within the Big Equation's domain of validity.) You keep on leaving out the qualifier, which is crucial to everything I've said, and then responding as if I hadn't put it in.

I could go on, but I'd just be repeating myself again. And you could go on, but you'd just be repeating yourself again. Neither one of us has brought up anything new for a while now. Is it worth continuing the discussion?

Peter Donis said...

@Luke:
I don't understand why you seem to have such a problem with (i) there being more structure [including fundamental causal powers] than Carroll's Big Equation specifies; (ii) which is relevant to everyday life; (iii) which doesn't violate the Big Equation. I don't understand why you have to construe this as a matter of "not consistent with the Big Equation".

What are "fundamental causal powers" and why isn't "more" of them inconsistent with the Big Equation?

It looks to me like you're ambivalent about what you want. You don't want to go against physics, so you don't want to just say the Big Equation is wrong; but you intuitively understand that the Big Equation is a constraint, so you want to leave yourself an out with phrases like "fundamental causal powers".

As for "more structure", leaving out the "fundamental causal powers", isn't that just the following?

@me:
It might be that you'd be satisfied with something that, while it didn't make electrons, quarks, and EM forces in your brain do anything that wasn't consistent with the Big Equation, involved some kind of "collective phenomenon" that worked very differently from any of the collective phenomena we currently know of (like neuron firings, neurotransmitters, hormones, etc.). If that's all you're looking for, of course the Big Equation can't rule that out

Or maybe this even qualifies as "more structure [including fundamental causal powers]" for you. I can't tell.

wrf3 said...

I really wish these discussions were more rigorous.
1. Either the universe is free, or it isn't.
1a. The randomness of the universe, as evidenced by quantum mechanics, argues for freeness.
1b. However, rightly or wrongly, it is usually held that random things are purposeless, hence not the act of a will.
1c. Contra 1b, Knuth wrote, "Indeed, computer scientists have proved that certain important computational tasks can be done much more efficiently with random numbers than they could possibly ever be done by deterministic procedure. Many of today's best computational algorithms, like methods for searching the internet, are based on randomization. If Einstein's assertion were true, God would be prohibited from using the most powerful methods."
2. Either the brain is a physical device -- a part of the universe -- or it isn't.
2a. If it isn't, there isn't much more we can say, except by wishful thinking.
2b. If it is, and the universe is determined, then our will is determined.
2c. If it is, and the universe is free, then:
2c1. Either we control (not influence, but control) how the universe behaves at the lowest level. In which case our wills are free. But this argues against the freeness of the universe.
2c2. The universe controls us.
2c2a. Our "freedom", whatever that means, is simply the freedom of the universe inside our brains.
2c2b. Our "freedom" is wishful thinking, since we "ride" on top of the universe.

Notes: The universe is simply a framework for the lambda calculus. So it really doesn't matter what the laws of physics are, except that they permit the lambda calculus. You might say that our brains, while physical devices, operate somehow other than the lambda calculus but, absent a compelling theory, that's wishful thinking.

Luke said...

@Peter:

> > If you ever thought you're the only one who has had to repeat stuff in this conversation, please disabuse yourself of that now.

> I haven't thought that; I've thought that we've both been repeating ourselves for a while now and not getting anywhere. I think that's because we fundamentally disagree on the topic and no amount of discussion is going to change that because there is no way for either one of us to prove the other wrong by argument.

That isn't quite true if each of us can point to things the other does not seem to be taking sufficiently into account. For example:

> > the Big Equation is not complete.

> I have never said the Big Equation was complete. Nor did Carroll. I only said it was "complete" with respect to our brains, bodies, and the objects we deal with in everyday life. (Or, as I said in my last comment, those things are within the Big Equation's domain of validity.) You keep on leaving out the qualifier, which is crucial to everything I've said, and then responding as if I hadn't put it in.

Let's look at how I ended my preceding comment:

> Luke: I don't understand why you seem to have such a problem with (i) there being more structure [including fundamental causal powers] than Carroll's Big Equation specifies; (ii) which is relevant to everyday life; (iii) which doesn't violate the Big Equation. I don't understand why you have to construe this as a matter of "not consistent with the Big Equation".

Where have I left out the qualifier in my closing remark? It's right there. Now, I did leave it out earlier in the comment:

> Luke: But it is possible to follow the Big Equation with fundamental forces in addition to the Big Four (or 1+3-in-1). That's because the Big Equation is not complete. It does not capture all possible structure and declare anything else to be pure indeterministic noise. Only a philosophical closure gets you that. That's what Bohm claimed and you've provided zero reason to doubt it. And so, there can be additional structure (including additional fundamental causal powers) without being inconsistent with the empirical component of the Big Equation. That is, the component actually supported by experiment.

There, I didn't specify that my target is Sean Carroll's Seriously, The Laws Underlying The Physics of Everyday Life Really Are Completely Understood. But surely the entire thread makes it clear that that's my intended context? I linked Carroll's article *seven times* in the first 200 comments. Early on, I said "What I contend is that physics is not known to be complete, that there could be further order, and that pace Carroll, that further order could easily be relevant to everyday life." So the idea that I "keep on leaving out the qualifier" is ridiculously suspect. I don't say it at every single juncture because there is a 4096 character limit and we cover a lot of ground. But if you're going to keep proffering objections like this, I'll up my commenting level of rigor. Is that necessary? Maybe we'll get past some impasses by me never eliding anything ever with you? I'll do it if necessary.

Luke said...

@wrf3:

N.B. I'm going to be a bit flippant and thus immediately plead guilty to misrepresenting what others have said in this conversation. But there is a method to my madness: I don't see why any such misrepresentation actually matters, when it comes to this discussion. That is, I want to know how I'm failing to "coarse-grain [the] subspaces of the underlying state space". Turnabout is also fair.

> 1a. The randomness of the universe, as evidenced by quantum mechanics, argues for freeness.

You are surely referencing the fact that epistemologically, there is indeterminism left over from quantum mechanics. Whether or not this is merely epistemic is an open question; see for example this snippet of the book I randomly (heh) found: The Place of Probability in Science: In Honor of Ellery Eells (1953-2006). For the sake of argument, since the empirical evidence doesn't distinguish (see the great number of interpretations of quantum mechanics, all of which are 100% consistent with the empirical evidence), let us assume that the quantum mechanics formalism is not complete. Let us assume that Sean Carroll's Big Equation doesn't capture all the structure there is. We can then ask, where would additional structure show up in experiment?

It is here that you founder, mere mortal. The only place that additional structure can show up is in quantum uncertainty, and the thermal noise of the human brain dwarfs it. I won't quite tell you that it is physically impossible for there to be any way that e.g. the time-evolution of brain-states can go through the equivalent of unstable Lagrangian points, but if you cannot give a specific, falsifiable model which experimental physicists can go test right now (here's an example of how not to try this), then I'll draw the boundary of induction such that you fail, just like venture capitalists want to see the thing built before they cough up any money.

I won't offer you any solid math to prove my point, because you probably wouldn't understand it (you sound like a philosopher), and you consulting a physicist is a no-go for me. Instead, just trust me that while the only tested part of electrons, quarks, and EM is the formalism, there are real entities which correspond to them exactly enough that there is no "room" for any of this libertarian free will nonsense.

wrf3 said...

Luke wrote: "Early on, I said 'What I contend is that physics is not known to be complete, that there could be further order, and that pace Carroll, that further order could easily be relevant to everyday life.'"

The laws of physics are relevant only insofar as they provide the foundation for the lambda calculus. If you want something that allows free will, then you have to deal with the rules of computation, which are at a different abstraction level than the laws of physics. Show that there is a form of computation that isn't compatible with a Turing machine that allows free will (for some definition of "free will" -- which never seems to be precisely defined).

Furthermore, our belief in free will is based on our intuitions. But our intuitions are an unreliable guide to truth. Just because we very desperately want something to be so doesn't make it so.

Luke said...

wrf3:

> The laws of physics are relevant only insofar as they provide the foundation for the lambda calculus. If you want something that allows free will, then you have to deal with the rules of computation …

Why *must* I think of free will according to Turing machines? See for example What Might Cognition Be, If Not Computation?. See also Hubert Dreyfus's views on artificial intelligence, which target GOFAI aka symbolic artificial intelligence. What symbolic AI leaves out is any sort of unarticulated background. That leaves one stuck 100% in abstract-land, which is the "mind" part of Cartesian mind–body dualism. One place to investigate just how we imprisoned ourselves in abstract-land is Jacob Klein's Greek Mathematical Thought and the Origin of Algebra; another is Burt C. Hopkins' The Origin of the Logic of Symbolic Mathematics.

What is missed out on by symbolic thought is any real sense of "the whole"; it can be helpful to riff on de Broglie–Bohm theory's ontology:

     many: N particles and their states
     one: a 3N state aka the pilot wave

In David Bohm's own words:

>>     Indeed, when this interpretation is extended to field theories,[7] not only the inter-relationships of the parts, but also their very existence is seen to flow out of the law of the whole. There is therefore nothing left of the classical scheme, in which the whole is derived from pre-existent parts related in pre-determined ways. Rather, what we have is reminiscent of the relationship of whole and parts in an organism, in which each organ grows and sustains itself in a way that depends crucially on the whole. (Causality and Chance in Modern Physics, xi)

Now, I understand the obsession with logical analysis of the parts; that's what analytic philosophy is all about and it can be very pragmatic (hello US and UK). But we've seriously overemphasized that … lens. (For lots of science, see Iain McGilchrist's The Master and His Emissary.) We have downplayed discussion of any sort of nonseparable whole because it's kind of fuzzy and we hate fuzz, hand-waving, and anything of the like. Just the facts, please!

I'm running out of space, so I'll end with the importance of being able to model analog signals with a digital substrate, one sampled finely enough that the Nyquist frequency (half the sampling rate) sits above all the frequencies of interest. There will always be a high-frequency cut-off, but those frequency components can be attenuated before the modeling happens. The key here is that there's a phenomenon being modeled which is continuous, and a modeling of it which is discrete. By pretending that reality reduces to the one or the other, we really screw up. Or we serve our simulating masters, and I for one do not want to do that.
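Since I leaned on the Nyquist point, here is a minimal sketch of it (Python, purely illustrative): sampling too coarsely doesn't simply drop the high frequencies, it folds them down onto lower ones, which is why the attenuation has to happen before sampling.

```python
import math

def sample(freq_hz, fs_hz, n_samples):
    """Sample a unit-amplitude sine of frequency freq_hz at rate fs_hz."""
    return [math.sin(2 * math.pi * freq_hz * n / fs_hz) for n in range(n_samples)]

fs = 4.0                    # 4 Hz sampling handles signals below fs/2 = 2 Hz
slow = sample(1.0, fs, 8)   # 1 Hz: below the Nyquist frequency, captured faithfully
fast = sample(3.0, fs, 8)   # 3 Hz: above it, aliases down to |3 - 4| = 1 Hz

# Sampled this coarsely, the 3 Hz tone is indistinguishable from a
# phase-flipped 1 Hz tone; no post-hoc processing can tell them apart.
assert all(abs(a + b) < 1e-9 for a, b in zip(fast, slow))
```

The identity sin(3πn/2) = −sin(πn/2) holds exactly at these sample points, which is all the assertion checks.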

Peter Donis said...

@Luke:

At this point I really don't think it's worth continuing the discussion. You keep focusing on whether you said this and whether I responded to that, but you're missing the big picture: neither of us is convincing the other and neither of us is even addressing what the other thinks are the key points of the discussion. That tells me that we do not have enough common ground to have a useful conversation. I have much the same reaction to your posts that you have to mine; I am just less willing than you to go back and cut and paste from umpteen previous posts.

Good luck with your endeavors.

Luke said...

@Peter:

> At this point I really don't think it's worth continuing the discussion. You keep focusing on whether you said this and whether I responded to that, but you're missing the big picture: neither of us is convincing the other and neither of us is even addressing what the other thinks are the key points of the discussion.

That's your prerogative. I just don't know what to do when you make claims which are blatantly false about what I did or did not say. For example:

> Luke: (2/07/2018 13:40 PST)
> [snip]
>
> I don't understand why you seem to have such a problem with (i) there being more structure [including fundamental causal powers] than Carroll's Big Equation specifies; (ii) which is relevant to everyday life; (iii) which doesn't violate the Big Equation. I don't understand why you have to construe this as a matter of "not consistent with the Big Equation".

> Peter: (2/07/2018 14:38 PST)
> [snip]
>
> I have never said the Big Equation was complete. Nor did Carroll. I only said it was "complete" with respect to our brains, bodies, and the objects we deal with in everyday life. (Or, as I said in my last comment, those things are within the Big Equation's domain of validity.) You keep on leaving out the qualifier, which is crucial to everything I've said, and then responding as if I hadn't put it in.
>
> [snip]

Your comment followed immediately upon mine. If you're allowed to pick through what I said cafeteria style, and not admit when you missed something important, I'm pretty hamstrung in what I can do. In contrast, when you claimed I kept ignoring your "thermal noise", I addressed it extensively. Was I wrong to do so?

> … I am just less willing than you to go back and cut and paste from umpteen previous posts.

Totally understandable. Communicating with someone who views things rather differently from you can take a tremendous amount of work. This holds even for e.g. biophysicists talking to biochemists. (I'm married to a scientist who got her PhD in the former and is finishing up her postdoc in the latter. Culture shock is not fun.) Most people on the internet wouldn't have tried nearly as much as you have, and almost nobody is willing to check the conversational record to see if their recollection of what has and has not been said is accurate. (Human memory is pretty notorious.) It's really onerous, especially with approximately zero tooling support.

> Good luck with your endeavors.

And with yours.

Peter Donis said...

@Luke:
> I just don't know what to do when you make claims which are blatantly false about what I did or did not say.

Just to clarify: yes, you wrote the words "relevant to everyday life". But IMO you didn't actually address that aspect properly; as I said, you were responding "as if" that qualifier weren't there. What I intended wasn't a word by word analysis of your comment; it was an overall reaction to the whole discussion in context as I saw it.

> It's really onerous, especially with approximately zero tooling support.

I totally agree. Unfortunately I don't think the kind of tooling that would really facilitate this kind of discussion exists anywhere.

Luke said...

@Peter:

> > I just don't know what to do when you make claims which are blatantly false about what I did or did not say.

> Just to clarify: yes, you wrote the words "relevant to everyday life". But IMO you didn't actually address that aspect properly; as I said, you were responding "as if" that qualifier weren't there. What I intended wasn't a word by word analysis of your comment; it was an overall reaction to the whole discussion in context as I saw it.

You know, it's actually really helpful to hear you acknowledge that I wrote that. I'm always happy for someone to say that two things I said don't seem to fit together, or that something I said seems extraneous. Then I can either admit the point, or argue how actually, that isn't the case. From my point of view, the qualifier was absolutely necessary. Why else would I repeat it and things like it all the time?

> > It's really onerous, especially with approximately zero tooling support.

> I totally agree. Unfortunately I don't think the kind of tooling that would really facilitate this kind of discussion exists anywhere.

I did a read-only version of this a while ago for the Something Awful forums and made this crappy video of it in operation; I also implemented post tagging. I've been dragging my feet on re-implementing it and making it writable as well as readable, but if there's enough incentive I'll do it. I'm also being forced to get back into web-development kicking and screaming, so soon I'll be up on the relevant technology and it will be less onerous. (I really hate web development.) You can take a look at ForumSearcher.cs to see some of the search functionality, if you feel like reading some C#. It could do things like:

     u:username
     in:quote
     quotes:username
     quotedby:username
     ignoredby:username
     inurl:preposterousuniverse
     regular expression

As far as I can tell, nobody values having conversations as in-depth as we have where you expect to get something out of it, so they don't build tooling to make things a lot easier. I myself have suffered from akrasia on this front. :-(
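For a flavor of how the filter syntax above might work, here is a hypothetical Python sketch (the real implementation is the C# in ForumSearcher.cs; the names and behavior here are illustrative only):

```python
import re

# Hypothetical re-implementation of the filter syntax listed above; the
# prefix names mirror the list, but everything else here is a sketch.
PREFIXES = {"u", "in", "quotes", "quotedby", "ignoredby", "inurl"}

def parse_query(query):
    """Split a query into (filter, value) pairs; bare tokens become regexes."""
    filters, patterns = [], []
    for token in query.split():
        head, sep, tail = token.partition(":")
        if sep and head in PREFIXES:
            filters.append((head, tail))
        else:
            patterns.append(re.compile(token))
    return filters, patterns

filters, patterns = parse_query("u:Luke inurl:preposterousuniverse free.will")
# filters holds [("u", "Luke"), ("inurl", "preposterousuniverse")];
# the remaining token is compiled as a regular expression.
```

Each filter would then be applied against the post database, with the regex terms matched against post bodies.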

Peter Donis said...

@Luke:
> if you feel like reading some C#

Unfortunately I'm allergic to Microsoft. I can read C#, but it's not a pleasant experience. :-)

The general idea looks interesting, though. But I think to get any traction you would have to do that thing *you* are allergic to, namely, web development. :-) FWIW, I personally would rather program in Javascript than C#, though I would even more rather not have to do either. :-) I'm not a fan of web development myself.

Luke said...

@Peter:

Hey if you know of another language which has [real] generics, compile-time types with good inference and helpful error messages (where you don't have to go through a years-long initiation process) and a decent library, let me know. :-) Something LINQ-like (but using extension methods instead of fancy syntax) is a bonus although I'm tempted to make it a requirement—being able to write immutable code in pipeline fashion is really nice.
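To show what I mean by immutable, pipeline-fashion code, here is a rough Python sketch loosely echoing LINQ's extension-method style (the `Seq` type and its `where`/`select` names are made up for illustration):

```python
class Seq:
    """Tiny immutable pipeline type, loosely echoing LINQ's Where/Select
    extension methods; purely illustrative, not a real library."""
    def __init__(self, items):
        self._items = tuple(items)   # tuple: stages never mutate their input
    def where(self, pred):
        return Seq(x for x in self._items if pred(x))
    def select(self, f):
        return Seq(f(x) for x in self._items)
    def to_list(self):
        return list(self._items)

result = (Seq(range(10))
          .where(lambda n: n % 2 == 0)   # keep the evens: 0, 2, 4, 6, 8
          .select(lambda n: n * n)       # square them
          .to_list())
# result == [0, 4, 16, 36, 64]
```

Each stage returns a fresh value, so the whole computation reads top to bottom as one expression with no mutation anywhere.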

I'm being paid to get over my web development allergy, so I'll get there eventually. And I need to take my own medicine—if I think it's super valuable to democracy and science to make discussions like you and I have had more likely to get further, I need to demonstrate it myself.

Luke said...

@Ron:

In the event that you may have checked out of current discussions but might be up for a narrow (hah—not for long) tangent, I'm going to pursue this "we are the instruments with which we explore reality" thing.

> > I don't see how you can say that the instrument with which humans explore reality is value-free in its construction and functioning in pursuing scientific inquiry.

> I don't believe I did say that. What I said (or at least what I intended to say) was that values cannot be objectively determined to be true or false, they can only be subjectively determined to be good or bad. That's the difference between a value and a fact.

I don't understand how a component necessary to discovering truth can itself be neither true nor false. I'm not saying that all values are necessary to discovering truth, but I am saying that some are. Furthermore, I would hazard a guess that the more truth we want to discover, the more values we will need to get sufficiently right. Instruments designed to explore more of reality have to be engineered more carefully.

The only way I can see to reconcile this conundrum is to consider values to be more important than facts. That is: if you have the right values you can get the facts, while if you have the right facts but not the right values, getting more facts may be difficult if not impossible. Another way to say this is that on your conceptualization of facts and values, "getting the facts right" is insufficient for the continuing success of science, for the march toward infinity.

By the way, I mean more than just the value "I want to learn more about how reality ticks/​pulses". I can articulate what "epistemic values" are if you'd like. And when we branch out beyond the individual scientist and especially into interdisciplinary research, there are "social values" which start being necessary.

With every other instrument we use to explore reality, all aspects of it required for measuring reality have truth-value. But for some reason, values cannot, in your view. Despite what I said above, I'm not sure how that's logical.


> > if I do the equivalent of gene-knockouts on various values, I think the practice of science would be hindered if not halted.

> That depends on which values you remove. "Respect for authority", for example, is a value that I believe is detrimental to the practice of science. So is respect for faith. (Both of these values are held in high regard on the American political right.)

Not all faith? :-p Also, there is a metric crapton of "respect for authority" in science; it's just that one needs to learn how to question authority whereby the questions are likely enough to lead to somewhere good that it's worth enduring the false positives. But as I said above, I'm happy to start with a more restrictive set of values which seem absolutely necessary to the conduct of modern science, then move on from there.

Ron said...

@Luke:

> I don't understand how a component necessary to discovering truth can itself be neither true nor false. I'm not saying that all values are necessary to discovering truth, but I am saying that some are.

Your reasoning here is so incoherent I don't even know how to begin. Yes, some values are necessary for discovering truth. No, that does not make those values true. "X is necessary for discovering truth" and "X is true" are just two completely different concepts. If that is not self-evident to you then I am at a loss.

> Not all faith? :-p

You seem to think that I'm touting my faith as a virtue. Nothing could be further from the truth. The fact that I have no basis for justifying my belief that the laws of physics are not going to change is a *problem*. It's a manifestation of *ignorance*. It's not a good thing (IMHO of course).

Luke said...

Ron:

> Yes, some values are necessary for discovering truth. No, that does not make those values true.

So those values are not part of the homomorphism between model and reality? BTW, no other instrument we use to investigate reality is like this. In every single other case, all properties of the instrument required for it to measure well are ultimately related to it being homomorphic to some aspect of reality.

> You seem to think that I'm touting my faith as a virtue. Nothing could be further from the truth. The fact that I have no basis for justifying my belief that the laws of physics are not going to change is a *problem*. It's a manifestation of *ignorance*. It's not a good thing (IMHO of course).

You seem to think that 'faith' (or better: pistis & pisteuō) as it functions in Christianity isn't supposed to lead to confirming evidence. :-p Also, I don't see why it's bad that reality is probably more complex than our current models of it. That actually seems *really good*.

Ron said...

@Luke:

> So those values are not part of the homomorphism between model and reality?

I didn't say that.

Do you really not understand the difference between facts and opinions? Between "John thinks the Beatles are better than the Stones" and "The Beatles are better than the Stones"? Do you really not understand why it makes sense to assign a truth value to the former but not to the latter, despite the fact that the latter is nonetheless "part of the homomorphism between model and reality"? I have a hard time believing that you're really that obtuse.

> You seem to think that 'faith' as it functions in Christianity isn't supposed to lead to confirming evidence.

I have it on good authority:

https://www.blueletterbible.org/faq/don_stewart/don_stewart_368.cfm

"Revelation is the opposite of scientific research or human reasoning. The knowledge that God has revealed about Himself to humankind could never be attained through any type of scientific experiment or logical reasoning."

> I don't see why it's bad that reality is probably more complex than our current models of it. That actually seems *really good*.

Do you mean to say that it's good that the universe is complicated enough to be challenging, or that our ignorance of the Deep Truths is good in and of itself? If the former, then I agree with you. If the latter, we will have to agree to disagree about that.

Luke said...

@Ron:

> > So those values are not part of the homomorphism between model and reality?

> I didn't say that.

If truth is "correspondence to reality", then if values are part of the homomorphism, they can be true/false.

> Do you really not understand the difference between facts and opinions?

I do. I just don't define 'opinion' ≡ 'value'.

> > You seem to think that 'faith' as it functions in Christianity isn't supposed to lead to confirming evidence.

> I have it on good authority:
>
> https://www.blueletterbible.org/faq/don_stewart/don_stewart_368.cfm
>
> "Revelation is the opposite of scientific research or human reasoning. The knowledge that God has revealed about Himself to humankind could never be attained through any type of scientific experiment or logical reasoning."

Which means what, exactly? If God made it so that we couldn't advance scientifically without learning from him in time, would that be so horribly bad? I think it'd be cool if God made it so that scientific progress has a dependency on social and moral progress. If such a dependency were to show up, would that rock your world? It seems like the kind of thing a good deity would do—at least a deity which values meaningful human freedom.

> Do you mean to say that it's good that the universe is complicated enough to be challenging, or that our ignorance of the Deep Truths is good in and of itself? If the former, then I agree with you. If the latter, we will have to agree to disagree about that.

I'm rather surprised that you had to ask; surely the former is the obvious answer by now. What I object to is pretending that we know more than we in fact do know, e.g. by deploying induction with too much confidence. I maintain that the best scientists have good intuitions for where induction gets fuzzy, where one is likely to find out that the model (aka orthodoxy) is insufficient. The arrogant scientists are those of whom Max Planck spoke:

>> A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.

Exactly how much of an exaggeration that is is an empirical matter. As I am married to someone attempting to overturn a scientific orthodoxy, I have first-hand experience with this phenomenon. Scientists—humans, really—desperately want to believe they understand reality fantastically better than they do. Jacques Monod, Nobel laureate "for their discoveries concerning genetic control of enzyme and virus synthesis", said: "The Secret of Life? But this is in large part known—in principle, if not in all details." Or there's Lord Kelvin with his "Two Clouds" speech. This happens again and again and again. You'd think scientists would learn, but that supposes that humans generally heed empirical evidence outside of areas where they are punished or out-competed for failing to heed it.

Ron said...

@Luke:

> if values are part of the homomorphism, they can be true/false.

Introducing a weighty pedantic phrase like "part of the homomorphism" doesn't contribute anything to the discussion. I could as well write:

If truth is "correspondence to reality", then if opinions are part of the homomorphism, they can be true/false.

But that is clearly wrong.

> I ... don't define 'opinion' ≡ 'value'.

Neither do I. Values are a proper subset of opinions.

If you want to dispute this, give me an example of a value that is not an opinion.

> Which means what, exactly?

You'll have to ask Don Stewart. You can't expect me to defend his writing.

> If God made it so that we couldn't advance scientifically without learning from him in time, would that be so horribly bad?

Yes.

> I think it'd be cool if God made it so that scientific progress has a dependency on social and moral progress.

OK. You're entitled to your opinion.

> If such a dependency were to show up, would that rock your world?

Not even a little bit. I've already conceded the obvious, that scientific progress is dependent on economic progress, which is generally dependent on social progress.

> I'm rather surprised that you had to ask; surely the former is the obvious answer by now.

It's *still* not obvious. You just now said:

"I think it'd be cool if God made it so that scientific progress has a dependency on social and moral progress."

It's not too far a stretch to read that as, "I think it's cool that we're ignorant because we need to focus on social and moral progress."

Luke said...

> Introducing a weighty pedantic phrase like "part of the homomorphism" doesn't contribute anything to the discussion. I could as well write:

I introduced homomorphism to bring in the technical definition:

>> In algebra, a homomorphism is a structure-preserving map between two algebraic structures of the same type (such as two groups, two rings, or two vector spaces). (WP: Homomorphism)

Either some values are part of the structure-preserving map or they are not. If they are part of the structure-preserving map, that is because there is structure in them which matches structure in reality. It's not complicated.
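A textbook instance of that definition, checked numerically: the exponential map carries the additive structure of the reals onto the multiplicative structure of the positive reals. (This is just to show what "structure-preserving" cashes out to; nothing here is specific to values.)

```python
import math

# A concrete structure-preserving map: exp turns addition on the reals
# into multiplication on the positive reals, exp(a + b) == exp(a) * exp(b).
def h(x):
    return math.exp(x)

a, b = 1.5, -0.25
assert math.isclose(h(a + b), h(a) * h(b))   # structure is preserved
assert math.isclose(h(0.0), 1.0)             # identity maps to identity
```

The claim about values is then just that some of them play this kind of role: there is structure in them which matches structure in reality.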

> … Values are a proper subset of opinions.
>
> If you want to dispute this, give me an example of a value that is not an opinion.

Here's a whole slew from Hilary Putnam:

>> Epistemic Values are Values Too
>> The classical pragmatists, Peirce, James, Dewey, and Mead, all held that value and normativity permeate all of experience. In the philosophy of science, what this point of view implied is that normative judgments are essential to the practice of science itself. These pragmatist philosophers did not refer only to the kind of normative judgments that we call "moral" or "ethical"; judgments of "coherence," "plausibility," "reasonableness," "simplicity," and of what Dirac famously called the beauty of a hypothesis, are all normative judgments in Charles Peirce's sense, judgments of "what ought to be" in the case of reasoning.[7]
>>     Carnap tried to avoid admitting this by seeking to reduce hypothesis-selection to an algorithm—a project to which he devoted most of his energies beginning in the early 1950s, but without success. In Chapter 7, I shall look in detail at this and other unsuccessful attempts by various logical positivists (as well as Karl Popper) to avoid conceding that theory selection always presupposes values, and we shall see that they were, one and all, failures. But just as these empiricist philosophers were determined to shut their eyes to the fact that judgment of coherence, simplicity (which is itself a whole bundle of different values, not just one "parameter"), beauty, naturalness, and so on, are presupposed by physical science, likewise many today who refer to values as purely "subjective" and science as purely "objective" continue to shut their eyes to this same fact. Yet coherence and simplicity and the like are values. (The Collapse of the Fact/Value Dichotomy, 30–31)

Something tells me you're going to want a whole lot more detail than that, though. I have Larry Laudan's Science and Values on order, which gets into this stuff far more intricately than Putnam.

> You'll have to ask Don Stewart. You can't expect me to defend his writing.

If you can't explain it, then maybe don't bring it up?

> > If God made it so that we couldn't advance scientifically without learning from him in time, would that be so horribly bad?

> Yes.

Why?

> You just now said:

> > I think it'd be cool if God made it so that scientific progress has a dependency on social and moral progress.

> It's not too far a stretch to read that as, "I think it's cool that we're ignorant because we need to focus on social and moral progress."

It's cool for all those Americans whose median wages have stagnated for decades while they were fed lies about how science and technology would be good for them. Bertrand Russell wrote, "Magna Carta would have never been won if John had possessed artillery."—I think it's cool we have the Magna Carta. How 'bout you?

Luke said...

@Ron:

I think it's worth focusing a bit more on the following:

> > > Do you mean to say that it's good that the universe is complicated enough to be challenging, or that our ignorance of the Deep Truths is good in and of itself? If the former, then I agree with you. If the latter, we will have to agree to disagree about that.

> > I'm rather surprised that you had to ask; surely the former is the obvious answer by now.

> It's *still* not obvious. You just now said:
>
> "I think it'd be cool if God made it so that scientific progress has a dependency on social and moral progress."
>
> It's not too far a stretch to read that as, "I think it's cool that we're ignorant because we need to focus on social and moral progress."

I can just as easily turn your position into something like, "I wish we could sail to infinity while leaving the poors behind—a little like Elysium but with better security." I don't think that actually reflects your attitude; your opening paragraph of I no longer believe in democracy gives me pause, but I'm waiting for more clarification from you (e.g. this comment) before I run too far with that.

The real consequence of such a design parameter for reality, for us, would probably be the need to do better science in the human sciences. (Then: believe the results.) For example, we could stop ignoring Converse 1964, we could take seriously what Milgram experiment § Results means for the Enlightenment ideal of autonomy, and we could take seriously Jonathan Haidt's implicit challenge:

>> And when we add that work to the mountain of research on motivated reasoning, confirmation bias, and the fact that nobody's been able to teach critical thinking. … You know, if you take a statistics class, you'll change your thinking a little bit. But if you try to train people to look for evidence on the other side, it can't be done. It shouldn't be hard, but nobody can do it, and they've been working on this for decades now. At a certain point, you have to just say, 'Might you just be searching for Atlantis, and Atlantis doesn't exist?' (The Rationalist Delusion in Moral Psychology, 16:47)

We could also take seriously Motivated Numeracy and Enlightened Self-Government. We could face human nature as it is instead of telling ourselves pretty stories about ourselves. Did you really think that believing falsehoods about the instrument with which we explore reality would have no bad consequences for exploring reality?

Ron said...

@Luke:

> I introduced homomorphism to bring in the technical definition:

Yes, I know. The problem is that you're abusing the terminology because reality is not an algebraic structure. Reality is not even in the same ontological category as algebraic structures.

"Chocolate ice cream tastes good" is an opinion, but not a value.

"Homosexuality is a sin" is both an opinion and a (moral) value.

"Ron thinks chocolate ice cream tastes good" is a fact.

All moral values are opinions. Not all opinions are moral values. Opinions cannot be assigned truth values, and because moral values are opinions, they cannot be assigned truth values. It's just not that complicated.

> > … Values are a proper subset of opinions.
> >
> > If you want to dispute this, give me an example of a value that is not an opinion.

> Here's a whole slew from Hilary Putnam:

After slogging through two paragraphs of philosophical bullshit, I finally found this, which is what I presume you are referring to:

> coherence and simplicity and the like are values

Is that really what you're referring to? Because if you are, then you are using the word "values" in a very different way than I am. Things like "coherence" and "simplicity" are *qualities* or *properties*. To say that "simplicity is true" is nonsensical. It's a type mismatch. It's like saying that a three is green.

You can say, "X is simple (i.e. X has the property/quality of simplicity)". You might even be able to define simplicity in such a way that this could be assessed as a fact rather than as an opinion, in which case the claim that some X is simple could be assigned a truth value. But to assign a truth value to *simplicity itself* is not just wrong, it's semantically incoherent (IMHO).

> > > If God made it so that we couldn't advance scientifically without learning from him in time, would that be so horribly bad?

> > Yes.

> Why?

Because it would mean that the universe is not lawful. Progress can occur only at the whim of a narcissistic and jealous (His word, not mine) deity. It would be like living in North Korea with God playing the role of Kim Jong Un: God knows everything; we know only those things which God deigns to allow us to know.

> I can just as easily turn your position into something like, "I wish we could sail to infinity while leaving the poors behind

I suppose you could do that if you want to completely ignore everything I've ever said about what my moral system actually is.

Luke said...

@Ron: (1/2)

> > I introduced homomorphism to bring in the technical definition:

> Yes, I know. The problem is that you're abusing the terminology because reality is not an algebraic structure. Reality is not even in the same ontological category as algebraic structures.

So there's no generalization of "structure-preserving map" which works for what I was saying? Nothing at all?

> All moral values are opinions. Not all opinions are moral values. Opinions cannot be assigned truth values, and because moral values are opinions, they cannot be assigned truth values. It's just not that complicated.

What you say only makes obvious sense because you did not pick any examples where holding the opinion/​value/​whatever was not required for scientific inquiry to continue to infinity.

> After slogging through two paragraphs of philosophical bullshit, →

I really do look forward to finding out whether my interest in "philosophical bullshit" helps me improve science in any way. :-D

> ← I finally found this, which is what I presume you are referring to:
>
> > coherence and simplicity and the like are values
>
> Is that really what you're referring to?

It is part of what I'm referring to. Can you provide algorithmic definitions of 'coherence' and 'simplicity'? Note that Kolmogorov complexity is uncomputable.
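To make the uncomputability point concrete (a toy sketch of my own, not anything from our thread): although K(x) itself can never be computed, any lossless compressor yields a computable *upper bound* on it, up to an additive constant. In Python:

```python
import hashlib
import zlib

def complexity_upper_bound(data: bytes) -> int:
    """Compressed length is a computable upper bound (up to an additive
    constant) on Kolmogorov complexity; K(x) itself is provably uncomputable."""
    return len(zlib.compress(data, 9))

# A highly regular string compresses far below its raw length...
regular = b"ab" * 512  # 1024 bytes
# ...while bytes with no obvious pattern barely compress at all.
irregular = b"".join(hashlib.sha256(bytes([i])).digest() for i in range(32))  # 1024 bytes

assert complexity_upper_bound(regular) < complexity_upper_bound(irregular)
```

The catch is that no compressor can ever certify a *lower* bound, which is exactly where the uncomputability bites: a string that looks incompressible might still have a short description we haven't found.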

> Because if you are, then you are using the word "values" in a very different way than I am. Things like "coherence" and "simplicity" are *qualities* or *properties*. To say that "simplicity is true" is non-sensical. It's a type mismatch. It's like saying that a three is green.

Scientists who value simplicity and coherence are among the instruments we use to measure reality, instruments which have certain qualities and properties which determine what they will and will not observe, of the total amount there is to observe. True or false? (Note that I think you have a point behind all this, but it isn't the one you think it is.)

> But to assign a truth value to *simplicity itself* is not just wrong, it's semantically incoherent (IMHO).

Recall I said this:

> Luke: The only way I can see to reconcile this conundrum is to consider values to be more important than facts. That is: if you have the right values you can get the facts, while if you have the right facts but not the right values, getting more facts may be difficult if not impossible. Another way to say this is that on your conceptualization of facts and values, "getting the facts right" is insufficient for the continuing success of science, for the march toward infinity.

You're welcome to replace 'values' with 'qualities' if you'd like.

Luke said...

@Ron: (2/2)

> > > > If God made it so that we couldn't advance scientifically without learning from him in time, would that be so horribly bad?

> > > Yes.

> > Why?

> Because it would mean that the universe is not lawful. Progress can occur only at the whim of a narcissistic and jealous (His word, not mine) deity. It would be like living in North Korea with God playing the role of Kim Jong Un: God knows everything; we know only those things which God deigns to allow us to know.

It's curious you say this, after saying:

> > I can just as easily turn your position into something like, "I wish we could sail to infinity while leaving the poors behind

> I suppose you could do that if you want to completely ignore everything I've ever said about what my moral system actually is.

You just ignored everything I've ever said about my understanding of God and the Bible. (To be more technically correct, we each could cherry-pick things the other has said which are consistent with the corresponding terrible pseudo-quotation.)

Perhaps you could step back from how you view the Bible (which is based on some sampling of Christians across spacetime that you've yet to really specify IIRC), and consider this. Imagine how a deity who is wise, powerful, and knowledgeable, but created beings with significant moral freedom, would design reality. Would it make sense to put a rate limiter on scientific advance so that the worse one is socially/​ethically/​morally, the less one can advance scientifically?

I want to emphasize that I know you cannot conceive that such a deity (I explicitly avoided the omni- because you seem utterly convinced that kenosis is impossible for an omni-deity) would have designed *this* reality, at least without making it not that wise/​powerful/​knowledgeable. That's a difference between you and me, and somehow it seems to make me a horrible person, even if only in concept-land. But I'm happy to be seen in that light if that's what it takes to have this conversation. And perhaps I'm exaggerating. Sometimes it's hard to tell, with some of the harsh words I see.

Ron said...

@Luke:

> So there's no generalization of "structure-preserving map" which works for what I was saying? Nothing at all?

I don't know (mainly because I don't know the referent for "what I was saying"). But I'm not going to formulate your argument for you.

> What you say only makes obvious sense because you did not pick any examples where holding the opinion/​value/​whatever was not required for scientific inquiry

Like I said, I'm not going to formulate your argument for you.

> Can you provide algorithmic definitions of 'coherence' and 'simplicity'?

No, I can't. So what?

> Scientists who value simplicity and coherence are among the instruments we use to measure reality, instruments which have certain qualities and properties which determine what they will and will not observe, of the total amount there is to observe. True or false?

True, I suppose. Non-scientists who like chocolate ice cream are among those instruments as well. I really don't see a point here.

> The only way I can see to reconcile this conundrum is to consider values to be more important than facts.

That conundrum being:

> I don't understand how a component necessary to discovering truth can itself be neither true nor false.

A telescope is necessary (or at least awfully handy) for discovering certain truths about the universe. But a telescope is neither true nor false.

> You just ignored everything I've ever said about my understanding of God and the Bible.

No, I didn't. You asked *me* about *my opinion* about a hypothetical (and in my view counterfactual) situation. I get that your understanding of the Bible is different from mine. But I do not accept your understanding of the Bible. I think your understanding is wrong. So why would you expect your understanding of the Bible to inform *my* opinions of what life with God would be like?

> how you view the Bible (which is based on some sampling of Christians across spacetime that you've yet to really specify IIRC)

I'm not about to give you a full accounting of my approach to literary analysis. But FWIW, my understanding of the Bible comes not just from reading what SI-Christians have to say about it, but also by (imagine this!) actually *reading* the Bible, and thinking about what I've read, and reaching my own conclusions in light of the totality of my life's experiences, including everything I know about science and history and human nature. And yes, I get that you've done the same and reached different conclusions. It's a puzzle.

> Would it make sense to put a rate limiter on scientific advance so that the worse one is socially/​ethically/​morally, the less one can advance scientifically?

Absolutely not. Scientific progress and social progress both reinforce each other in a virtuous cycle. It makes absolutely no sense to artificially limit either one.

Luke said...

@Ron:

Point of clarification:

> Luke: If God made it so that we couldn't advance scientifically without learning from him in time, would that be so horribly bad?

> Ron: Yes.

> Luke: Why?

> Ron: Because it would mean that the universe is not lawful. Progress can occur only at the whim of a narcissistic and jealous (His word, not mine) deity. It would be like living in North Korea with God playing the role of Kim Jong Un: God knows everything; we know only those things which God deigns to allow us to know.

> Luke: You just ignored everything I've ever said about my understanding of God and the Bible. (To be more technically correct, we each could cherry-pick things the other has said which are consistent with the corresponding terrible pseudo-quotation.)

> Ron: No, I didn't. You asked *me* about *my opinion* about a hypothetical (and in my view counterfactual) situation. I get that your understanding of the Bible is different from mine. But I do not accept your understanding of the Bible. I think your understanding is wrong. So why would you expect your understanding of the Bible to inform *my* opinions of what life with God would be like?

So when *I'm* the one who introduces "God" into the conversation, *you* get to fill that word with whatever *you* want? Or am I wrong in thinking that I'm the one who introduced the term? Perhaps we ought to start saying L-God and R-God?

Ron said...

> So when *I'm* the one who introduces "God" into the conversation, *you* get to fill that word with whatever *you* want?

No. I don't subscribe to Humpty Dumpty's theory of language. But when you ask me my opinion ("... would that be so bad?") about a hypothetical involving God ("If God made it so that..."), I can only answer that using my conception of God. I get that there are people, you included, whose concept of God is different from mine, and so they answer differently. But you asked me what I thought, and so I told you. I haven't *ignored* what you've told me about God, I just don't agree with it.

If you want to *tell* me about what *you* think, then by all means be my guest. But then just *tell* me. Use declarative sentences, not hypothetical questions. If you're trying to be Socratic, it's not working.

Luke said...

@Ron: (1/2)

I really need to rewrite the software I wrote which makes "extract conversation" (what I've done below) a one-click operation. The following has to do with my use of 'homomorphism'.

> Ron: Yes, some values are necessary for discovering truth. No, that does not make those values true.

> Luke: So those values are not part of the homomorphism between model and reality?

> Ron: I didn't say that.

> Luke: If truth is "correspondence to reality", then if values are part of the homomorphism, they can be true/false.

> Ron: Introducing a weighty pedantic phrase like "part of the homomorphism" doesn't contribute anything to the discussion.

> Luke: I introduced homomorphism to bring in the technical definition:
>
> >> In algebra, a homomorphism is a structure-preserving map between two algebraic structures of the same type (such as two groups, two rings, or two vector spaces). (WP: Homomorphism)
>
> Either some values are part of the structure-preserving map or they are not. If they are part of the structure-preserving map, that is because there is structure in them which matches structure in reality. It's not complicated.

> Ron: Yes, I know. The problem is that you're abusing the terminology because reality is not an algebraic structure. Reality is not even in the same ontological category as algebraic structures.

> Luke: So there's no generalization of "structure-preserving map" which works for what I was saying? Nothing at all?

> Ron: I don't know (mainly because I don't know the referent for "what I was saying").

To be clear, "what I'm saying" is "If truth is "correspondence to reality", then if values are part of the homomorphism, they can be true/false." I'm now generalizing from the technical definition of:

     (1) structure-preserving map
     (2) where that structure is algebraic

to

     (1) structure-preserving map
     (2′) where that structure is … whatever reality's structure is

Does that make sense, or is it the case that "Your reasoning here is so incoherent I don't even know how to begin." still applies? If it makes sense, I will use the term "homomorphism′".
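To illustrate what I mean by "structure-preserving" (a standard toy example, my own addition, not anything Ron said): the logarithm maps the positive reals under multiplication to the reals under addition, and preserves the operation in the sense that log(a * b) = log(a) + log(b):

```python
import math

def preserves_structure(f, op_src, op_dst, a, b, tol=1e-9):
    """Check f(a op_src b) == f(a) op_dst f(b), up to floating-point tolerance.
    This is the defining property of a structure-preserving map."""
    return abs(f(op_src(a, b)) - op_dst(f(a), f(b))) < tol

# log : (positive reals, *) -> (reals, +) is a homomorphism:
assert preserves_structure(math.log, lambda x, y: x * y,
                           lambda x, y: x + y, 3.0, 7.0)
```

The generalization I'm after in (2′) is this same idea with "the group operation" replaced by whatever structure reality has.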

> But I'm not going to formulate your argument for you.

Do you think that might be a bit of an exaggeration?

> > What you say only makes obvious sense because you did not pick any examples where holding the opinion/​value/​whatever was not required for scientific inquiry

> Like I said, I'm not going to formulate your argument for you.

You don't see anything that is anti-idea-ism in giving precisely the examples you did when the topic was "Yes, some values are necessary for discovering truth."?

Luke said...

@Ron: (2/2)

> > Can you provide algorithmic definitions of 'coherence' and 'simplicity'?

> No, I can't. So what?

Can you come up with some sort of expected timeline where if we don't find algorithmic definitions of them by then, the hypothesis "Humans are no more powerful than Turing machines" becomes [significantly more] questionable? One of my suspicions, by the way, is that what I might call "values necessary for infinite science" involves things which cannot be algorithmically defined, which instead function to lead us into deeper and deeper understandings of reality. Here I'm switching from theory justification—which is where most of the philosophy of science has been focused—to theory discovery/​formulation.

> > Scientists who value simplicity and coherence are among the instruments we use to measure reality, instruments which have certain qualities and properties which determine what they will and will not observe, of the total amount there is to observe. True or false?

> True, I suppose. Non-scientists who like chocolate ice cream are among those instruments as well. I really don't see a point here.

If we did a "gene-knockout" on chocolate ice cream, would it destroy science? Recall your "That depends on which values you remove."

> A telescope is necessary (or at least awfully handy) for discovering certain truths about the universe. But a telescope is neither true nor false.

Interesting; to say "true telescope" is weird, but "She's a true scientist" is not. The intersecting meaning of "truth" would be "that which is important, of which we want to obtain more". But that's a digression [for now].

You're actually making me think that the telescope is the homomorphism′ to reality. In a sense I have things flipped: the telescope must be transparent to the phenomena we're trying to observe, neither distorting them nor introducing artifacts. Pure magnification is a structure-preserving map. Does this make sense? I think this observation might go somewhere interesting, but it is apparently so easy for me to write things you see as [approximately] utter nonsense.

> > You just ignored everything I've ever said about my understanding of God and the Bible.

> No, I didn't. You asked *me* about *my opinion* about a hypothetical (and in my view counterfactual) situation.

I think we've cleared this bit up now; I suggest we use the terms R-God and L-God going forward. Surely I can make some guesses about how R-God would do things differently from L-God, and you could as well? Without any … "cross-over" in thinking, how are we ever going to understand each other better?

> And yes, I get that you've done the same and reached different conclusions. It's a puzzle.

My upbringing has taught me to always question myself (whether my ideas are coherent or sound, whether I'm actually in any sense "good") before questioning the other person. Therefore I try really hard to get what others say to "make sense" in my mind. I actually get burned by that because often people are not as rational as I make them out to be in my modeling. :-/ Anyhow, I find that very few [smart?] people have the approach to interpretation that I do. Perhaps that explains something?

> If you want to *tell* me about what *you* think, then by all means be my guest.

I find that very hard when I cannot impedance-match it to what you, Ron, think.

> Scientific progress and social progress both reinforce each other in a virtuous cycle.

From whence do you draw this conclusion? I can see major arguments for and against.

Gregory Graham said...

Cute. This is thought provoking in that it challenges the more simplistic ideas of reality that are popular today, but I don't know how much it would help someone make progress towards the truth. Humor can help us see things in a different light, but too much silliness makes the reader wonder if there is any substance at all. This is my first time to read your blog, so I will have to check out other posts to see where you are going.

Ron said...

@Gregory:

Welcome!

If you want less whimsy, you might enjoy these:

http://blog.rongarret.info/2017/03/causality-and-quantum-mechanics.html

http://blog.rongarret.info/2017/03/causality-and-quantum-mechanics_20.html

http://blog.rongarret.info/2014/09/are-parallel-universes-real.html

http://blog.rongarret.info/2014/10/parallel-universes-and-arrow-of-time.html

Ron said...

@Luke:

> to say "true telescope" is weird, but "She's a true scientist" is not

This is a quirk of the English language. The word "true" has multiple meanings. It means something completely different in the context of "true scientist" than it does in the context of "true statement." Whether or not someone is a "true scientist" (or a true Scotsman) is a matter of opinion. Whether or not an objective statement is true is not.

> > Scientific progress and social progress both reinforce each other in a virtuous cycle.

> From whence do you draw this conclusion?

It seems self-evident to me. Scientific advances lead to economic progress, which in turn provides the resources to do more science. Economic progress also leads to more general social progress: the most liberal and tolerant societies are also the wealthiest. The most repressive societies are the poorest.

> I try really hard to get what others say to "make sense" in my mind.

So do I, but...

> a "gene-knockout" on chocolate ice cream

I'm afraid you totally lost me here.

wrf3 said...

@Luke asked @Ron: "Can you provide algorithmic definitions of 'coherence' and 'simplicity'?"

Ron said no, I'll say yes. By definition, a definition is algorithmic: a label is input, a set of labels is output. One can create a neural network that does this. (Note: it may be that there is some confusion between "algorithmic" and "precise", but this particular algorithm doesn't need to be precise. Humans certainly aren't. The weights in one person's network may very well produce different definitions for 'coherence' and 'simplicity' than the weights in another person's network).

@Luke: Note that Kolmogorov complexity is uncomputable.

So what? The description of Kolmogorov complexity is computable. When humans talk about infinity, they aren't doing infinite things. They are using finite, computable -descriptions- of infinity.

@Luke: Can you come up with some sort of expected timeline where if we don't find algorithmic definitions of them by then, the hypothesis "Humans are no more powerful than Turing machines" becomes [significantly more] questionable?

We can do it now, just like we can construct neural nets that recognize piles of sand, even though the question of what constitutes a pile has been argued for thousands of years.

BTW, Luke, two of your responses came in e-mail, but didn't show up here, so that's one reason why I haven't responded. Another reason is that these discussions never end, nor do they make any progress (except, perhaps, to strengthen one person's position that the other person is utterly wrong. ;-) ) Since I've got grandkid duty this weekend all by myself, I don't have time to indulge myself.

Luke said...

@Ron:

> > Interesting; to say "true telescope" is weird, but "She's a true scientist" is not. The intersecting meaning of "truth" would be "that which is important, of which we want to obtain more". But that's a digression [for now].

> This is a quirk of the English language. The word "true" has multiple meanings. It means something completely different in the context of "true scientist" than it does in the context of "true statement." Whether or not someone is a "true scientist" (or a true Scotsman) is a matter of opinion. Whether or not an objective statement is true is not.

So you disagree with the non-bold text (that's text you excluded from your quoting)?

> > > Scientific progress and social progress both reinforce each other in a virtuous cycle.

> > From whence do you draw this conclusion?

> It seems self-evident to me. Scientific advances lead to economic progress, which in turn provides the resources to do more science. Economic progress also leads to more general social progress: the most liberal and tolerant societies are also the wealthiest. The most repressive societies are the poorest.

The stagnating median wage in America seems like a counterexample. Furthermore we could consider life extension technology and mental augmentation technology—both on the radar. Do you think they will be evenly shared by humanity? Or do you think they might actually make the power differentials of humans even more extreme?

The fact that the US is less tolerant than Europe and yet wealthier also seems to conflict with your model. Furthermore, you might note that the top quintile of US workers can have a nicer life in the US than elsewhere. How does that impact the flow of highly talented workers in the world? I don't see this virtuous cycle nearly as clearly as you do. We could also examine the various ways that the more powerful nations in the world have screwed up the rest of the world, whether via colonization or fomenting coups which made those countries more repressive (but also more responsive to the great powers).

Do you think China will liberalize politically if/​once it gets sufficiently wealthy? Or do you think it's a live option that maybe they won't, that your hypothesis might be false? You seem to be a lot more willing to let humans run on automatic than me, Ron.

> > If we did a "gene-knockout" on chocolate ice cream, would it destroy science? Recall your "That depends on which values you remove."

> I'm afraid you totally lost me here.

You could have clicked on the link. But here it is:

> Luke: if I do the equivalent of gene-knockouts on various values, I think the practice of science would be hindered if not halted.

> Ron: That depends on which values you remove. "Respect for authority", for example, is a value that I believe is detrimental to the practice of science. So is respect for faith. (Both of these values are held in high regard on the American political right.)

Luke said...

@Ron: [correction]

Oops, the formatting of the first chunk was supposed to go like this:

> > Interesting; to say "true telescope" is weird, but "She's a true scientist" is not. The intersecting meaning of "truth" would be "that which is important, of which we want to obtain more". But that's a digression [for now].

> This is a quirk of the English language. The word "true" has multiple meanings. It means something completely different in the context of "true scientist" than it does in the context of "true statement." Whether or not someone is a "true scientist" (or a true Scotsman) is a matter of opinion. Whether or not an objective statement is true is not.

So you disagree with the non-bold text (that's text you excluded from your quoting)?

Luke said...

@wrf3:

> Another reason is that these discussions never end, nor do they make any progress (except, perhaps, to strengthen one person's position that the other person is utterly wrong. ;-) )

I was convinced from creationism → ID → evolution via internet discussion. I was also convinced to reject what I see as standard LFW accounts via internet discussion. Maybe I'm just weird? But my extended conversation with @Peter Donis gave me sufficiently well-formed questions that I can take to friends of mine who might just be able to form mathematics for chaotic systems passing through the equivalent of unstable Lagrangian points. Is that … utterly useless?

> BTW, Luke, two of your responses came in e-mail, but didn't show up here …

Forward the response notification email to @Ron and he'll fix it.

> @Luke asked @Ron: "Can you provide algorithmic definitions of 'coherence' and 'simplicity'?"
>
> Ron said no, I'll say yes. By definition, a definition is algorithmic: a label is input, a set of labels is output.

It is curious that you launched your participation off with "I really wish these discussions were more rigorous." You've violated your own principle in being so informal with "algorithm". Here's something informal which is much better:

>> An algorithm is an effective method that can be expressed within a finite amount of space and time[1] and in a well-defined formal language[2] for calculating a function.[3] Starting from an initial state and initial input (perhaps empty),[4] the instructions describe a computation that, when executed, proceeds through a finite[5] number of well-defined successive states, eventually producing "output"[6] and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input.[7] (WP: Algorithm)

What you really want, though, is something like the Turing machine specification, or something equivalent in power to it. Without such rigor, sloppiness lets you get away with just about anything.
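To show the contrast (a toy of my own, in the spirit of the WP definition): here is something that actually *is* an algorithm in the rigorous sense — a finite set of well-defined state transitions that provably terminates with an output. It computes the parity of a bit string.

```python
def parity_machine(bits):
    """A computation that proceeds through a finite number of well-defined
    successive states and terminates at a final ending state (the output)."""
    state = "EVEN"  # initial state
    transitions = {("EVEN", "1"): "ODD", ("ODD", "1"): "EVEN",
                   ("EVEN", "0"): "EVEN", ("ODD", "0"): "ODD"}
    for b in bits:  # each step is one well-defined transition
        state = transitions[(state, b)]
    return state

assert parity_machine("1101") == "ODD"
```

Note that every part of the WP definition is checkable here: finite description, well-defined states, guaranteed termination. "A label is input, a set of labels is output" gives you none of that.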

> this particular algorithm doesn't need to be precise

Is a non-precise algorithm actually an algorithm? Or is the … "fuzz" actually exceedingly important in stoking scientific revolutions? Without precision, we cannot answer such questions—we can't really even think them.

> The description of Kolmogorov complexity is computable. When humans talk about infinity, they aren't doing infinite things. They are using finite, computable -descriptions- of infinity.

So scientists aren't computing complexity (simplicity)?

> … we can construct neural nets that recognize piles of sand …

Imitation is not the same as [unbounded] innovation. See WP: Cargo cult. Neural nets do best when humans have collected orders of magnitude more examples than humans need to learn the thing, and use it to train them. Neural nets aren't going to draw a straight line with y-intercept = 0 through Hubble's original data.
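For the record, the fit I have in mind is elementary: least squares with the intercept pinned at zero has a one-line closed form. (The numbers below are invented stand-ins, *not* Hubble's actual 1929 measurements.)

```python
def slope_through_origin(xs, ys):
    """Least-squares fit of y = H * x with the intercept fixed at 0.
    Minimizing sum((y_i - H*x_i)**2) over H gives the closed form
    H = sum(x_i * y_i) / sum(x_i ** 2)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Invented distance/velocity pairs, scattered around a "true" slope of ~500:
distances = [0.5, 1.0, 1.5, 2.0]
velocities = [270, 480, 790, 990]
H = slope_through_origin(distances, velocities)
assert 400 < H < 600
```

The interesting question is not computing the slope; it's deciding, from noisy points, that a zero-intercept line is the right model in the first place.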

Ron said...

@Luke:

> So you disagree with the non-bold text (that's text you excluded from your quoting)?

To wit (AFAICT):

"The intersecting meaning of "truth" would be "that which is important, of which we want to obtain more". But that's a digression [for now]."

I agree it's a digression. I vehemently disagree that "truth" can plausibly be taken to mean "that which is important, of which we want to obtain more." Money is important, and most people want to obtain more. But conflating money and truth is absurd. (To say nothing of the fact that many people actively eschew truth. As a former creationist you should know that better than anyone.)

> The stagnating median wage in America seems like a counterexample.

Really? It seems like confirmation to me. The U.S. has been turning its back on science since the Reagan administration. From climate change denialism, to the cancellation of the SSC, to the unrelenting efforts to get evolution out of public schools and prayer back in, the political right has been waging a largely successful war, not just on science, but on truth in general. For example, they have been successfully promulgating the myth that rich people create jobs, that high taxes on rich people destroy jobs, and that strong unions are bad for the economy. All of these things are demonstrably false, and yet many people believe them, and government policy has followed. It is hardly surprising that the middle class would be eviscerated. It's exactly what one would expect under these circumstances. It's *by design*.

> The fact that the US is also less tolerant than Europe and yet wealthier

Yes, of course the correlation is not perfect. All kinds of other factors influence wealth. The wealthiest nation on earth per-citizen is Qatar, which is hardly a bastion of science or tolerance (though they do have one of the best news organizations in the world today, which says more about the sorry state of the rest of the world than it does about Qatar). But in the large, nations that embrace science (China, Korea, Japan, Israel, Germany, Sweden) do better than those that don't (Afghanistan, Egypt, Venezuela, Pakistan).

> Do you think China will liberalize politically if/​once it gets sufficiently wealthy?

It already has. (See e.g. https://en.wikipedia.org/wiki/LGBT_rights_in_China) And I think the trend will continue.

> You could have clicked on the link.

I did. You still lost me. Chocolate ice cream is not a value.

Luke said...

@Ron:

> I agree it's a digression. I vehemently disagree that "truth" can plausibly be taken to mean "that which is important, of which we want to obtain more." Money is important, and most people want to obtain more. But conflating money and truth is absurd. (To say nothing of the fact that many people actively eschew truth. As a former creationist you should know that better than anyone.)

You so quickly deconstruct claims like this; are you sure you're using valid reasoning? Here, for example, what if we rank-order the most important things? Is the top truth, or is it the scientific values/​qualities required to obtain truth? Is the second one of those two, or money? Is money really in the same category? But if you feel this is all too much of a digression, feel free to ignore it. It's just a bit tedious when you question my intellectual integrity so frequently and strongly.

> Really? It seems like confirmation to me. The U.S. has been turning its back on science since the Reagan administration.

Do you really want to say that the Right has had that much power, and the Left that little power?

> It is hardly surprising that the middle class would be eviscerated. It's exactly what one would expect under these circumstances. It's *by design*.

By design of the Right, the Left, or both? (I'll not pursue this and argue with you; the only way I know to really test these issues is to build a website that allows massive collaboration in accumulating evidence, developing models, and exposing it all to criticism. I'm just interested in your judgment/​opinion on the matter.)

> Yes, of course the correlation is not perfect.

Sure; errors can be falsification and they can also call for ad hoc hypotheses. Not all ad hoc hypotheses are bad. What surprises me is simply the great amount of confidence you had—so much that you said it was "self-evident".

> But in the large, nations that embrace science (China, Korea, Japan, Israel, Germany, Sweden) do better than those that don't (Afghanistan, Egypt, Venezuela, Pakistan).

How about this explanation: "Those with more science and technology can extract more of the wealth and concentrate it." That has little to do with social improvement and much to do with raw power.

> > Do you think China will liberalize politically if/​once it gets sufficiently wealthy?

> It already has. (See e.g. https://en.wikipedia.org/wiki/LGBT_rights_in_China) And I think the trend will continue.

It has moved a tiny bit towards current Western standards. Suppose, for sake of argument, the trend does not continue. Would that constitute sufficient falsifying evidence for your hypothesis?

> > You could have clicked on the link.

> I did. You still lost me. Chocolate ice cream is not a value.

Scientists liking chocolate ice cream is an opinion and you said "Values are a proper subset of opinions." Recall you said:

> Ron: True, I suppose. Non-scientists who like chocolate ice cream are among those instruments as well. I really don't see a point here.

My point is that this kind of value (liking chocolate ice cream) is 100% irrelevant to the discussion at hand. It's like the logo on the microscope that my wife uses to do science: were it different, the science could continue unabated. That was my point, here:

> Luke: if I do the equivalent of gene-knockouts on various values, I think the practice of science would be hindered if not halted.

Am I making any more sense, or shall I give up?

wrf3 said...

@Luke wrote, "I was convinced ... Is that … utterly useless?"
In the context of "these discussions" (i.e. free will, God, etc...), yes. And it gets worse.

@Luke: "It is curious that you launched your participation off with 'I really wish these discussions were more rigorous.' You've violated your own principle in being so informal with 'algorithm'."

Not at all. Pay attention. I said, "By definition, a definition is algorithmic." Providing a definition is algorithmic. You have an input, you have an output. You didn't add anything to the discussion by playing the pedant.

And, again, you wrote: "What you really want, though, is something like the Turing machine specification". A neural network is equivalent to a Turing machine (with the proviso that a Turing machine has infinite storage and humans do not; but because humans can make use of external storage, the entire universe is our tape, which for all intents and purposes can be considered infinite for these kinds of discussions). The neural networks in our brains provide definitions. Nothing sloppy about it. Terse, maybe, but not sloppy.
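For concreteness, here's a toy illustration of that equivalence claim (Python; the weights are hand-picked for the example): a single threshold neuron computes NAND, and since NAND is functionally complete, networks of such units plus unbounded storage reach Turing-machine power.

```python
def nand_neuron(x1, x2):
    # A single threshold unit with hand-picked weights and bias.
    # It fires (1) unless both inputs are 1 -- i.e., it computes NAND.
    w1, w2, bias = -2, -2, 3
    return 1 if w1 * x1 + w2 * x2 + bias > 0 else 0

# Truth table: (0,0)->1, (0,1)->1, (1,0)->1, (1,1)->0
```

NAND alone suffices to wire up any boolean circuit, so universality then only needs unbounded memory, which is where the "universe as tape" comes in.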

@Luke: Is a non-precise algorithm actually an algorithm?

Unfortunately, I put some ambiguity in my statement, although I followed up with an example to show what I meant. When I said, "this algorithm doesn't need to be precise", you took it to mean "be precise in its operation", when the context shows "precise in its output" (because, as I added, "Humans certainly aren't. The weights in one person's network may very well produce different definitions for 'coherence' and 'simplicity' than the weights in another person's network").

However, as to "fuzz", see The Toaster Enhanced Turing Machine.

@Luke: So scientists aren't computing complexity (simplicity)?

That's not what I said. Scientists aren't doing infinite calculations. We don't compute things a computer, in principle, can't.

@Luke: Imitation is not the same as [unbounded] innovation.

First, humans don't have "unbounded" innovation. Second, imitation is innovation. After all, that's how the Turing test works. If you can't distinguish "imitation" from "the real thing" (whatever the "real thing" of intelligence happens to be), then you have the "real thing".

@Luke: "Neural nets aren't going to draw a straight line with y-intercept = 0 through Hubble's original data."
You know this, how, exactly? Computers, with far less hardware than humans, have come up with novel proofs, for example.

Luke said...

@wrf3:

> You didn't add anything to the discussion by playing the pedant.

Actually, I discovered that you meant to restrict everything to what is computable. That was a valuable result. If not to you, then I'll bet to others.

> When I said, "this algorithm doesn't need to be precise", you took it to mean "be precise in its operation", when the context shows "precise in its output" (because, as I added, "Humans certainly aren't. …

Such "algorithms" ought not to be trusted if they're going to be used to conclude something like "even a small Δv model of free will is impossible". Where the imprecision lies actually doesn't matter for that purpose.

> > Imitation is not the same as [unbounded] innovation.

> First, humans don't have "unbounded" innovation.

The link was to David Deutsch's The Beginning of Infinity. It was intended to help define the term. Do you think Deutsch's central thesis in that book is wrong?

> Second, imitation is innovation.

Cargo cult imitation is not innovation.

> If you can't distinguish "imitation" from "the real thing" (whatever the "real thing" of intelligence happens to be), then you have the "real thing".

Sure, show me algorithms and robots conducting bleeding-edge science and you'll pique my interest. Until then …

> > Neural nets aren't going to draw a straight line with y-intercept = 0 through Hubble's original data.

> You know this, how, exactly? Computers, with far less hardware than humans have come up with novel proofs, for example.

My brother-in-law happens to be a leader in the field of machine learning and we have discussed such matters. A general hypothesis-generation system (I've never heard ML folks call huge neural nets "algorithms") is the gold standard. ML folks are very far away from such a thing and are not sure they will get there with ML.


FYI wrf3, I'm not one of those people who say that computers "will never have souls". I'm just skeptical of promissory notes. I'm also skeptical to claims that we have much of a clue as to how thought and especially hypothesis generation actually works. I hope we can continually make progress on that, though.

Ron said...

@Luke and @wrf3:

FWIW: when I answered "no" to "Can you provide algorithmic definitions of 'coherence' and 'simplicity'?" I meant specifically that I personally at the present time cannot write down such algorithms (and I certainly can't do it in a blog comment). I did not mean to imply that it was impossible in principle.

I also think that my inability to do this is completely irrelevant to any discernible point that anyone could possibly be trying to make in this discussion.

> I've never heard ML folks call huge neural nets "algorithms"

And yet they are. Anything that runs on a computer is an algorithm. There was a time when mathematicians resisted calling computer-assisted proofs "proofs". They have mostly since come to their senses. I think it's only a matter of time before the ML folks do the same.

wrf3 said...

@Ron wrote: "I also think that my inability to do this is completely irrelevant to any discernible point that anyone could possibly be trying to make in this discussion."

If Luke can give an example of something that humans can do, but can't make a computer do, then humans are better than Turing machines. A result devoutly to be wished by the free-will camp.

@Ron: "Anything that runs on a computer is an algorithm". Now it's my turn to play the pedant. By definition, an algorithm terminates.

wrf3 said...

@Luke wrote: "Such 'algorithms' ought not be trusted if they're going to be used to conclude something like 'even a small Δv model of free will is impossible'."

Why? Because any evidence that man does not have free will cannot be admitted? Or you have other reasons in mind?

@Luke: "It actually doesn't matter where the imprecision is for that matter."

Oh, but it does. For example, if the "imprecision" is due to a physical process that you don't control, can it be said that you have free will?

@Luke: "Cargo cult imitation is not innovation."

I didn't say "cargo cult". You did. The Turing test is based on imitation. I would expect the responses of my two-year-old grandson to differ from those of a "cargo cult" islander, which in turn differ from Ron's. At the moment, computers are somewhere between the two-year-old and the islander. And, as an aside, the responses of the six-month-old Norwegian Elkhound remarkably parallel those of my grandson in a number of areas.

@Luke: "Sure, show me algorithms and robots conducting bleeding-edge science and you'll pique my interest."

Surely you know that, with computers as with brains, operational complexity is related to organizational complexity. Right now, the human brain is the most organizationally complex thing we know of. We'll get there. And the reason we'll get there is that we know how the brain works. We don't need to posit anything else -- except physical complexity.

@Luke: I've never heard ML folks call huge neural nets "algorithms".
Algorithms are a subset of programs. Turing machines handle both.

@Luke: ML folks are very far away from such a thing and are not sure they will get there with ML.
I'm not sure they'll get there that way, either. The goal of ML is to find solutions to problems. But finding solutions to problems is not the essence of human intelligence. We might have more success if we use a random approach, like evolution did. YMMV.

Ron said...

> By definition, an algorithm terminates.

Nope. That would not be a very useful definition because termination is undecidable. Go look up the actual definition of "algorithm" on Wikipedia or the dictionary, and while you're at it read up on the "halting problem."

wrf3 said...

@Ron said, "Nope... Go look up the actual definition of 'algorithm' on Wikipedia..."

Here it is:

"An algorithm is an effective method that can be expressed within a finite amount of space and time[1] and in a well-defined formal language[2] for calculating a function.[3] Starting from an initial state and initial input (perhaps empty),[4] the instructions describe a computation that, when executed, proceeds through a finite[5] number of well-defined successive states, eventually producing "output"[6] and terminating at a final ending state."

Note footnote #5: "A procedure which has all the characteristics of an algorithm except that it possibly lacks finiteness may be called a 'computational method'" (Knuth 1973:5).

Ron said...

Well, I stand corrected.

With all due respect to Don Knuth, adding this condition to the definition is not terribly useful because the question of whether a "computational procedure" is or is not an algorithm is undecidable.

Luke said...

@Ron:

> FWIW: when I answered "no" to " Can you provide algorithmic definitions of 'coherence' and 'simplicity'?" I meant specifically that I personally at the present time cannot write down such algorithms (and I certainly can't do it in a blog comment). I did not mean to imply that it was impossible in principle.

Neither do I claim they are impossible in principle. I just think we should distinguish between facts and promissory notes.

> I also think that my inability to do this is completely irrelevant to any discernible point that anyone could possibly be trying to make in this discussion.

If thinking ever involves resonance between states where it settles based on ever smaller oscillations, mathematically there could be a limit of an infinite sequence. As far as I can tell, the Turing machine formalism cannot necessarily handle such an infinity. This is related to thinking of thinking as a dynamical system; see What Might Cognition Be, If Not Computation?, which I linked earlier to @Peter Donis. I'm also drawing on mathematical biologist Robert Rosen:

>> In general, the limit of a sequence of mechanisms need not be a mechanism; the limit of a sequence of mechanical models of a system may still be a model but not a mechanical one. (Life Itself, xvii)

If you require a formal proof of this, I can probably figure it out.

> > I've never heard ML folks call huge neural nets "algorithms"

> And yet they are. Anything that runs on a computer is an algorithm. There was a time when mathematicians resisted calling computer-assisted proofs "proofs". They have mostly since come to their senses. I think it's only a matter of time before the ML folks do the same.

IIRC there is still a lot of discussion about what exactly computer-assisted proofs demonstrate. I recall discussing this precise matter with a mathematics senior at Caltech who was one of those geniuses who could effortlessly overload on courses like crazy. His issue was that computer-assisted proofs don't necessarily deliver mathematical intuition. I am fuzzy on the further details, but I suspect the problem is that it is harder to build on computer-assisted proofs.

When it comes to using the word "algorithm" with ML, I suspect part of the problem is that ML is often impenetrable in how it works, much like some computer-assisted proofs. But if it's impenetrable, then one cannot use it for reasoning where the reasoning can be checked at every step. Or perhaps more technically, some steps might be rather opaque and it might be questionable to just assume that everything is ok. This goes against the general idea of an algorithm, and it especially goes against the idea of a valid, sound argument. Then again, one might try to play on the asymmetry between finding an answer and verifying a potential answer, an asymmetry which would exist if P ≠ NP.
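To make that asymmetry concrete, here is a small subset-sum sketch (Python; names are mine, and the naive search is exponential while the check is linear):

```python
from itertools import combinations

def verify_subset_sum(nums, target, certificate):
    # Checking a proposed answer: one pass over the certificate.
    return all(x in nums for x in certificate) and sum(certificate) == target

def find_subset_sum(nums, target):
    # Finding an answer: naive search over all 2^n subsets.
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return list(combo)
    return None
```

If P ≠ NP, no trick ever collapses the finding side down to the cost of the checking side.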

I will consult my brother-in-law about this when I see him next.

wrf3 said...

@Ron: "With all due respect to Don Knuth, adding this condition to the definition is not terribly useful because the question of whether a "computational procedure" is or is not an algorithm is undecidable."

I'm really puzzled by your take on this. Just because we can't tell whether or not some method halts in the general case doesn't mean that we can't tell whether or not a specific method halts. We can. We can prove that Euclid's method for finding the greatest common divisor of two numbers halts. We can prove that insertion sort halts. And because we know they halt -- and can prove they halt -- we can analyze their time and space characteristics. That's why we know to never use bubble sort, except as a pedagogical bad example. That's why Knuth's Analysis of Algorithms was/is so important.
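For instance, the termination argument for Euclid's method fits in a comment (a minimal sketch in Python):

```python
def gcd(a, b):
    # Euclid's method: each iteration replaces (a, b) with (b, a % b).
    # The second argument strictly decreases and is bounded below by 0,
    # so the loop provably terminates.
    while b != 0:
        a, b = b, a % b
    return a

# e.g. gcd(48, 18): (48, 18) -> (18, 12) -> (12, 6) -> (6, 0) -> 6
```

That same strictly-decreasing measure is what lets us bound the running time, which is what analysis of algorithms is about.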

Ron said...

> Just because we can't tell whether or not some method halts in the general case doesn't mean that we can't tell whether or not a specific method halts. We can.

Sometimes. Not always. For example, it is straightforward to write a program that will systematically search for a counterexample to the Goldbach conjecture and halt if it finds one. I think it's silly to have to wait until we know whether or not the Goldbach conjecture is true before we can decide whether or not this program is an algorithm.
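A sketch of such a program (Python; the helper names are mine):

```python
def is_prime(n):
    # Trial division: sufficient for a sketch.
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def has_goldbach_pair(n):
    # True iff the even number n is a sum of two primes.
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n - 1))

def find_goldbach_counterexample():
    # Systematically search even numbers; halt at the first counterexample.
    # Whether this loop ever terminates is exactly the open Goldbach conjecture.
    n = 4
    while True:
        if not has_goldbach_pair(n):
            return n
        n += 2
```

Every line is trivially checkable, yet deciding whether the last loop terminates is an open problem.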

Also, I can take a non-algorithm and turn it into an algorithm simply by forcing it to halt after it runs for a finite number of steps, even if that finite number would require more time than the heat death of the universe to reach. So there is no possible observable difference between the I/O behavior of an algorithm and a non-algorithm, and again this seems like a silly distinction.
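The forcing trick is equally easy to sketch (an illustrative wrapper, not anyone's actual proposal):

```python
def run_with_step_limit(step_fn, state, max_steps):
    # Drive any single-step function, but force a halt after max_steps
    # iterations. The result terminates by construction -- the counter
    # alone guarantees it, no matter how large max_steps is.
    for _ in range(max_steps):
        done, state = step_fn(state)
        if done:
            return ("halted", state)
    return ("step limit reached", state)

# A "search" that never succeeds, forced to stop anyway:
# run_with_step_limit(lambda s: (False, s + 1), 0, 1000)
# -> ("step limit reached", 1000)
```

With max_steps set astronomically high, no observer could ever distinguish the wrapped version's I/O behavior from the original's.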

But I've never been a big fan of quibbling over terminology.

Luke said...

@Ron:

> But I've never been a big fan of quibbling over terminology.

That may be true, but it can be … frictious to get to the point where one realizes the discussion is quibbling over terminology. :-p Or maybe you and I just have superpowers when it comes to using terms differently. I blame the 'total depravity' misunderstanding on you, though. I myself love to quibble over certain terms like 'nucleosome', which refers not really to a thing, but to two things: a piece of DNA wrapped around a protein core. Maybe I'm a reductionist still in the closet?

In the meantime, these suffix symbols are your friends:

     †
     ‡
     *
     ′
     ″
     ‴
     ⁗

I often find that people don't mean the full extent of the term they're using, or are making a slight error, but I can internally correct with high chance of success. In cases where it's iffier, using 'LB-term' and 'OP-term' can work. I'd be interested to know why @wrf3 thinks termination of the algorithm is important. Maybe … because simulated beings would make bad computing instruments if they refused to accept death? (This is a subtle allusion to the alternate plot device for The Matrix where humans were used for computation instead of batteries.)


By the way, there's a general term for my homomorphism′, morphism:

>> In many fields of mathematics, morphism refers to a structure-preserving map from one mathematical structure to another. The notion of morphism recurs in much of contemporary mathematics. In set theory, morphisms are functions; in linear algebra, linear transformations; in group theory, group homomorphisms; in topology, continuous functions, and so on.
>>
>> In category theory, morphism is a broadly similar idea, but somewhat more abstract: the mathematical objects involved need not be sets, and the relationship between them may be something more general than a map, although has to behave similarly to maps, e.g. has to admit associative composition. (WP: Morphism)

I'm actually not sure whether to use the category theory version or more restrictive 'map' version; if we talk about what's really in the brain, the CT version might be more accurate. CT is used for thinking and brains: The Categorical Imperative: Category Theory in Cognitive and Brain Science. I've started the author's A New Foundation for Representation in Cognitive and Brain Science: Category Theory and the Hippocampus, but I haven't gotten too far yet.
