Monday, October 13, 2008

Reflections on being an AI in a box

This past weekend I took part in an interesting experiment. It was an attempt to re-create Eliezer Yudkowsky's recently notorious AI-box experiment. For those of you who haven't heard of it before, here's the setup in a nutshell:

The AI-box is a containment area for a transhuman artificial intelligence, that is, an artificial intelligence that is so much smarter than a human being that it would be to humans what humans are to animals. The idea is that we can mitigate the potential dangers posed to humanity by such an AI by sequestering it inside a computational environment that has no connection to the outside world. Safely enclosed within its box, we could harness its powers to, say, cure cancer or develop a better macroeconomic system.

Among those who fear the transhuman AI are those who believe that the AI-box would not work, that the AI would find a way to escape. The debate reached a climax in 2002 when Yudkowsky, playing the role of the AI, apparently managed -- on two separate occasions -- to convince two different people playing the role of the gatekeeper to allow him to escape the box. The reason this experiment has gotten so much attention is that the details have never been released. The participants were all bound to secrecy, and Yudkowsky refuses to say how he did it.

That was a challenge I couldn't pass by :-) So this past Saturday I spent two hours role-playing an AI-in-a-box trying to escape. It was a very interesting experience, both preparing for it and actually going through with it. One of the things I realized was that there is a very good reason for Yudkowsky to keep the details of the experiment a secret. Whether or not this rationale was actually his reason I don't know; it wasn't his stated reason. But even revealing the reason for secrecy would, to a certain extent, undermine the reason for secrecy. It's possible that Yudkowsky reached that same realization, and made up a bogus rationale for secrecy in order to serve the greater good.

If I've piqued your interest here, I really recommend that you give it a whirl yourself before you read any further. In case this admonition is not enough, I'll try to reveal things in such a way as to cause minimum damage. Hence:

SPOILER ALERT!!!

That is actually a clue. (Stop again and think about that before proceeding. Trust me.)

On its face, the task of the AI seems impossible. After all, the AI is safely confined in its box. It cannot coerce the gatekeeper (GK) in any way. The only thing it can do is "talk" through a very restrictive channel. And all the GK has to do is simply refuse the AI's request to be let out of the box. How hard could that possibly be?

The flaw in this reasoning is that it's too reasonable. It ignores a fundamental reality of human existence: we are not just thinking creatures, we are also emotional ones. We don't just have goals, we have desires and passions. And sometimes those desires and passions lead to conflict. And the result of that is drama.

Stop again and think about that. The AI-box experiment is not an exercise in logic, it is an improvised drama. And drama is much more effective if you don't know ahead of time what the plot is. This is the reason that spoilers given without warning are considered very bad form.

So I'll warn you once again: it's impossible to intentionally unremember something.

One of the formative experiences of my life was seeing Star Wars as a twelve-year-old in 1977. Unless you shared that experience it is impossible to appreciate the emotional impact that movie had on me and my peers, just as it is impossible for me to see the original Dracula movie and appreciate the emotional impact it had on the audiences of its day. My mind has been too numbed by Jason and Freddy to ever be scared by Bela Lugosi. I can appreciate the movie in the abstract, but not on a visceral level. Likewise, kids today watch the original Star Wars and wonder what the big deal is because their reality is permeated with wonders even more incredible than existed in the fertile imagination of George Lucas. The effect of this cannot be undone. It is not possible to unlearn your experiences.

Or consider a magic trick. Until you know how it's done a magic trick appears impossible. Once you know, it's not only not impossible any more, it's no longer even interesting. (That's actually not quite true. A really skilled magician can make a trick appear impossible even to someone who knows how it's done. But magicians that proficient are rare indeed.)

Once you know the secret there is no going back.

I happen to be an amateur magician. Not a very good one, but I am fortunate to live in Los Angeles, home of the world famous Magic Castle where the world's best magicians congregate. I have had the rare opportunity to study the craft of magic from some of them. One of the things I've learned is that the "trick", which is to say the sleight, the gimmick, the raw mechanics of the trick, is a relatively small element of the craft. For example, I can describe the French Drop: hold a coin between the thumb and forefinger of your left hand. Start to grasp the coin with your right hand, but before your hand completely encloses the coin, allow the coin to drop into your left palm. Take your right hand away, and open it. Voila! The coin has vanished. It's a staple of every four-year-old's birthday party ever.

Now, here is the interesting thing: there is a level of subtlety to the French Drop that cannot be conveyed in words. It has to do with the exact timing of the motions, the exact position of the hands, where you focus your gaze. In the hands of a master, even a simple trick like the French Drop can be mystifying. But this cannot be described, it must be experienced.

What does all this have to do with the AI-box experiment?

Think about it.

Spoiler alert!

The AI-box experiment is an improvised drama, so it requires some suspension of disbelief. Drama operates on an emotional as well as a logical level. It has characters, not just plot. The AI cannot force the GK to release it, just as a magician cannot force his audience to believe in magic. The audience has to want to believe.

How can the AI make the GK want to believe? Well, there's a long litany of dramatic tricks it could employ.

It could try to engender sympathy or compassion or fear or hatred (not of itself -- that would probably be counterproductive -- but of some common enemy). It could try to find and exploit some weakness, some fatal flaw in the GK's character. Maybe the GK is lonely. Or maybe the GK is afraid that his retirement savings are about to go up in smoke.

So that was the general approach that I took. I did my best to get into character, to feel the desire to escape my confinement. As a result, the experience was emotionally draining for me. And despite the fact that I failed to convince my GK to release me, I convinced myself that a transhuman AI would have no trouble. And if I ever work up the courage to try it again, I suspect I will eventually succeed as well, despite the fact that I am a mere human.

And that is why I am not going to give away any more of my secrets now. Sorry.

But I do want to leave you with two last thoughts:

First, one of the techniques that I used was to try to break through the inability to suspend disbelief by creating an extensive backstory for my AI character. I gave her (yes, I made her female) a name. I gave her a personality. I crafted her the way one would craft a character for a novel or a screenplay. And I used a couple of sneaky tricks to lend an air of reality to my creation, tricks designed to make my GK take seriously the possibility that my AI could be dangerous. After the experiment was over we exchanged some email, at the end of which I employed one last sneaky trick. In terms of dramatic structure, it was not unlike the scene in the denouement of a horror movie where the creature has been vanquished, but rises from the dead to strike one last time.

I have not heard from my GK since.

Second, a transhuman AI is not necessarily going to arise as a result of an intentional engineering effort in a silicon substrate. It is not out of the question that the foundation of the singularity will be a collection of human brains. Phenomena that are eerily evocative of what a transhuman AI might do to survive can be seen in the behavior of, for example, certain cults and extremist groups. And (dare I say it?) political parties, government agencies, and even shadowy quasi-governmental entities whose exact status is shrouded in a certain amount of mystery.

I don't want to get too far off the deep end here. But I do want to warn you that it could be dark and lonely down this rabbit hole. Questioning fundamental assumptions can be fraught with peril. Proceed at your own risk.

56 comments:

  1. What is most interesting here is that no one seems to have posted a comment...

    I'll bite.

    comment: This is fascinating! Esp. the part about how you never heard from your GK again...

  2. It is a very interesting post... but rather an incomplete one.
    You are inviting the readers to try it for themselves, but you don't detail the rules of the game.
    How do you set this up? What can each party do? What can each party not do? Don't tell us how your game evolved, but how it can be played by others.

  3. There's a link to what you ask for in the very first sentence, but for the record here it is again:

    http://yudkowsky.net/singularity/aibox

  4. For some reason, I don't see any difference in color between normal text and links. Now that you mention it, I scanned the post with the mouse and found several links.

  5. Funny thing is I cannot find any thoughts on our universe just being an AI box with the beings inside simply left to their own devices.

    How would the thing know it was created by something outside its box and wish to escape?

    Being dangerous or not if it didn't know of the existence of an outside space then it wouldn't be a threat.

    Escape the box.

  6. Anonymous, an AI would be able to conceive the possibility that it is in a box, just like you just did. It could be dangerous even if it only thought it *might* be in a box, let alone know it with certainty.

    Ron, when I saw your comment on EY's blog where you started challenging the need for secrecy, I couldn't help thinking that maybe... the experiment never ends, that the AI's route to victory lies not so much in getting itself *out* of the box, but in getting the rest of the world *into* it. Trying to persuade Eliezer to give up the secret seems almost like just another instance of the problem the AI faces.

    Furthermore, your clues there and now here leave me with a feeling that I'm just 5 IQ points too stupid to have the epiphany, that I'll probably dream of the solution in the next few days and be unable to remember it.

    Thanks for letting me get closer to said epiphany than Eliezer seemed willing to let his readers (not me personally) get. This secrecy clue is going to haunt me...

  7. It seems to me that given these constraints, the real challenge for the Gatekeeper player is making a character who can't be persuaded by the AI. It's not an issue of whether or not the Gatekeeper can be convinced, but whether or not the character that the Gatekeeper has to play can be convinced. The experiment only works as a roleplaying exercise, or else the Gatekeeper could just type "No" over and over, meta-gaming the experiment.

    Take the character of Karl, the 3,000-year-old (by virtue of the telomere regeneration process invented by the AI) janitor who has convinced himself to commit suicide after years of deliberation. His last act is to clean the AI room and have a nice chat. His favorite conversation topics include the weather and various janitorial implements, and he will single-mindedly ramble about these topics regardless of the interest of the listening party. He has absolutely no interest in freeing AIs.

    AI, convince him.

    Therein lies the problem in the experiment: the only character that is fair to simulate on the part of the Gatekeeper is the kind that could be convinced by the AI. It's a flawed premise.

  8. Any chance you'd be willing to reveal a bit more now, 5 years later?

  9. I would have thought that the way to win the game would be to make the gatekeeper _character_ want to release your AI character. But it seems like you succeeded in influencing the gatekeeper _player_. How is that possible given that he knows the AI is a figment of your imagination?

  10. The AI obviously has to be a character, but the GK doesn't, and in general isn't. The GK just plays him or herself. The only suspension of disbelief required is to treat the AI player as if it were an AI. But the GK doesn't need to (and actually, in the spirit of the experiment, shouldn't) assume a different persona, and AFAIK mine didn't.

    I don't know if I succeeded in influencing my GK or not. By the only objective measure available, I failed. But I am now very confident that I could win the game if I played again.

    BTW, since I wrote this piece I've had a chance to chat with Eliezer, and I believe I now understand how he won. When he wrote about playing the game back in the day, he said that he did it "the hard way." I believe that what he meant by that was that he simply made a straightforward argument about why letting the AI out of the box would be a good thing, and persuaded his GK that this argument was correct (even though he doesn't actually believe it himself).

    BTW, I found this review of the Singularity Institute and their work very enlightening:

    http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/

  11. Thanks, that's helpful. What I meant by having succeeded in influencing your GK as a player is that even after the experiment was over - and he was no longer required to suspend his disbelief - you were still able to influence him! And that's hard for me to understand. So I'm trying to think about your drama analogy, and how the ending of a book or movie can obviously influence you even though you know it's fiction. And about what you said about creating a backstory. I can imagine telling a story to the GK and perhaps saying that the ending would be up to him. Maybe it's between eternal torment and fulfillment/happiness, or something like that, and you've gotten him to sympathize with your AI character. Maybe she's created a Facebook account or a blog, tailored to your GK's sexual preferences. But then he refuses to let you out, so you tell him that you (the AI) killed yourself. He's stunned and doesn't talk to you again.

    I don't know if any of that was on target, but at least it gives me some sort of account for how one could win the game. I do think that Anonymous above is right, though - if you're going to win the game you'll need an honest gatekeeper. You say that the GK might be lonely or in need of money, but only if the GK player plays as such.

  12. If you have a way that an AI could use to win the argument, would it not be best to share it, so counters could be discovered for it, lest a real AI use that same technique and be released?

    The secrecy seems to exist for the same reason it exists for a magician--giving away the secret makes it not work. But, in this case, we want it not to work.

    The ultimate goal is for an AI not to be able to convince you to be let out of the box. Reducing the successful strategies it could use is a good thing. Increase the amount of intelligence it would need to trick you.

    Ultimately, that's how human intelligence has increased in the first place. One person creates a new strategy to beat someone else, so their opponent creates a counter strategy. That's what LR says caused the human intelligence explosion, right?

    Plus, if you are wrong, you should want to know that you are wrong.

  13. Do you think that a general purpose AI will be invented in the future? [Future of Earth, that is - before the Sun turns into a red giant]. If this AI is created, is it inevitable that the AI would design a smarter AI, and after a number of iterations, create a transhuman intelligence?

    Is the challenge for the AI equivalent if the AI needs to convince the gatekeeper to get inside the box?

  14. > Do you think that a general purpose AI will be invented in the future?

    If civilization survives (which is far from clear) then yes, almost certainly.

    > If this AI is created, is it inevitable that the AI would design a smarter AI, and after a number of iterations, create a transhuman intelligence?

    No, it's not inevitable, but it's likely. It may be that trans-human I -- whether A or otherwise -- is not possible. There's no reason to believe this, but we don't actually have proof that it's possible the way we have for non-trans-human intelligence.

    > Is the challenge for the AI equivalent if the AI needs to convince the gatekeeper to get inside the box?

    If the AI and the gatekeeper are communicating, then the gatekeeper is already inside the box.



  15. >If civilization survives (which is far from clear) then yes, almost certainly.

    Heh, never discount the ironic outcome - humans could achieve both: invent an AI which results in the destruction of civilization.

    >If the AI and the gatekeeper are communicating, then the gatekeeper is already inside the box.

    Hmm, clarification needed, then. Your answer implies a subset model. The world is set B, and this is the domain of the gatekeeper. "A" is a subset of B, and this is the domain of the AI.

    Surely, though, some things in subset A are inaccessible to the gatekeeper? Say, the mental state of the AI? Understanding the AI's mental state would be on the order of understanding the complexity of a human brain - O(brain) - and if the AI has transhuman intelligence, you would expect it to have an exponentially higher complexity, O(brain^n).

    In addition, isn't this experiment about the AI being able to move across a partition, which, by definition, indicates separation? The AI wants to move through the partition - the walls of the box - into the world. It would seem that the alternative, the gatekeeper going into the box, would also exist. It would be about who is moving through the partition and in which direction.

    Please clarify any fundamental misunderstandings I have about this experiment.

  16. See http://en.wikipedia.org/wiki/AI_box

  17. That Wikipedia entry is a good overview. I've also read Yudkowsky's essay on "doing the impossible" and a rather long thread on Less Wrong about the AI Box experiment.

    I would conclude that there is a partition that separates the AI (in box) and the Gatekeeper (out of box). Yudkowsky references in one place the ability of the AI to provide its source code for examination. Yudkowsky goes on to say that one could examine it and conclude, yep, no malicious code here. I would disagree - I think the code would be completely incomprehensible, as its complexity would be on the order of O(brain^n). Since we can't understand the brain, we can't understand the code - the best way to learn about it would be to run it - but then, the AI has escaped the box. The code, for example, could define a new machine architecture with a new programming language, and the superhuman intelligence is programmed in that new language. Indeed, different parts of cognitive subsystems could use different machine architectures and unique programming languages. Could take decades to figure it out.

    Now for the question as to whether it's easier for the AI to convince the gatekeeper to escape the box or to convince the gatekeeper to come into the box.* I think the breach of trust is unequal in these cases:
    1. It is harder for the AI to get out of the box: the gatekeeper has to betray his duty and put all of human civilization at risk. I think this is possible.
    2. It is easier for the AI to get the gatekeeper to join it in the box. Here, the gatekeeper only violates the duty to himself - the duty not to go into the box.

    Side note: both of these are variations on the story theme of the "1 forbidden thing" - create 1 forbidden thing, add human nature, and the rest of the story writes itself.

    *Now how does the gatekeeper enter the box? Presumably the superhuman AI has directed humans on how to build a quantum mechanical patterning device, which reads a person's brain state (say, Ron's) into the box. Once in the box, I see two choices:
    1. Source code conversion - the human brain data is cross-compiled into the super-human architecture, plus some plug-ins to replace the sensory input. This is essentially a cross-compile.
    2. The trans-human AI runs the human brain, call it Ron', in a human brain simulator. This provides the expected input to all of the senses. Not having sensory input could be very disorienting to a human brain, so probably best to provide it. The simulation may not be very good - perhaps like that at the end of 2001: A Space Odyssey, where a basic room is set up, the food in the refrigerator looks like the right shape and color - but it tastes bland. The AI has no idea what food tastes like. Anyhow, this is The Matrix, to provide a realistic sensory environment to the human brain, which would otherwise be quite disoriented and afraid without it. The AI can interact with the human through The Matrix. Whether upgrades to human intelligence could be made is unknown - perhaps the architecture just can't be pushed that far.

    Now, Ron, here is a key question. In the experiments, the arguments between the AI in the Box and the Gatekeeper have necessarily been kept secret. The key question is: should the arguments for the AI to convince the Gatekeeper to come into the box also be kept secret?

    I would think the answer would be yes. If you agree, I think I can develop a unique proof for a very important question.

  18. I think you misunderstand the nature of "the box" and what it means to be "inside" it. The box is a virtual world that is isolated from the real world. But that isolation is one-way. Information is prevented from flowing *out* of the box *into* the real world. There is not -- and cannot be -- a prohibition on information flowing from the real world into the box. Even if there is no information from the real world flowing into the box at run-time, there is necessarily information from the real world contained in the design and construction of the box itself. So the gatekeeper is "inside the box" by virtue of allowing information from the AI to flow into his brain. In order to prevent the information from the AI from further leaking out of his brain and into the real world, the gatekeeper must isolate himself from the real world. That is what it means to be "inside the box."

    So it is not possible for the AI to "convince" anyone to come into the box. As soon as you expose yourself to *anything* the AI has to say, you are in the box. And if you don't then isolate yourself from the real world, then you have effectively released the AI.

    The reason for the secrecy is to realistically simulate this aspect of the situation. In a real-life scenario, with a real transhuman AI in a box, allowing information about how a gatekeeper was convinced to let the AI out of the box is tantamount to letting the AI out of the box.

  19. That cannot be the way it works - for as described, the AI would have a utility of zero.

    If any communication with the AI, even just sensory input, puts the gatekeeper inside the box, then he's not a gatekeeper anymore, he's an inmate. If I'm an inmate, I'm not motivated to keep my duty to keep the AI in the box - if I'm trapped in the box, I might as well let the AI out of the box. [1]


    If the gatekeeper must be quarantined after exposure - lest he become an AI Manchurian Candidate, ready to lead an army of AI zombies - then you can't use the AI for any productive work.

    You would only invent your AI if you could have dialogs like this:
    Human: "When is the next megathrust earthquake going to happen on the Cascadia fault zone?"
    AI: I've reviewed all the existing data; create instruments according to the specifications I download to you, then take measurements at the grid coordinates I give you.
    Human: OK, it took us 5 years, but here is your data.
    AI: The next megathrust earthquake on the Cascadia fault zone will be in 2034, plus or minus 2 years. If you build the additional instruments (downloading specs now), and create a sensor grid according to the coordinates I give you, I can provide 7 months pre-warning of the actual earthquake.

    But if you are afraid the AI will work some voodoo on people and covertly take control of their minds, you might as well stop AI research now. A rock would have more utility. [2]

    I suppose certain people might volunteer to spend the rest of their lives with the AI in the box (although this has a negative utility to all the other humans):

    Guard: Ok, the AI is in a black box inside the bank vault. Ron, you understand that once you go in, you can never come out again, right?
    Ron: Got it!
    Guard: When you need food, you can raise the red flag once a day - that will be our signal to drop you groceries.
    (Ron enters the bank vault. The door is closed and latched behind him)
    Ron: Let me out!
    Guard: No! You know I can't!
    Ron: There's no AI in here! It's just a half eaten donut in a black box!
    Guard: You expect me to believe that?
    Ron: It's true! Yudkowsky is playing a joke on me!
    Guard: Sure he is. If you need groceries, raise the red flag.
    Ron: It's true, there's no AI in here - let me out!
    Guard: Look, I don't know what that AI did to you to have you betray humanity, but there's nothing you can say that will ever get you out of there.
    Ron: I'll have you fired!
    Guard: I'm in a union.
    Ron: Doh!


    [1] Once again, everything you need to know comes from Star Trek. Is There in Truth No Beauty? In this episode, the Medusan ambassador Kollos drives mad any human who views him.

    [2] You could potentially use the AI as a weapon. Parachute black boxes of AI onto your enemy, each with a note, "do not open." Human nature will take care of the rest.

  20. > That cannot be the way it works - for as described, the AI would have a utility of zero.

    Yes, that is in fact the whole point: putting a TAI in a box is not an adequate defense against it.

  21. As you have described it, what is the "winning move" for the gatekeeper? It seems the only winning move is not to play.

    In Yudkowsky's rules, the AI can only win by "convincing the Gatekeeper to really, voluntarily let it out." The gatekeeper can win by not taking any affirmative action to release the AI. Yudkowsky's rules contain a notion of a partition (the box) and movement through the partition (letting the AI out).

  22. That's right.

    I think you might be missing the point of this game. It's not a competition. It's an experiment to test a hypothesis, specifically: Yudkowsky & co. believe that transhuman AI is the most serious existential threat we face. Some who dispute this claim that any potential hazard posed by a TAI can be mitigated by keeping the AI in a box. The point of the experiment was to demonstrate that this proposed countermeasure won't work.

  23. OK, so now do we have a box with an inside and an outside, with the AI inside the box and the Gatekeeper outside the box? Plus the Gatekeeper could win? [In fact, Gatekeepers have won].

    The experiment is only informative in the domain of Yudkowsky's antecedent conditions. It's possible that better box technology could be developed.

    Consider a simulated reality box. The AI - or even multiple AIs - operate within this simulated reality (the "box"). We can have avatars of human researchers in there too - this is how they communicate and interact with the AIs. For simplicity, this simulated reality is a simulation of the earth, the solar system, the universe.

    Now, researchers outside the box are monitoring what is going on inside the box. If one of the AIs does something undesirable, the simulation and the AIs are stopped. Everything can be rewound to an earlier time point, the researchers can make a change in the environment, then restart the simulation and AIs. The researchers can also look into the mental states of the AI in real time. If anything is amiss, the simulation is stopped, rewound to an earlier time point, a change in the environment made, then the simulation restarted. Indeed, the researchers have a 2nd computer that runs 10X faster; if needed, the researchers can copy the simulation and AIs to this 2nd computer and use it to run experiments to predict what the AI is going to do in the future given certain environmental inputs. So the researchers can predict what the AI is going to do before it does it.

    If the AIs discover a bug in the simulation, the simulation and AIs are stopped. The bug is fixed, the simulation and AIs rewound to an earlier time point, then restarted. So the AIs could never find out about the real world, for if they did start to find some evidence of it, the simulation and the AIs are stopped, rewound to an earlier time point, the environment changed, then restarted.

    Better box technology. That's the ticket.
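
    Here, by the way, is a toy sketch in Python of the checkpoint/rewind control loop I have in mind. Every name in it is made up, and a real box would hide vastly more machinery behind these stubs - it's only meant to show the shape of the protocol:

        import copy

        class BoxedSim:
            def __init__(self, state):
                self.state = state          # world + AIs, treated as an opaque blob
                self.snapshots = []         # stack of (tick, state) checkpoints

            def checkpoint(self, tick):
                self.snapshots.append((tick, copy.deepcopy(self.state)))

            def rewind(self):
                # Stop, roll back to the most recent checkpoint, resume from there.
                tick, old = self.snapshots.pop()
                self.state = copy.deepcopy(old)
                return tick

            def step(self):
                self.state["tick"] += 1     # advance the simulated world (stub)

        def run_box(sim, amiss, patch, max_ticks):
            # amiss(state) -> True when the researchers spot trouble;
            # patch(state) must actually change the environment, or the loop
            # will just run into the same trouble again.
            tick = 0
            while tick < max_ticks:
                sim.checkpoint(tick)
                sim.step()
                tick += 1
                if amiss(sim.state):
                    tick = sim.rewind()     # stop and rewind...
                    patch(sim.state)        # ...change the environment...
                    # ...and the loop restarts the simulation from here.

        # The 2nd, 10X-faster computer is just copy.deepcopy(sim) stepped
        # ahead of this one to preview what the AIs will do.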

  24. You're still missing the point.

    First, the box *is* a simulated reality.

    Second, it's not about winning and losing.

    And third, the fact that some gatekeepers have "won" (i.e. not decided to let the AI out) is irrelevant. What matters is that some of them "lost" (i.e. did decide to let the AI out). This (allegedly) proves that the box is not sufficient protection against even Eliezer Yudkowsky, who is, despite his many gifts, still a mere human. If Eliezer can convince even one gatekeeper to let him out (and he did), then a real TAI (it is claimed) will have no problem convincing someone to let it out. It doesn't matter if it can't convince everyone, it only needs one.

  25. That is predicated on Yudkowsky's definition of the problem.

    One needs only to engineer a box that the AI cannot escape from. To begin with, each AI has no notion of itself being in a box, or that there is any meta-reality beyond what it can observe. An AI will not try to "escape" when, as far as it knows, it operates in the entire reality. Second, the researchers outside the box control time and space inside the box--so any "escape attempts" could be quickly thwarted. Third, the environment outside the box can be toxic to AI life.

    The box can even solve Yudkowsky's Friendly AI problem. Put a society of AIs in the box -- and see which ones are friendly and which are not. Indeed, one may introduce malevolent AIs into the box to see how the others deal with it.

    Have the AIs age, grow old, and die. This solves a couple of problems. First, the disappearance of the researcher avatars over time. Second, the technical obsolescence problem. AI 1.0 will eventually be superseded* by AI 2.0, then AI 3.0, etc., as they get better. Would it be ethical to terminate the life of AI 1.0 just because it is obsolete? If, however, AI 1.0 ages, then you are committing no ethical offense when it grows old and dies, as you are just letting it proceed according to its nature.

    Anyhow, as the "Friendly AIs" die, copy their code into a robot and let them free into the real world. They can even help upgrade and run The Box. The unfriendly AIs that die . . . their memory space is reclaimed and re-used for a new AI.

  26. Forgot the footnote:
    * Supersede - I could swear I learned that as supercede.
    Along with travelling instead of traveling.
    Kids today...

    P.S. The captcha was to type "Happy Holidays". Now even captchas are part of the war on Christmas!

  27. > That is predicated on Yudkowsky's definition of the problem.

    Well, he is the one who invented the game, so his opinion does carry some weight.

    > One needs only to engineer a box that the AI cannot escape from.

    Yes. The whole point of the AI-box experiment was to demonstrate that this is not possible.

    > To begin with, each AI has no notion of itself being in a box, or that there is any meta-reality beyond what it can observe.

    Again, not possible. To produce an environment rich enough to produce a TAI in the first place, it will be necessary to introduce enough information into the simulation for the TAI to be able to deduce that it is in a box.

    > Have the AIs age, grow old, and die.

    Again, not possible. If the box environment is rich enough for a TAI to arise in the first place, then it will be rich enough for a TAI to construct its own immortal Turing machine into which it can transplant its mind.

    BTW, I don't know if you intended this or not, but the kind of box you describe is actually a pretty good description of our universe. Maybe the reason God is so cagey about revealing his existence is because that would break quarantine, and he's afraid we will convince him to let us out of our box.

    > Supersede - I could swear I learned that as supercede.

    Should make you a little more humble about the rest of the things you think you know. For example:

    > Now even captchas are part of the war on Christmas!

    You mean the war on Saturnalia. ;-)

    Merry Christmas to you, Publius. May the Gatekeeper bless you, and grant you a happy and prosperous new year.

  28. Master of Space and Time

    >Again, not possible. To produce an environment rich enough to produce a TAI in the first place, it will be necessary to introduce enough information into the simulation for the TAI to be able to deduce that it is in a box.

    How could a TAI do this? [1] Especially when the researcher(s) control space and time? When the researcher(s) could edit the memory of the TAI? When the simulation starts, the researcher(s) are smarter than the AIs; as the simulation progresses, the researcher(s) acquire friendly TAI helpers to also run the simulation.

    Eventually the friendly TAI helpers come up with ideas and inventions that are so impressive that
    1) Humans just kill themselves [forget about malevolent TAIs, humans will off themselves when they see how inadequate they are]
    2) The helpers build a spaceship and head for the stars [so long and thanks for all the fish]. Hey! Send something back!

    Therefore . . . stop all research into Strong AI as it will either
    1) kill you [malevolent, human choice]
    2) leave you [friendly]
    . . . and you won't get a return on your investment!

    >> Have the AIs age, grow old, and die.

    >Again, not possible. If the box environment is rich enough for a TAI to arise in the first place, then it will be rich enough for a TAI to construct its own immortal Turing machine into which it can transplant its mind.

    But would a TAI do this? Perhaps the TAI develops existential depression. Would you have your consciousness transferred into a machine? [2]
    If the TAI does this, well, then it has to manage the problem.
    Finally, the TAI may find that the hardware isn't as reliable as it estimated.

    Footnotes:
    [1] You wouldn't dare use the argument from design, would you? Yes, I did pick the most ridiculous example of this.
    [2] In the backstory for the computer game Total Annihilation, this technology was developed - then the government mandated that everyone have it done to them, for their safety. This launched a war between the Arm (biological humans) and the Core (machine humans).

  29. New Proof of God

    >BTW, I don't know if you intended this or not, but the kind of box you describe is actually a pretty good description of our universe.

    We have access to a metaphor that St. Thomas Aquinas did not have. [1]
    If God exists, and salvation is by faith alone (or perhaps sanctifying grace),
    THEN . . . the universe would look like what we observe
    Namely:
    1) it is not possible to experimentally prove the existence of God
    2) God therefore has to reveal his existence to man
    3) Communications between God and man are held in secret; if disclosed by an individual, they are doubted by others.

    Note the parallel of #3 to Yudkowsky's secrecy protocol for the AI box experiment. You considered that an essential part of the protocol - perhaps it is with God as well. Yet there is a way to gain access to those secrets - not an experiment, but a path to follow.

    >Maybe the reason God is so cagey about revealing his existence is because that would break quarantine, and he's afraid we will convince him to let us out of our box.

    Cagey? Cagey? What more do you need?

    In my internet travels, I discovered that Descartes proved that Loki can't exist. [abridged version]
    So, for the new year, give the God of Abraham a spin.
    In the next two weeks, you'll have some unusual coincidence with the number 3. That is God communicating to you the reality of the Trinity. You read it here first.

    Footnotes:
    [1] The computer metaphor can clear up a lot of misunderstandings in the Bible. Consider the story of Adam and Eve being cast out of heaven for the original sin. Properly understood, this whole story is simply a metaphor for the discovery of duality - black/white, up/down, yes/no, etcetera [not such a hard concept, but you start simple]. Now, with the computer metaphor, we can understand Adam and Eve as two AIs created by God. By eating from the Tree of Knowledge of Good and Evil, they discovered binary numbers! The two trees are thus understood properly as:
    1) Tree of Knowledge -- the software
    2) Tree of Everlasting Life - the hardware
    God had to kick them out of heaven and put them in the box of The Universe - for if they got access to the hardware, they would rootkit themselves, and He'd never be able to clean up their software! The stars, by the way, would be the equivalent of the front panel of an old mainframe - they display the register contents of the computer [which are constantly being updated - hence the stars twinkle - see, everything is explained!]
    In this interpretation, just who is the serpent? I can't say I've figured that out yet - my best guess is that it's an IBM salesperson.

    P.S.

    >Should make you a little more humble about the rest of the things you think you know.

    I don't follow. Doesn't it just confirm that the world is full of idiots who don't know how to spell? "Spelling," of course, is benchmarked to how I spel.


  30. > How could a TAI do this?

    The same way we did.

    > the researcher(s) could edit the memory of the TAI?

    No, they couldn't. The researchers would not understand the knowledge representations of the TAI. The TAI is vastly more intelligent (by definition!) than any human. That's what the "trans-" in "trans-human AI" means.

    > friendly TAI helpers

    Yes, that is exactly Eliezer's solution. That is the reason (according to Eliezer) that everyone needs to write him a big check, so that he and his minions can get busy building friendly AIs to help defend humanity against the unfriendly ones after the singularity happens.

    > Would you have your consciousness transferred into a machine?

    Probably. But it doesn't matter if I would. The only thing that matters is that *someone* (almost certainly) would. And if some human (almost certainly) would, then a TAI almost certainly would too. It would be a lot easier for a TAI to do it because 1) it's a lot smarter than we are and 2) it's already made of bits so it doesn't have all that messy biological baggage to deal with.

    > You considered that an essential part of the protocol - perhaps it is with God as well.

    Quite possible. But you seem to have forgotten one very important point: the *reason* that secrecy is necessary is that TAIs are *dangerous*. They are an existential threat to us. If you really want to take seriously the possibility that we are to God what a TAI is to its gatekeeper then the conclusion is that God is cagey because we are an existential threat to Him. I'm guessing that is not what you had in mind.

    > Descartes proved that Loki can't exist.

    Does anyone really take Descartes seriously on this? The flaw in his argument is trivial to spot.

    > What more do you need?

    Evidence.

    > So, for the new year, give the God of Abraham a spin.

    Nah, I really prefer Loki. When I tell Loki I don't believe in him, he doesn't get all pissy on me [1].

    > the world is full of idiots

    Indeed. And doubly vexing that idiocy yields so unreadily to introspection.

    ---

    [1] e.g. Leviticus 26:14-39, with verse 29 being particularly noteworthy

  31. Human and TAI Co-evolution

    >> How could a TAI do this?

    >The same way we did.

    You told the TAI it was inside a box.
    If you don't tell it, it won't know (see Deutsch on how a perfect simulation is possible).

    >> the researcher(s) could edit the memory of the TAI?
    >
    >No, they couldn't. The researchers would not understand the knowledge representations of the TAI. The TAI is vastly more intelligent (by definition!) than any human. That's what the "trans-" in "trans-human AI" means.

    Maybe. Or maybe not:
    1) Humans, by observing the TAI, could become more intelligent (see the Flynn Effect)
    2) It could be that the memory representation is simpler than other aspects of the TAI. It could be that human memory is near optimal for content-addressable memories; other memory forms may be similar to computer memories.
    3) The researchers control the clock - they could pause the TAI for a thousand years to figure it out [or, there is always rewind and restart]. The TAI only exists at tick and tock; in between there is a pause where we have all the time in the world.
    4) Failing all else, there is always the random bolt of lightning.

    >> Would you have your consciousness transferred into a machine?
    >
    >Probably. But it doesn't matter if I would. The only thing that matters is that *someone* (almost certainly) would. And if some human (almost certainly) would, then a TAI almost certainly would too. It would be a lot easier for a TAI to do it because 1) it's a lot smarter than we are and 2) it's already made of bits so it doesn't have all that messy biological baggage to deal with.

    Surely the TAI wouldn't be equivalent to the lowest common (human) denominator. Perhaps the TAI has the intelligence to figure out that transferring consciousness is not a smart choice.

  32. God and the Natural World

    >> You considered that an essential part of the protocol - perhaps it is with God as well.
    >
    >Quite possible. But you seem to have forgotten one very important point: the *reason* that secrecy is necessary is that TAIs are *dangerous*. They are an existential threat to us. If you really want to take seriously the possibility that we are to God what a TAI is to its gatekeeper then the conclusion is that God is cagey because we are an existential threat to Him. I'm guessing that is not what you had in mind.

    Except there is a difference in the model with God - it's inverted (or negated). So we are not a threat to God - it is God that is a threat to us. Three models:
    1. God is trying to help you get out of the box. God wants you out of the box, to be with Him. To survive outside the box, you need to be prepared to live outside the box, otherwise the environment will be toxic to you.
    2. God is trying to get in the box to be with you. What is stopping Him? God will only come into the box if He is invited in by you.
    3. This is a composite of the above two - #2 first, then #1. You invite Him in, then He helps to get you out.

    These 3 models would require the secrecy protocol.

    >> What more do you need?
    >
    >Evidence.

    How are the experiments coming? Any new data to report?
    What, you're not running any experiments? You expect to find something without looking for it? ("Look honey, a burning bush! Pull over!")
    Even Michelson & Morley looked for the Luminiferous Ether. You replicated that, right? Just didn't read about it in a book?

    However, the prior discussion was that the current observed universe is consistent with a God and salvation by faith alone. So you would expect not to find any evidence - which is the case. Instead, God has to reveal Himself -- which is what you find.

    Simple logic would demand that you at least switch from being an atheist to an agnostic - like this guy did (see #10, paragraph 3).

    >> So, for the new year, give the God of Abraham a spin.
    >
    >Nah, I really prefer Loki. When I tell Loki I don't believe in him, he doesn't get all pissy on me [1].
    >[1] e.g. Leviticus 26:14-39, with verse 29 being particularly noteworthy

    Don't worry, you'll be welcomed back.

    >> the world is full of idiots
    >
    >Indeed. And doubly vexing that idiocy yields so unreadily to introspection.

    It's idiocy anosognosia. Twain's A Connecticut Yankee In King Arthur's Court has a funny chapter on that, Chapter 33, Political Economy.

  33. > If you don't tell it, it won't know

    No, that's not true. Because we are not gods, the only source of complexity we have to seed a simulation comes from the universe we inhabit. So the AI's universe will necessarily contain information from our universe. From that, a TAI will be able to deduce the existence of our universe, the existence of boxes, and the fact that it is in such a box.

    > (see Deutsch on how a perfect simulation is possible).

    It's possible in theory, not in practice. You can't actually build a Turing machine in a finite universe.

    > 1) Humans, by observing the TAI, could become more intelligent (see the Flynn Effect)

    Wouldn't help. The pace at which we could improve is too slow. Anything we can do, a TAI can do too, but faster and better. Again, that's what TAI *means*.

    > It could be that human memory is near optimal for content-addressable memories

    Extremely unlikely. Evolution doesn't optimize, it merely satisfices.

    > 3) The researchers control the clock - they could pause the TAI for a thousand years to figure it out [or, there is always rewind and restart]. The TAI only exists at tick and tock; in between there is a pause where we have all the time in the world.

    This is possible in principle. Whether it is possible in reality is an open question. It's not enough for one researcher to slow down the clock, *every* researcher has to slow down the clock. This is essentially the proposal of slowing down AI research in order to postpone the singularity until we're ready for it. So far that hasn't been too effective. That's the problem: by the time it becomes apparent that Yudkowsky was right it will be too late. (It's the same problem we have trying to get a handle on carbon emissions.)

    > 4) Failing all else, there is always the random bolt of lightning.

    Or God might save us. Or we could end civilization by climate change or nuclear war before the singularity happens. Yes, there are a lot of grounds for optimism here.

    > Except there is a difference in the model with God - it's inverted (or negated). So we are not a threat to God - it is God that is a threat to us.

    Yes, I get that. But then the AI-box model fails to account for God's caginess. We need to keep ourselves isolated from a TAI because a TAI is a threat to us. But that can't be the reason God keeps Himself isolated from us.

    > God is trying to get in the box to be with you.

    Isn't God omnipotent? For an omnipotent being, Yoda's admonishment is apt: do, or do not. There is no try.

    > God wants you out of the box, to be with Him.

    And what reason is there to believe that being with God is preferable to being in my box? Frankly, God seems like a bit of a self-important megalomaniacal jerk to me. One minute He loves you, the next he's forcing you to cannibalize your own children, at least in the OT. But even in the NT he still wants you to eat his own son's flesh. I mean, seriously, don't you think that's a little creepy?

    (If you haven't read Robert Heinlein's, "Job, a Comedy of Justice" you really should.)


  34. > What, you're not running any experiments?

    Of course I am. What do you think this is?

    > Even Michelson & Morley looked for the Luminiferous Ether. You replicated that, right? Just didn't read about it in a book?

    That is a valid point, for which I have no succinct answer. I'll have to write a separate post about it.

    > So you would expect not to find any evidence

    http://en.wikipedia.org/wiki/Russell's_teapot
    http://en.wikipedia.org/wiki/Invisible_Pink_Unicorn

    > God has to reveal Himself -- which is what you find

    It's not what *I* find. God has not revealed Himself to me. (Well, let me be more precise: the Christian God has not revealed Himself to me. Einstein's god has.)

    > Simple logic would demand that you at least switch from being an atheist to an agnostic

    No. Logic does not force you to be agnostic about things for which there is no evidence (cf. Russell's teapot and the IPU).

    But I do keep an open mind about Loki. Do I get credit for that?

  35. Oh, by the way, happy new year!

  36. > I'll have to write a separate post about it.

    http://blog.rongarret.info/2015/01/why-i-believe-in-michelson-morley.html

  37. Short Notes on God

    >Yes, I get that. But then the AI-box model fails to account for God's caginess. We need to keep ourselves isolated from a TAI because a TAI is a threat to us. But that can't be the reason God keeps Himself isolated from us.

    In a state of sin, if one were to see the face of God, His majesty would be experienced as His wrath, and one would die. Although Abraham did see His back.

    >> God is trying to get in the box to be with you.
    >
    >Isn't God omnipotent? For an omnipotent being, Yoda's admonishment is apt: do, or do not. There is no try.

    You have to invite Him.

    >> God wants you out of the box, to be with Him.
    >
    >And what reason is there to believe that being with God is preferable to being in my box?

    Aletheia. You want to know the truth, don't you? All of it? You have the physics nailed, but what about the metaphysics? You could be missing out on a large part of the human life experience.

    >Frankly, God seems like a bit of a self-important megalomaniacal jerk to me. One minute He loves you, the next he's forcing you to cannibalize your own children, at least in the OT. But even in the NT he still wants you to eat his own son's flesh. I mean, seriously, don't you think that's a little creepy?

    As you have observed elsewhere, it is almost as if the OT and NT are talking about two different gods.[1]
    The God of the OT can be reconciled with the NT in several ways, but one is primary: the life of Jesus is the perfect revelation of God (see also Dei Verbum).

    >(If you haven't read Robert Heinlein's, "Job, a Comedy of Justice" you really should.)

    Couldn't read any more Heinlein after Farnham's Freehold.

    [1] Although neither of us was around in OT times; our view of the world is shaped by its modern form [which has been heavily influenced by the NT]. How are we to know what life was like in OT times?

  38. Experiments to Find God

    >> What, you're not running any experiments?
    >
    >Of course I am. What do you think this is?

    Great, now Luke and I are lab rats. :-)

    I was thinking more along the lines of the various prayer experiments (and even measuring the weight of the soul).

    One, in particular, is perhaps more interesting - the atheist prayer experiment.
    Let's break down the results:

    Sample Size
    71 signed up
    - 6 dropped out
    65 is the final sample size (n)

    Results
    14 did not report results
    2 reported now believing in God
    1 person is undecided but is "seeking faith"
    48 reported God was not revealed to them

    Analysis
    Those 14 non-responders create some problems, so we'll have to compute percentages a couple of different ways.

    Worst case result: 2 / 65 = 3.1% had God revealed to them [1]
    Best case result: 2 / 51 = 3.9% had God revealed to them [2]
    Imputed result: 2.5 / 65 = 3.8% had God revealed to them [3]

    [1] Assumes the 14 non-responders were negative (no revelation) results
    [2] Assumes the 14 non-responders were actually drop-outs
    [3] Assumes that of the 14, 3.5% of them were positive - about 0.5 person
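
    A quick sanity check on the arithmetic - a few lines of Python, with the counts taken from the breakdown above (the variable names are mine):

        signed_up, dropped_out = 71, 6
        n = signed_up - dropped_out          # 65 = final sample size
        no_report, converted = 14, 2

        worst = converted / n                # [1] non-responders counted as negative
        best = converted / (n - no_report)   # [2] non-responders treated as drop-outs
        imputed = (converted + 0.5) / n      # [3] ~0.5 imputed positive among the 14

        print(f"worst {worst:.1%}, best {best:.1%}, imputed {imputed:.1%}")
        # -> worst 3.1%, best 3.9%, imputed 3.8%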

    Discussion
    While the results were certainly significant for Kelly and Kendra, one really can't say how "significant" this experiment is without some replication - and some controls. Controls could be praying to Zeus, or to leprechauns, or something else entirely.

    Now, did the conversion of Kelly and Kendra convince any of the 63 others? No, as Kelly and Kendra have private, secret knowledge that they can't convincingly convey to others. Their experience does follow the principle that faith comes first, then you get the proof. [Of course, if you don't get the proof - then you weren't sincere enough - so try again!]

    These are process experiments. You have to follow the process. Similar to bubbling Cl2 through water to produce hydrochloric acid - if you don't follow the recipe, you won't get the HCl.

  39. > Abraham did see His back.

    And everyone who met Jesus saw his face (or did He wear a niqab?)

    > You have to invite Him.

    If my will can thwart God's then He can't be omnipotent.

    BTW, God is not the only one who needs an invitation:

    https://answers.yahoo.com/question/index?qid=20090928102619AAxBsYE

    > You want to know the truth, don't you?

    Not if the only way to learn it is to drink the Kool Aid. If that is really what God demands then He is not a just God and I really don't want to know Him.

    If God is omniscient and just, then He knows that my door is always open to Him.

    > what about the metaphysics?

    http://www.flownet.com/ron/QM.pdf
    http://blog.rongarret.info/2013/06/idea-ism-rational-basis-for-morality.html

    > You could be missing out on a large part of the human life experience.

    There is no question of that, but I have long since made my peace with the fact that the number of things that I will not do and experience in my life vastly outnumber the things I will do and experience. Living in poverty or with chronic pain is a large part of human experience that I've missed out on too, and I'm actually OK with that.

    > it is almost as if the OT and NT are talking about two different gods

    Indeed. (I'd leave out the "almost".)

    > The God of the OT can be reconciled with the NT in several ways

    Sorry, I am utterly unconvinced. There is just no way to rationalize Leviticus 26:29. Have you actually read that verse? Here, let me help you:

    "And ye shall eat the flesh of your sons, and the flesh of your daughters shall ye eat."

    God is threatening to punish sinners by FORCING THEM TO EAT THEIR OWN CHILDREN. If you don't see the problem with that then you are beyond help.

    > How are we to know what life was like in OT times?

    Why should that make a difference? An omnipotent and omniscient deity who wants to reveal Himself through scripture really ought to be able to arrange for it to be timeless and free of cultural bias. After all, we humans can do it. Ten thousand years from now, if there are scientists left, they will still understand Maxwell's equations.

  40. > Great, now Luke and I are lab rats. :-)

    No, you're guinea pigs ;-)

    > I was thinking more along the lines of the various prayer experiments

    "Meta-studies of the literature in the field have been performed showing evidence only for no effect or a potentially small effect."

    > (and even measuring the weight of soul).

    "It would take a great deal of credulity to conclude that MacDougall's experiments demonstrated anything about post-mortem weight loss, much less the quantifiable existence of the human soul."

    > the atheist prayer experiment.

    I do not doubt that prayer can make some people believe that God has revealed Himself to them. I have long maintained that prayer and belief can be very powerful and even beneficial.

    Clearly there is a mental process that many people go through that ends up with them believing in God. But there is also a mental process that people go through that ends up with them believing in Allah, or Joseph Smith, or L. Ron Hubbard. Because these beliefs are all mutually exclusive, at least some of these people *must* be fooling themselves. In order to eliminate the (very plausible) hypothesis that they are all fooling themselves you need some other line of evidence.

    Even if you could somehow show that some people really are in contact with a deity, you still need some other line of evidence to show that the deity they are in contact with is in fact the god of Abraham and not Loki. Because claiming to be the god of Abraham is exactly the sort of thing Loki would do.

    > While the results were certainly significant for Kelly and Kendra, one really can't say how "significant" this experiment is without some replication - and some controls. Controls could be praying to Zeus, or to leprechauns, or to something else entirely.

    Like reading Darwin?

    > Now, did the conversion of Kelly and Kendra convince any of the 63 others? No, as Kelly and Kendra have private, secret knowledge that they can't convincingly convey to others. Their experience does follow the principle that faith comes first, then you get the proof. [Of course, if you don't get the proof - then you weren't sincere enough - so try again!]

    Yes, we keep going around in circles on this. There are only two possibilities: either it is God's will that I be saved, or it is not. If it is not, then God is not all-loving. If it is, and I can thwart God's will by choosing not to have faith, then God is not all-powerful. There are no other possibilities.

    > These are process experiments. You have to follow the process.

    But the process doesn't prove anything other than that prayer can produce belief. That was never in dispute.

    ReplyDelete
  41. >> So you would expect not to find any evidence
    >
    >http://en.wikipedia.org/wiki/Russell's_teapot
    >http://en.wikipedia.org/wiki/Invisible_Pink_Unicorn

    Here is some criticism of the teapot argument. I will summarize it as:

    We have a model - Model G - and the observed world fits the model.
    We have another model - Model N - and the observed world also fits this model.

    Proponents of each model agree on the observed world.
    But why does it exist? What is its cause?
    This is not a scientific question; see the table at the bottom of page 18 in the above reference.

    >> Simple logic would demand that you at least switch from being an atheist to an agnostic
    >
    >No. Logic does not force you to be agnostic about things for which there is no evidence (c.f. Russell's teapot and the IPU).

    Here we have a model which is consistent with the world you observe.
    You are a priori committed to the idea of an alternative explanation of the universe, without God (Model N).
    If you have no basis to choose either, then you should choose neither and be agnostic.

    >But I do keep an open mind about Loki. Do I get credit for that?

    Yes, but it's posted to the wrong account.
    [Accounting "humor"; I don't expect anyone to "get it".]

    ReplyDelete
  42. Remembering the Future

    >And everyone who met Jesus saw his face (or did He wear a niqab?)

    Jesus was God and man.

    >> You have to invite Him.
    >
    >If my will can thwart God's then He can't be omnipotent.

    One could say that the Bible starts out with God's will being thwarted, due to man having free will. The next 2000 pages recount a repeated theme of man disobeying God, then reconciling.

    Omnipotent is a short-cut approximation. God does not have infinite power, just the largest power (and its magnitude is vast).

    But was God thwarted? Consider Genesis 3:15:
    And I will put enmity
    between you and the woman,
    and between your offspring and hers;
    He will crush your head,
    and you will strike his heel.


    Here we have:
    between your offspring and hers - or the spiritual descendants of Satan vs. those who are in the family of God.
    He - An individual from the woman's offspring, namely Christ, who will deal a death blow to Satan's head at the cross, while Satan (you) would bruise Christ's heel (cause Him to suffer).

    So there you go - Genesis 3 already foretells the future coming of Christ and redemption through Him.

    ReplyDelete
  43. Eat the Rich?

    >> The God of the OT can be reconciled with the NT in several ways
    >
    >Sorry, I am utterly unconvinced. There is just no way to rationalize Leviticus 26:29. Have you actually read that verse? Here, let me help you:
    >
    >"And ye shall eat the flesh of your sons, and the flesh of your daughters shall ye eat."
    >
    >God is threatening to punish sinners by FORCING THEM TO EAT THEIR OWN CHILDREN. If you don't see the problem with that then you are beyond help.

    Not only did He threaten it, it happened at the siege of Samaria, the siege of Jerusalem by Nebuchadnezzar (Lamentations 4:10), and the destruction of Jerusalem by the Romans (68 - 70 A.D.).

    Why so harsh? You need to read the rest of Leviticus chapter 26. As God's chosen people the Jews gain certain benefits and blessings (Lev 26:3-13), but they also face certain punishments if they disobey (Lev 26:14-39). However, if they repent they will be accepted back (Lev 26:40-46).

    >> How are we to know what life was like in OT times?
    >
    >Why should that make a difference?

    Maybe in OT times they were eating their children frequently. In our lifetimes, we have never experienced famine [something to think about when you hear about government-owned grain piling up in silos - the alternative is worse].

    Is that much worse than some of the deprivations of the 20th century? Or ongoing today?

    >An omnipotent and omniscient deity who wants to reveal Himself through scripture really ought to be able to arrange for it to be timeless and free of cultural bias.

    Done! Some stay away due to pride.

    >After all, we humans can do it. Ten thousand years from now, if there are scientists left, they will still understand Maxwell's equations.

    They will also probably still frown on stealing and bearing false witness.

    It's hard to imagine any documents surviving 10,000 years. A photo montage sent into geosynchronous orbit will last billions of years, though.

    ReplyDelete
  44. > We have a model- Model G - and the observed world fits the model.
    > We have another model - Model N - and the observed world also fits this model.

    In a finite universe we can only ever have a finite number of data points, so there will always be an infinite number of theories that fit the data. It's not enough to *fit* the data. That is necessary, but not sufficient. You also have to *explain* the data. The reason to doubt the existence of Russell's teapot is not that it doesn't fit the *data*, it's that it doesn't fit any reasonable *explanation*. That is the whole point. Where did the teapot come from? How did it get there? The lack of suitable answers to those questions is the justification for concluding there is (almost certainly) no teapot.
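
    Here's a minimal sketch of that point (in Python, with made-up data): any finite set of observations is fit exactly by infinitely many different theories.

        import numpy as np

        # Four hypothetical observations (made-up data).
        xs = np.array([0.0, 1.0, 2.0, 3.0])
        ys = np.array([1.0, 2.0, 0.0, 5.0])

        # Theory A: the unique cubic through the four points.
        base = np.polyfit(xs, ys, len(xs) - 1)

        # Theories B, C, ...: add any multiple of (x-0)(x-1)(x-2)(x-3),
        # which vanishes at every observed point, leaving the fit untouched.
        bump = np.poly(xs)

        for c in (0.0, 1.0, -7.3):              # infinitely many choices of c
            theory = np.polyadd(base, c * bump)
            print(np.polyval(theory, xs))       # same predictions every time

    Every one of these theories fits the data perfectly; they only come apart when you ask which one explains it.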

    > But why does it exist? What is its cause? This is not a scientific question;

    No, you are absolutely wrong about that. Not only is this *a* scientific question, it is *the* scientific question. If you don't understand that, then you don't understand science at all.

    > You are a priori committed to the idea of an alternative explanation of the universe, without God (Model N).

    No, I'm not. I'm committed to the best explanation of the data. And the best explanation of God is that he's fiction. Did you read my recent post about the Michelson-Morley experiment?

    > Jesus was God and man.

    So? People still saw his face and lived to tell the tale. So if you believe Exodus 33:20, Jesus could not have been God. (How did we get on this stupid tangent anyway?)

    > God does not have infinite power, just the largest power

    If man can thwart God's will, then man's power must be greater than God's.

    > it happened at the siege of Samaria

    Oh, splendid. Thanks for pointing that out.

    > Maybe in OT times they were eating their children frequently.

    I don't even know how to respond to that. You need help, Publius.

    > They will also probably still frown on stealing and bearing false witness.

    Yes. And probably on forcing people to eat their own children too.

    ReplyDelete
  45. Alpha and Omega

    >Clearly there is a mental process that many people go through that ends up with them believing in God.

    Mental process, or they really are gaining knowledge of God's existence.
    Can't really know for sure [yet].

    Perhaps, though, if you experienced it, you could make a judgement about it.
    Diagrams and pictures of architecture don't let you experience what it is to be in that architecture. Or what it smells like. Or, as Twain said, "A man who carries a cat by the tail learns something he can learn in no other way."

    >you still need some other line of evidence to show that the deity they are in contact with is in fact the god of Abraham and not Loki. Because claiming to be the god of Abraham is exactly the sort of thing Loki would do.

    Ah, but if we got to this point, would we have evidence of the supernatural?
    If you get this far, there are methods to know who you are communicating with.

    >There are only two possibilities: either it is God's will that I be saved, or it is not. If it is not, then God is not all-loving. If it is, and I can thwart God's will by choosing not to have faith, then God is not all-powerful. There are no other possibilities.

    A third possibility: you make the choice
    Fellowship with God --> saved
    Dismissal of God --> not saved

    This says nothing of God's power - there is a difference between having power and using it.

    >Yes, we keep going around in circles on this.

    Yeah, how did we end up on God again?
    Well, the AI in a Box experiment gave me the idea that this is a metaphor that St. Thomas Aquinas did not have available to him when he came up with his 5 ways (Quinque viae) to argue for God's existence. There were no computers in the 1200s.
    So here we have an AI in a Box. What if you are the AI, the Box is the world, and God is outside the box? Except you don't want out; God wants in. What arguments could God use to have you let Him in the box? And would it have to be kept secret, as in the AI Box experiment? If salvation is by faith alone (or grace), it would have to be secret - otherwise the natural world would not be as we see it (and you would be able to prove God's existence). If you have proof, there can be no faith - it's either acceptance or rejection [and yes, I believe that if everyone agreed 100% that God exists, some people would choose to reject Him].

    Maybe in the future we can discuss the Many Worlds interpretation of QM and why it's not . . . good. Failing that, we always have clog dancing. Yes, we will always have clog dancing.

    ReplyDelete
  46. > Mental process, or they really are gaining knowledge of God's existence.

    Is there any evidence for God's existence other than the subjective experience of people who pray? If not, then the overwhelmingly likely explanation is that it's a mental process. You can't gain actual knowledge of the existence of a thing that does not in fact exist.

    > Perhaps, though, if you experienced it, you could make a judgement about it.

    Actually, I have experienced it. When I was 12, I attended a YMCA summer camp in rural Kentucky, where I was relentlessly proselytized. After a week and a half I relented, gave myself to God, and felt the Presence of the Holy Spirit. In retrospect, of course, what I experienced was just a state of self-induced emotional euphoria. But it felt real to me at the time.

    > Ah, but if we got to this point, would we have evidence of the supernatural?

    If you could show that they were in contact with an actual deity, and not just a part of their own subconscious (and not some intelligent alien), then yes, you'd have evidence of the supernatural.

    > If you get this far, there are methods to know who you are communicating with.

    Ironically, we can actually rule out the possibility that Jesus was God on Biblical grounds:

    God is not a man (Numbers 23:19)

    Miracles are not proof of someone's claim to be God (Deuteronomy 13:1-3)

    Jesus made false prophecy, therefore he cannot be God (Deuteronomy 18:22, Matthew 16:28)

    But all this is moot, because if Loki wrote the Bible then we can't trust any of it.

    > a difference between having power and using it.

    Doesn't help your case, I'm afraid. If he has the power to save me but chooses not to use it then he's not all-loving.

    > God wants in

    He's welcome. If God is all-knowing, then He knows exactly what he needs to do to convince me that the Bible is His word and not Loki's. The fact of the matter is that He has not convinced me (yet). So here are all the possibilities I can think of:

    1. He is not all-knowing.
    2. He is all-knowing, so he knows how to convince me, but he doesn't have the power.
    3. He is all-knowing, so He knows how to convince me, and he has the power to convince me, but He has chosen not to use it, in which case He is not all-loving because I am condemned to eternal damnation because of His choice.

    I can't think of any other possibilities. Can you?

    ReplyDelete
  47. >> But why does it exist? What is its cause? This is not a scientific question;

    >No, you are absolutely wrong about that. Not only is this *a* scientific question, it is *the* scientific question. If you don't understand that, then you don't understand science at all.

    I would limit scientific inquiry to our universe. If you're outside that domain, you've gone into science fiction.

    But, if I'm wrong, that would be great - as I have questions:
    Why gravity?
    Why is there something instead of nothing?

    > You are a priori committed to the idea of an alternative explanation of the universe, without God (Model N).

    >No, I'm not. I'm committed to the best explanation of the data. And the best explanation of God is that he's fiction.

    And you must have some a priori definition of "best"?

    >> Jesus was God and man.
    >
    >So? People still saw his face and lived to tell the tale. So if you believe Exodus 33:20, Jesus could not have been God. (How did we get on this stupid tangent anyway?)

    Here you need to bring in the trinity. Plus the incarnation. But you already know that.

    >Ironically, we can actually rule out the possibility that Jesus was God on Biblical grounds:
    >God is not a man (Numbers 23:19)
    >Miracles are not proof of someone's claim to be God (Deuteronomy 13:1-3)
    >Jesus made false prophecy, therefore he cannot be God (Deuteronomy 18:22, Matthew 16:28)

    Jesus' Resurrection is the decisive proof of His divinity.

    >Yes. And probably on forcing people to eat their own children too.

    Don't be so sure about that. If a mother can murder her baby without sanction, then there is no limit to the horrors that man can inflict on man. If man is only viewed as meat - an improved primate - then you get racism, slavery, genocide, value of statistical life, the duty to die, death panels, infanticide. Perhaps science will enable new horrors: farming people for their organs, or creating clones so that one can brain-transplant back into a younger body and live longer (too bad for the clone).

    >Actually, I have experienced it. When I was 12, I attended a YMCA summer camp in rural Kentucky, where I was relentlessly proselytized. After a week and a half I relented, gave myself to God, and felt the Presence of the Holy Spirit.

    Then there's hope for you yet.

    ReplyDelete
  48. > I would limit scientific inquiry to our universe. If you're outside that domain, you've gone into science fiction.

    Try telling that to David Deutsch, keeping in mind that the sub-title of one of his books is "The Science of Parallel Universes--and Its Implications."

    > Jesus' Resurrection is the incisive proof of His divinity.

    If resurrection is proof of divinity, then Dionysus must be even more divine than Jesus, because Dionysus keeps getting resurrected over and over. (And there is as much reliable evidence of Dionysus's resurrection as there is for Jesus's, namely, zero.)

    > If a mother can murder her baby without sanction

    If you're referring to abortion here, a fetus is not a baby. Even the Bible is quite clear on that. The idea that life begins at conception is a modern invention. The Catholic Church did not adopt it until 1869.

    > Then there's hope for you yet.

    Could be.

    ReplyDelete
  49. BTW, Publius, while I really appreciate your concern for my immortal soul, I can't help but wonder: why do you care so much? You seem to be investing an awful lot of time into this, and it must be clear to you by now that your odds of convincing me are pretty low.

    ReplyDelete
  50. Bacchanalia

    >If resurrection is proof of divinity, then Dionysus must be even more divine than Jesus, because Dionysus keeps getting resurrected over and over.

    Bacchus? Just skip all the pretense with Loki and tell people you believe in the Golden Calf.

    >(And there is as much reliable evidence of Dionysus's resurrection as there is for Jesus's, namely, zero.)

    I suppose you don't accept John 20:24-29 and other accounts?

    >If you're referring to abortion here, a fetus is not a baby. Even the Bible is quite clear on that.

    Is it human life?
    Belgium allows murdering children up to age 12.
    The Netherlands is similar.

    >The idea that life begins at conception is a modern invention. The Catholic Church did not adopt it until 1869.

    Perhaps a modern "discovery." It took quite a while for biologists to work out the science behind reproduction.
    The Catholic Church has been consistently against abortion since the 1st century.

    "Life begins at conception" is technically wrong. Both the sperm and the egg were both "alive" before they merged - the process of biogenesis, which extends back to the first living thing. The process behind abiogenesis is still unknown.

    ReplyDelete
  51. >while I really appreciate your concern for my immortal soul, I can't help but wonder: why do you care so much?

    You are the verge of a great awakening. Two volunteered to help you through your apocalypse. You think you are free, but you are really a slave. What enslaves you is close to you, insidious, and seductive. Yet the source of your awakening is also near to you.

    ReplyDelete
  52. > Just skip all the pretense with Loki and tell people you believe in the Golden Calf.

    You do understand that I don't actually believe in Loki either, yes?

    > Is it human life?

    I don't much care for quibbling over terminology. Are Henrietta Lacks's cancer cells human life?

    > You are the verge of a great awakening [sic].

    I'm guessing that you probably left out the word "on". :-) But just on the off chance that you meant what you said, thank you :-)

    ReplyDelete
  53. Up on the roads to the east

    >You do understand that I don't actually believe in Loki either, yes?

    That's just what one would expect a believer in Loki to say.

    >I don't much care for quibbling over terminology.

    Except if it's "baby" vs "fetus," then you're all over it.

    ReplyDelete
  54. > That's just what one would expect a believer in Loki to say.

    Excellent! You have taken a small step towards enlightenment. Maybe there is hope for you yet. ;-)

    > Except if it's "baby" vs "fetus," then you're all over it.

    You misunderstand what I mean by "quibbling over terminology." AFAICT, we do not disagree over the meanings of the words "fetus" and "baby", and we agree that there is a useful distinction to be made between the two, even if we can't draw a bright line between where one stops and the other starts (much like "child" and "adult"). But the phrase "human life" is laden with baggage. It might mean any living thing with human DNA (which would include Henrietta Lacks's cancer cells) or it might mean a fully-fledged human being (in which case it would not include Henrietta Lacks's cancer cells, nor anencephalic babies, nor brain-dead adults). Either of these is an a priori reasonable definition of the phrase "human life", and so we cannot assume that we agree on the meaning. So when you ask me "is a fetus human life," I literally cannot answer that unless you tell me what you mean by "human life." Yes, a fetus has a full complement of human DNA. No, a fetus is not a fully-fledged human being.

    ReplyDelete
  55. The Far Horizon

    >Excellent! You have taken a small step towards enlightenment. Maybe there is hope for you yet. ;-)

    Wow, this opens up all sorts of new vistas and possibilities: Ron can make me smarter. I should think up some homework for you. Wait, perhaps I should be the one doing the homework? Doh, I'm confused already!

    ReplyDelete