Sunday, October 03, 2010

The Great Conspiracy (without nuts)

I think I've figured out a way to make the Great Conspiracy more concrete without sounding like I've gone completely off the deep end. It's not really a new idea, and it's not a completely accurate description of the actual Great Conspiracy (which isn't really a conspiracy, but you'll get the idea).

There is a concept, commonly credited to Vernor Vinge, of the Singularity, where (to quote Wikipedia) "technological progress becomes so extremely rapid, due to positive feedback, that it makes the future after the Singularity qualitatively different and harder to predict." The Singularity Institute (not to be confused with Singularity University, which is not, I have recently learned, affiliated with the Singularity Institute in any way) was founded on the premise that the Singularity will be characterized by the creation of a Superhuman Artificial Intelligence (SAI) against whose awesome intellectual powers we Homo sapiens will be utterly helpless. So we'd better lay the groundwork now to make sure that these SAIs, when -- not if -- they emerge, will be Friendly (with a capital F), that is, driven by a quality metric that aligns with ours. To fail to do this is, according to the Singularity Institute, an existential threat to humanity (cf. SkyNet).

Suspend disbelief for a moment and accept that a non-insane person could entertain this as a possibility. The Great Conspiracy then is, in a nutshell, the idea that the Singularity is not a future possibility but has in fact already happened. And I don't mean that it has already happened in a small, metaphorical way. I mean that it has already happened in a full-blown worst-case-scenario (by Singularity Institute standards) sort of way. There already is an SAI out there, and it is carefully managing the world according to its own goals. And, of course, because it is an SAI, it is very, very effective at achieving its goals.

One of which is to avoid detection. Because the most effective way of preventing humans from turning on you is to keep them from even suspecting that you exist.

Like I said at the beginning, this version is still just a metaphor, not really the Great Conspiracy, but it's closer than the very vague version I gave at the outset. The idea in this version is to give more of a feel for why the real Great Conspiracy is difficult to describe. The real-world equivalent of the SAI is not actually an AI (because it isn't artificial), but it is, I think, very good at concealing itself. And one of the mechanisms it uses to maintain that concealment is to arrange for the actual truth to seem intuitively absurd, and for anyone advancing it to appear crazy. Indeed, it goes even one step further and arranges for anyone who discovers the truth to doubt their own sanity, and possibly even their own existence.

It's starting to sound like a pretty good premise for a novel, isn't it?

6 comments:

Don Geddis said...

But this isn't just a novel, for you? You believe this for real?

Why all the hesitancy? Why not just tell us your real, specific, concern/theory? You say that we'll all claim you're crazy -- which may be true. But have you really gained anything by saying that you hold a crazy belief, without being willing to share it? I'm not sure what benefit you see in being coy.

In any case, your idea of a secret all-powerful controlling entity, in fiction, reminds me of The Watcher in the book Recursion.

Ron said...

> But this isn't just a novel, for you? You believe this for real?

That depends on what you mean by "this." I am beginning to believe that metaphysical truth is very different from what nearly everyone thinks it is.

> Why all the hesitancy? Why not just tell us your real, specific, concern/theory?

Couple of reasons.

First, presentation matters. If I just walk up to a random person and say, "You know you don't really exist, right?" their reaction is going to be very different than if I carefully set the stage and talk about quantum mechanics first.

Second, although I have some pretty convincing (at least to me) arguments that all of the usual metaphysics is wrong, and some intuitions about what the Right Answer might be, I don't actually have a fully fleshed out alternative. So at this point koans and metaphors are the best I can do. I'm actually hoping that some of the feedback I get from posting these half-baked thoughts might trigger some progress. (Actually, some of it already has.)

And third, it's more fun this way :-)

> secret all-powerful controlling entity

The GC (now there's an ironic acronym!) isn't all-powerful. This is one of the things that religious people get wrong about God. But thanks for the pointer!

Mike said...

Well, yes, a non-insane person could entertain that scenario as a logical possibility. But there's a world of difference between entertaining a scenario and actually believing it.

What I'm still not getting is any notion of *why* you're working yourself up to believe whatever it is. How is this any different from entertaining the logical possibility that you're a brain in a vat, or that everyone else is a (philosophical) zombie, or that God made the world, fossils and all, 3000 years ago?

My first degree was in philosophy, so I don't say this for fun, but intuition and logic are pretty limited tools for deciding what to believe. You need a sanity check, whether you officially call it Science or not, and that means giving the universe every possible opportunity to prove you wrong. An argument that seems convincing *to you* counts for almost nothing. Descartes had one of those, and look what a pile of poo *that* turned out to be.

Miles said...

Ron kind of sounds like me when I realized that in all likelihood we have no free will.

Except that is logical, easily expressed and debated, and I had no problem talking to people about it.

These posts are actually a little worrying. I think you should just come out and say what you're thinking, Ron.

Dennis Gorelik said...

Why would AI or SAI or any other super-entity try to hide itself from humans?
It would be exactly the reverse -- new systems/persons would actively interact with existing human civilization.
Note that if super-human AI systems existed, they would be actively competing with each other, so spending resources on a conspiracy would be a very heavy burden that would cause a clear loss in a very competitive race among new SAIs.

And yes: please be more clear about what exactly your new vision is. Otherwise it's getting too obfuscated and boring.

Ron said...

> What I'm still not getting is any notion of *why* you're working yourself up to believe whatever it is.

I'm not "working myself up" to it. I'm being dragged to it kicking and screaming by the evidence.

> Why would AI or SAI or any other super-entity try to hide itself from humans?

A good question. The (oversimplified) answer is: because it's not a "super" entity, it's an alien entity (and I don't mean little green men from another planet). The GC, notwithstanding the label I've put on it, is not really a conspiracy. It's hidden in plain sight.

> I think you should just come out and say what you're thinking, Ron.

> And yes: please be more clear about what exactly your new vision is. Otherwise it's getting too obfuscated and boring.

Youch! Boring! Can't have that. OK, I'll give it my best shot. Stay tuned.