If this claim were to pan out it should be big news because Turing machines are supposed to be "universal", that is, capable of computing any computable function. The universality of Turing machines has been the bedrock of computer science for eighty years. And yet you have almost certainly never heard of Selim G. Akl.
IMHO the obscurity in which Akl's work has languished is well-deserved, and I'm more than a little inclined to let these sleeping bogons lie. But if Publius is citing Akl that means that this is probably making the rounds in the Christian-industrial network and so someone is going to have to debunk it sooner or later. It's a dirty job, but I guess somebody's gotta do it, so it might as well be me.
It's a lot easier to talk about computability nowadays than it was in Turing's time because computing is now ubiquitous and hence familiar. Modern consumers have some intuitions about what it means for one computer to be "more powerful" than another, and some idea of what makes one computer more powerful than another: More memory. Bigger disk. Faster processor.
What about having "Intel inside"? Is there anything an Intel processor can do that (say) an AMD processor can't? It turns out the answer depends on what you mean by "do". There is nothing that an Intel processor can display on a screen that an AMD (or any other brand of) processor can't. An Intel processor might be faster, and that might matter to you, but if you disregard time, all processors are essentially equivalent, and their limitations are defined almost entirely by how much memory they have.
On the other hand, if you don't ignore time then the details of the processor architecture do matter. Some processors, for example, have more than one "core", the part of the processor that actually does the work. If a processor has only one core then it can only do one thing at once. It can give the appearance that it is doing multiple things simultaneously by switching rapidly back and forth between multiple tasks, but a processor with multiple cores can do multiple things literally at the same time, with concomitant performance improvements not just from the parallelism but also from the elimination of the switching overhead. Similarly, if you care about security, some processors provide better protections than others against programs interfering with each other, or doing things like sending your banking credentials to Romanian hackers.
What if we ignored all of these practical considerations and considered only what a computer could do in principle if we could make it as powerful as we wanted? Such a super-duper computer (SDC) would have two characteristics: it would run ridiculously fast, and it would have a ridiculous amount of memory. How fast? How much? You name it. Yottahertz. Ziggybytes. The only constraint, if we want this computer to be even remotely plausible as a model of anything that could exist in the real world, is that these numbers be finite. They can be unbounded, that is, we need not commit ourselves to any particular numbers, but they can't be infinite.
As an aside, we actually can make some plausible estimates of upper bounds for these numbers in our universe. For example, barring some major breakthrough in our understanding of physics, a real computer could never have a clock rate faster than the Planck frequency of 10^43 hertz (To put this in perspective, a single photon at this frequency has about the same energy as a ton of TNT) nor more bits of memory than there are elementary particles in the universe (about 10^80). But we are in thought-experiment land and here we are not bound by puny physical constraints. Want a googolplex of RAM? A clock rate equal to Graham's number? You got it.
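For the skeptical, the TNT comparison is easy to check. A photon's energy is Planck's constant times its frequency, and a ton of TNT is defined as about 4.2 x 10^9 joules, so:

```latex
E = h\nu \approx (6.6\times 10^{-34}\ \mathrm{J\,s})\,(10^{43}\ \mathrm{Hz}) \approx 6.6\times 10^{9}\ \mathrm{J} \approx 1.6\ \text{tons of TNT}
```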
It turns out that when you buy a super-duper computer in thought-experiment land the details of the architecture no longer matter. No matter what the details of your architecture, all super-duper computers have the exact same limitations. And yes, super-duper computers do have limitations, the most famous of which is that they cannot compute the halting function for super-duper computers.
Note that I did not say that super-duper computers can't "solve the halting problem." This is because there is no such thing as "the" halting problem; there are a lot of different halting problems, an infinite number of them in fact. They are all of the form: will a given program halt or run forever on some particular input on some particular computer? Notice that there are three parameters here: the program, the input, and the computer. There are many (in fact, an infinite number of) halting problems for which a solution is easily found. But to compute the halting function for a particular computer you have to solve all of the instances of the halting problem for that computer.
Turing's central result with respect to the halting problem is not that the halting function can't be computed, it is that no computer can compute its own halting function. I'm not going to go through the proof (you can find a concise presentation at the bottom of this page). The only thing that matters is that the proof is completely general. It applies to any model of computation, not just Turing machines.
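If you just want the flavor of the argument, here is a minimal sketch in Python (not Turing's formalism, obviously). The function `halts` is hypothetical: it stands for the halting decider whose existence we are assuming, for the sake of contradiction, on the very machine it is supposed to describe.

```python
def halts(program, data):
    """Hypothetical: returns True if program(data) eventually halts, False if it runs forever.
    We assume, for the sake of contradiction, that this machine can compute this about itself."""
    raise NotImplementedError("Turing's argument shows no such decider can run on this machine")

def paradox(program):
    # Do the opposite of whatever the decider predicts this call will do.
    if halts(program, program):
        while True:
            pass              # decider said we halt, so loop forever
    else:
        return                # decider said we loop, so halt immediately

# Now consider paradox(paradox). If halts(paradox, paradox) returns True, then
# paradox(paradox) loops forever; if it returns False, paradox(paradox) halts.
# Either way the decider is wrong about its own machine, so it cannot exist.
```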
Note in particular that the result does not apply to one computer computing the halting function for a different computer. A super-duper computer can easily compute the halting function for your MacBook Pro. This is because your MBP only has a paltry terabyte of disk space and a few measly gigabytes of RAM and runs as a snails-pace of a few gigahertz. A super-duper computer can, for example, simulate your MBP and keep track of all the states that the MBP goes through as it runs until one of two things happens: either the program halts, or the machine enters a state that it was previously in. If the latter happens, the MBP is in an infinite loop. Because there are only a finite number of possible states your MBP can be in, one or the other must happen sooner or later.
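Here is a toy version of that procedure, just to show the shape of it. The `step` function and the "HALT" sentinel are my own inventions for illustration; the point is only that on a machine with finitely many possible states, "run it and watch for a repeated state" is a complete decision procedure.

```python
def halts_on_finite_machine(initial_state, step):
    """Decide halting for a machine with finitely many possible states.
    `step` maps the current state to the next state, or to the sentinel "HALT"."""
    seen = set()
    state = initial_state
    while state != "HALT":
        if state in seen:
            return False      # we have been here before: the machine is in an infinite loop
        seen.add(state)
        state = step(state)
    return True               # the machine reached its halting state

# A toy machine that counts down and halts:
print(halts_on_finite_machine(9, lambda s: "HALT" if s == 0 else s - 1))  # True
# A toy machine that cycles through three states forever:
print(halts_on_finite_machine(0, lambda s: (s + 1) % 3))                  # False
```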
Now, the cost of actually carrying out this procedure is ridiculously high. An SDC trying to compute the halting function for a machine with N bits of memory would need 2^N bits of memory. But this doesn't matter. We're in thought-experiment-land where resources are free. We can have as much memory as we need as long as it's finite, and 2^N is finite for any finite N. There might be better, more efficient algorithms for computing the halting function, but Turing's proof shows that such an algorithm would require M>N bits of memory to compute the halting function for a machine with N bits of memory.
Can we imagine a meta-super-duper computer that could compute the halting function for a super-duper computer? Remember, an SDC's memory is unbounded. At any given point in time an SDC can only be using a finite amount of memory, but there is no upper bound on this finite number the way there is for your MBP. An MBP can be "maxed out" to the point where you can't add any more memory to it. But an SDC has no such constraint. We know (per Turing) that to compute the halting function on an SDC we somehow need a machine that is more powerful, and it's not clear how we would get more powerful than an SDC.
Happily, we don't actually need to say how we would do it. We can just define a meta-super-duper-computer as one that can compute the halting function for an SDC somehow without actually specifying how it does this. This is a legal maneuver in thought-experiment land. It's actually no different than how we defined an SDC in the first place. We never said how an SDC has access to an unbounded amount of memory, only that it does.
As an aside, we actually can describe how an MSDC could compute the halting function for an SDC. For example, one possibility is to say that every operation that an MSDC performs takes half as long as the preceding operation. This effectively allows an MSDC to perform an infinite number of operations in a finite amount of time, and that is enough for it to be able to compute the halting function. Filling in this kind of detail can make you feel warm and fuzzy that your reasoning still bears some relation to reality, but it's not necessary. And in fact this is false comfort because we actually left the realm of physical reality back when we imagined an SDC, because SDCs do not and cannot really exist.
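The arithmetic behind that trick is just a geometric series: if the first operation takes one second and each subsequent operation takes half as long as the one before it, then infinitely many operations fit into two seconds of wall-clock time:

```latex
\sum_{k=0}^{\infty} \frac{1}{2^{k}} = 1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = 2
```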
But whatever, back to thought-experiment-land. An MSDC defined in this way is called an oracle for the halting problem for SDCs. An MSDC can -- by definition! -- compute the halting function for an SDC, but not, of course, for an MSDC. Remember, Turing's proof is completely general. No computer can compute its own halting function. It applies to MSDCs and SDCs and MacBook Pros and your pocket calculator. But nothing stops us from imagining halting problem oracles and meta-oracles and meta-meta oracles, just as nothing stopped us from imagining a super-duper computer in the first place.
What does all this have to do with Turing Machines? Well, it turns out that a Turing Machine is a super-duper computer. It can run arbitrarily fast (as long as it's not infinitely fast) and it has an unbounded memory, and that's all it takes to make an SDC. This is what is meant by the slogan "Turing machines are universal." By the time you get to SDC territory, the details of the architecture no longer matter the way they do in the real world. All SDCs have the same repertoire of capabilities and restrictions, and those capabilities and restrictions are exactly those of Turing machines, and also of dozens of other computational models that have been proposed over the years that all turn out to be equivalent to each other. Yes, there is an infinite hierarchy of more powerful machines that we can imagine on top of SDCs/TMs. But since SDCs/TMs are already vastly more powerful than anything we could ever hope to actually build, they are a useful place to stop. If a TM can't do something, no computer we can ever build can do it.
Again, just as an aside, consider for a moment the implications if we could build an oracle for the halting problem. This machine could answer any mathematical question. Want to figure out whether the Goldbach conjecture is true? Just write a TM program that systematically searches for counterexamples and feed it to a halting oracle. If the oracle says that the program runs forever, the conjecture is true.
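To make that concrete, here is a sketch of the kind of program you would feed to the oracle (the deliberately naive primality test is mine, chosen for clarity rather than speed): it halts if and only if it finds an even number that is not the sum of two primes.

```python
def is_prime(n):
    """Naive trial-division primality test. Slow, but only correctness matters here."""
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

def goldbach_counterexample_search():
    """Halts iff some even number >= 4 is NOT the sum of two primes.
    Hand this to a halting oracle: "runs forever" means the conjecture is true."""
    n = 4
    while True:
        if not any(is_prime(p) and is_prime(n - p) for p in range(2, n)):
            return n          # found a counterexample, so halt
        n += 2                # no counterexample yet, keep searching
```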
So back to Akl. He describes three kinds of functions that he claims are computable, but not by TMs. As we have already seen, there is really not much sport in this. We already know that the halting function for TMs can't be computed by TMs, but can be computed by a halting oracle. The only thing that would be interesting is if these new functions that were not computable by TMs could be computed by something less magical than a halting oracle, a human brain for example, or a machine that we would actually have some hope of building. And in fact some of Akl's functions can be computed by real physical systems!
So why isn't this bigger news?
It's because Akl's functions are not functions. A function is a mathematical object with a very precise definition: it is a (possibly infinite) set of ordered tuples that defines a map from inputs to outputs. Sets are static. They don't change with time. But we are interested in computation, and computation is a process, and processes are inherently embedded in time (that's what makes them processes rather than states). Turing's model of computation is one that starts with the input to a function and ends with the output of the function having been computed without imposing any constraints on what happens in between other than that the time between the beginning and end of the process must be finite.
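To make the distinction concrete, here is a toy illustration (mine, not Akl's or Turing's): the same function presented first as a static set of ordered pairs, and then as a process that unfolds step by step in time.

```python
# The function "square", restricted to a few inputs, as a static set of
# ordered pairs. There is no notion of time here at all; the set just is.
square_as_a_set = {(0, 0), (1, 1), (2, 4), (3, 9)}

# The same function as a process: a computation that takes steps, one after
# another, and eventually (after finite time) produces the output.
def square_as_a_process(x):
    total = 0
    for _ in range(x):   # repeated addition, one step at a time
        total += x
    return total
```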
Akl's "functions" don't meet this criterion. The way Akl comes up with things that TMs can't do is not by defining a function that a TM can't compute, but by adding constraints to the computational process that TMs can't fulfill. Again, there is no sport in this. You don't need anywhere near Akl's level of sophistication to show an example of this. Here's a simple one: a TM cannot satisfy the constraint that the number of symbols on its tape must be even at every stage of the computation. If the number of symbols is even, then as soon as the TM writes one additional symbol to the tape the constraint is violated.
But, you say, that is just so obviously stupid. You can easily fix that (assuming you would even want to) by (for example) allowing the TM to write two symbols to the tape as a single step. And you would be absolutely correct. But every one of Akl's examples amounts to nothing more than a more complicated version of this exact same thing: a constraint of some sort that a machine has to obey during the computational process that a TM as normally defined can't meet, but which a minor variation on the TM theme easily can. Specifically, all it takes is redefining what it means for a machine to make a "step", and more specifically, to allow multiple "classic" TM steps to be combined into a single meta-step.
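Here is roughly what that fix looks like, in a deliberately silly sketch of my own (a Python list standing in for the tape): a classic one-symbol write violates the "always even" constraint at every other step boundary, while a meta-step that writes two symbols as one indivisible operation never does.

```python
tape = []                          # zero symbols to start: an even number

def classic_step(symbol):
    """Write one symbol per step: the symbol count alternates even/odd,
    so the 'always even' constraint is violated after every other step."""
    tape.append(symbol)

def meta_step(first, second):
    """Two classic writes bundled into a single indivisible 'meta-step':
    the symbol count is even at every step boundary."""
    tape.append(first)
    tape.append(second)

meta_step("1", "0")
meta_step("1", "1")
assert len(tape) % 2 == 0          # the constraint holds after every meta-step
```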
Seriously, that is what all the fuss is about. (Now perhaps you understand why I was reluctant to take up this topic in the first place.)
But I actually think that all of the preceding discussion is a bit of a red herring, and that there is really something deeper going on here. A hint of this is provided by Publius's reference to Searle's Chinese Room, and an essay entitled "The Empty Brain" with the log-line, "Your brain does not process information, retrieve knowledge or store memories. In short: your brain is not a computer." Here's the thesis:
Computers, quite literally, process information – numbers, letters, words, formulas, images. The information first has to be encoded into a format computers can use, which means patterns of ones and zeroes (‘bits’) organised into small chunks (‘bytes’). On my computer, each byte contains 8 bits, and a certain pattern of those bits stands for the letter d, another for the letter o, and another for the letter g. Side by side, those three bytes form the word dog. One single image – say, the photograph of my cat Henry on my desktop – is represented by a very specific pattern of a million of these bytes (‘one megabyte’), surrounded by some special characters that tell the computer to expect an image, not a word.

Quite simply, this essay betrays a deep ignorance of how computers work. It is simply not true that data is stored in a computer in the straightforward manner that the author implies. Images, for example, are hardly ever stored in raw form, with the rows of pixels corresponding in a straightforward manner to the contents of memory. Images are invariably stored in compressed form, as jpgs or gifs, in which case the bits that represent the image appear more or less random to casual inspection. Even text is nowhere near as straightforward as the author implies. There may have been a time decades ago when all text was ASCII and stored flat, but those days are long gone. Nowadays we have Unicode, PDF, HTML, base64-encoded, quoted-printable, and ten thousand other text formats, none of which map text to memory contents in a straightforward way.
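If you have a Python prompt handy you can see the point about text in thirty seconds: the same three letters produce quite different byte patterns depending on how they are encoded or wrapped, and only the plain-ASCII case looks anything like the essay's tidy one-byte-per-letter story.

```python
import base64, zlib

text = "dog"
print(text.encode("ascii"))             # b'dog' -- the one case that matches the essay's story
print(text.encode("utf-16"))            # b'\xff\xfed\x00o\x00g\x00' -- byte-order mark plus two bytes per letter
print(base64.b64encode(text.encode()))  # b'ZG9n' -- same word, completely different bytes
print(zlib.compress(text.encode()))     # compressed: looks more or less random to casual inspection
```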
Now, of course, if we scan your brain we won't find gifs and jpgs and pdfs. Your brain processes information differently from your Macbook Pro. But that doesn't mean that your brain is not a computer. It (almost certainly) is. Its architecture and software are very different from your MBP, and reverse-engineering it is very difficult. But we have a pretty good handle on the basics: your brain is made of neurons that process electrical signals from your peripheral nervous system according to rules that comply with the laws of physics and so are expressible mathematically. That's pretty much all it takes to make a computer.
It is true that your brain is not a Turing machine. It is in fact much less powerful than a TM because a TM is a super-duper computer with unbounded memory and your brain isn't. It is bounded. We don't know exactly what the bound is, but there is no doubt that it's there.
It is actually not surprising that we don't fully understand our own brains. This is, in fact, impossible. We are computers so, per Turing, we can't compute our own halting function. But just because we can't do it doesn't mean it can't be done. A halting oracle for the human brain is certainly possible in principle. Whether it's possible in practice is an open question. I'll give long odds against, but I wouldn't bet my entire life savings.
Leviathan Part 1
I think your essay started out well, but it seems like you got tired towards the end and wrapped it up quite quickly!
> And yet you have almost certainly never heard of Selim G. Akl.
Quickly: name 5 computer science professors.
You might check out Dr. Akl's website. He has his significant research papers there.
>What about having "Intel inside"? Is there anything an Intel processor can do that (say) an AMD processor can't? It turns out the answer depends on what you mean by "do".
You'll find the philosophy behind "division" is different.
>It's because Akl's functions are not functions. A function is a mathematical object with a very precise definition: it is a (possibly infinite) set of ordered tuples that defines a map from inputs to outputs. Sets are static. They don't change with time. But we are interested in computation, and computation is a process, and processes are inherently embedded in time (that's what makes them processes rather than states). Turing's model of computation is one that starts with the input to a function and ends with the output of the function having been computed without imposing any constraints on what happens in between other than that the time between the beginning and end of the process must be finite.
Well, first you remove time -- a TM is only mapping from the input set {x1, x2, x3, ..., xn} [the domain] to the output set {y1, y2, y3, ..., yn} [the range]. The sets are static and don't change with time. If this is your definition, then certainly the human brain is not a TM (as the human brain exists in time).
However, at the end, you bring time back in computation -- the mapping has to happen in finite time.
Nonuniversality in Computation: The Myth of the Universal Computer
Non-Universality in computation: Thirteen Misconceptions Rectified
In these papers, Akl gives 6 types of computations possible by a register machine, but not by a TM:
Computations with:
1. time-varying variables
2. time-varying computational complexity
3. rank-varying computational complexity
4. interacting variables
5. uncertain time constraints
6. global mathematical constraints
He then reviews 13 misconceptions of his proof. You might take a look at 8, 9, and 10.
>Your brain processes information differently from your Macbook Pro. But that doesn't mean that your brain is not a computer. It (almost certainly) is. Its architecture and software are very different from your MBP, and reverse-engineering it is very difficult. But we have a pretty good handle on the basics: your brain is made of neurons that process electrical signals from your peripheral nervous system according to rules that comply with the laws of physics and so are expressible mathematically. That's pretty much all it takes to make a computer.
Leviathan Part 2
Now for the big guns:
"Is the Brain a Digital Computer?" by John Searle
Searle distinguishes between 3 questions:
1. Is the brain a digital computer? [focus of this work]
2. Is the mind a computer program?
3. Can the operations of the brain be simulated on a digital computer?
He quickly dispatches #2 as obviously "No."
He also answers #3 as obviously "Yes", "at least on a natural interpretation": "That is, naturally interpreted, the question means: Is there some description of the brain such that under that description you could do a computational simulation of the operations of the brain. But since according to Church's thesis, anything that can be given a precise enough characterization as a set of steps can be simulated on a digital computer, it follows trivially that the question has an affirmative answer. The operations of the brain can be simulated on a digital computer in the same sense in which weather systems, the behavior of the New York stock market or the pattern of airline flights over Latin America can. So our question is not, "Is the mind a program?" The answer to that is, "No". Nor is it, "Can the brain be simulated?" The answer to that is, "Yes". The question is, "Is the brain a digital computer?" And for purposes of this discussion I am taking that question as equivalent to: "Are brain processes computational?"
Note - the brain can be simulated by a digital computer, but the brain itself is not a digital computer.
At the end, #8 on the list, Searle concludes:
8. "We cannot avoid the foregoing results by supposing that the brain is doing “information processing”. The brain, as far as its intrinsic operations are concerned, does no information processing.
It is a specific biological organ and its specific neurobiological processes cause specific forms of
intentionality. In the brain, intrinsically, there are neurobiological processes and sometimes they
cause consciousness. But that is the end of the story.*"
Leviathan Part 3
It wasn't highly emphasized at the beginning, but there is a recent research result that is said to discover a problem that only a quantum computer can solve -- and a classical computer cannot. This would appear to falsify the Church-Turing thesis (as a TM is classical).
Article in Quanta magazine:
Finally, a Problem That Only Quantum Computers Will Ever Be Able to Solve
The research paper:
Oracle Separation of BQP and PH
Leviathan Part 4
Just what is a computation, and what is a computer?
Here we have Dr. Akl presenting a TEDx talk on Unconventional Computing.
He asks the question: What is computation?
2+2?
A or B?
Reading input from the outside world?
Producing output that affects the outside world?
Limited to what a mathematical model is capable of doing?
Evaluation of recursive functions?
Can neurons be said to compute?
A flower opening?
Chemical reactions?
Cell multiplication, DNA replication?
Electron spin?
Is everything just made of information? Everything is bits? Therefore every process is a computation?
Forget Space-Time: Information May Create the Cosmos
It likely predates this, but it's been my observation that this view (everything is bits) started taking off with the resolution of the problem of Maxwell's Demon in 1987. Maxwell's Demon, as he is opening and closing the door, has to create and destroy a few bits of memory -- and that bit destruction increases entropy by at least as much as the entropy reduction that comes from the demon working the door.
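The quantitative form of that resolution is Landauer's principle: erasing one bit of the demon's memory dissipates at least

```latex
\Delta E \ge k_{B} T \ln 2
```

of heat into the surroundings, raising the environment's entropy by at least k_B ln 2 -- just enough to pay for the entropy the demon removes by sorting molecules through the door.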
Is this boulder doing a calculation?
Is a toilet flush valve and tank a computer?
If everything is computation and computers are everywhere, then those words lose their meaning. We'll need new words.
[Edited to fix a small but significant typo]
> Quickly: name 5 computer science professors.
Knuth, McCarthy, Aho, Hopcroft, and Ullman.
(Those last three are cheating a little so I'll throw in Lamport and Kernighan. And Thompson and Ritchie and Aaronson and Berners-Lee and Diffie and Hellman and ...)
Akl just isn't in that league, which he absolutely would be if he'd done what he claims to have done.
> If this is your definition, then certainly the human brain is not a TM (as the human brain exists in time).
The brain is a (system emulatable by a) TM embedded in an environment. So? That doesn't change the fundamental fact that:
> the brain can be simulated by a digital computer
That's right. And that is all that matters.
> but the brain itself is not a digital computer
This is a distinction without a difference. It doesn't *matter* whether your brain "is" a digital computer (whatever that means), what matters is that it can be emulated by one, which means that your brain can't do anything that a digital computer can't do. So the I/O behavior of a digital brain can be made indistinguishable from a human brain.
That's the ball game, because once you've conceded all that, then you cannot distinguish between a human, a computer, and a philosophical zombie based on their I/O behavior. Your only recourse is to retreat into a variety of logical fallacies, like arguments from ignorance, or biological bigotry (e.g. "there are neurobiological processes and sometimes they cause consciousness. But that is the end of the story.")
> there is a recent research result that is said to discover a problem that only a quantum computer can solve -- and a classical computer cannot. This would appear to falsify the Church-Turing thesis (as a TM is classical).
Nope. You need to go read the original paper. Quanta got it wrong. Classical computers can solve forrelation, they just can't do it efficiently (and this is true even if P=NP).
See https://en.wikipedia.org/wiki/Church–Turing_thesis
"If BQP is shown to be a strict superset of BPP, it would invalidate the complexity-theoretic Church–Turing thesis. In other words, there would be efficient quantum algorithms that perform tasks that do not have efficient probabilistic algorithms. This would not however invalidate the original Church–Turing thesis..."
> Is everything just made of information? Everything is bits? Therefore every process is a computation?
Yes, yes, and yes.
> Is a toilet flush valve and tank a computer?
It can certainly be emulated by a TM (though obviously not the other way around). Whether or not you want to call a toilet valve a computer is a matter of taste. It's a finite state machine with three states, so about 1.6 bits of storage. Most people would not consider that a computer, but there's no accounting for terminological taste.
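For what it's worth, here is what that three-state machine might look like as code; the state and event names are mine, made up for illustration.

```python
import math

# A toy model of a toilet tank as a three-state finite state machine.
# Three states is log2(3), i.e. about 1.6 bits of "storage", as noted above.
TRANSITIONS = {
    ("full", "flush"):         "emptying",
    ("emptying", "empty"):     "refilling",
    ("refilling", "float-up"): "full",
}

def step(state, event):
    # Events that don't apply in the current state are simply ignored.
    return TRANSITIONS.get((state, event), state)

state = "full"
for event in ["flush", "empty", "float-up"]:
    state = step(state, event)
print(state, math.log2(3))   # back to "full" after one flush cycle; ~1.58 bits
```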
Is an abacus a computer? A four-function calculator? An Apple II? The Google cluster? The only real differences between all of these systems is their clock speed and how much memory they have.