If this claim panned out it would be big news, because Turing machines are supposed to be "universal", that is, capable of computing any computable function. The universality of Turing machines has been the bedrock of computer science for eighty years. And yet you have almost certainly never heard of Selim G. Akl.
IMHO the obscurity in which Akl's work has languished is well-deserved, and I'm more than a little inclined to let these sleeping bogons lie. But if Publius is citing Akl that means that this is probably making the rounds in the Christian-industrial network and so someone is going to have to debunk it sooner or later. It's a dirty job, but I guess somebody's gotta do it, so it might as well be me.
It's a lot easier to talk about computability nowadays than it was in Turing's time because computing is now ubiquitous and hence familiar. Modern consumers have some intuitions about what it means for one computer to be "more powerful" than another, and some idea of what makes one computer more powerful than another: More memory. Bigger disk. Faster processor.
What about having "Intel inside"? Is there anything an Intel processor can do that (say) an AMD processor can't? It turns out the answer depends on what you mean by "do". There is nothing that an Intel processor can display on a screen that an AMD (or any other brand of) processor can't. An Intel processor might be faster, and that might matter to you, but if you disregard time, all processors are essentially equivalent, and their limitations are defined almost entirely by how much memory they have.
On the other hand, if you don't ignore time then the details of the processor architecture do matter. Some processors, for example, have more than one "core", the part of the processor that actually does the work. If a processor has only one core then it can only do one thing at once. It can give the appearance that it is doing multiple things simultaneously by switching rapidly back and forth between multiple tasks, but a processor with multiple cores can do multiple things literally at the same time, with concomitant performance improvements not just from the parallelism but also from the elimination of the switching overhead. Similarly, if you care about security, some processors provide better protections than others against programs interfering with each other, or doing things like sending your banking credentials to Romanian hackers.
What if we ignored all of these practical considerations and considered only what a computer could do in principle if we could make it as powerful as we wanted? Such a super-duper computer (SDC) would have two characteristics: it would run ridiculously fast, and it would have a ridiculous amount of memory. How fast? How much? You name it. Yottahertz. Ziggybytes. The only constraint, if we want this computer to be even remotely plausible as a model of anything that could exist in the real world, is that these numbers be finite. They can be unbounded, that is, we need not commit ourselves to any particular numbers, but they can't be infinite.
As an aside, we actually can make some plausible estimates of upper bounds for these numbers in our universe. For example, barring some major breakthrough in our understanding of physics, a real computer could never have a clock rate faster than the Planck frequency of 10^43 hertz (to put this in perspective, a single photon at this frequency has about the same energy as a ton of TNT), nor more bits of memory than there are elementary particles in the universe (about 10^80). But we are in thought-experiment land and here we are not bound by puny physical constraints. Want a googolplex of RAM? A clock rate equal to Graham's number? You got it.
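For the curious, the TNT comparison is easy to sanity-check with Planck's relation E = hf. The numbers below are just the standard constants (Planck's constant, the nominal 10^43 Hz figure, and 4.184 × 10^9 joules per ton of TNT); the script is back-of-the-envelope arithmetic, nothing more.

```python
# Sanity check: energy of a single photon at roughly the Planck frequency.
h = 6.626e-34         # Planck's constant, joule-seconds
f = 1e43              # rough Planck frequency, hertz
ton_of_tnt = 4.184e9  # joules in one ton of TNT

photon_energy = h * f
print(photon_energy)               # ~6.6e9 joules
print(photon_energy / ton_of_tnt)  # ~1.6 -- on the order of a ton of TNT
```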
It turns out that when you buy a super-duper computer in thought-experiment land the details of the architecture no longer matter. No matter what the details of your architecture, all super-duper computers have the exact same limitations. And yes, super-duper computers do have limitations, the most famous of which is that they cannot compute the halting function for super-duper computers.
Note that I did not say that super-duper computers can't "solve the halting problem." This is because there is no such thing as "the" halting problem; there are a lot of different halting problems, an infinite number of them in fact. They are all of the form: will a given program halt or run forever on some particular input on some particular computer? Notice that there are three parameters here: the program, the input, and the computer. There are many (in fact, an infinite number of) halting problems for which a solution is easily found. But to compute the halting function for a particular computer you have to solve all of the instances of the halting problem for that computer.
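To make "easily found" concrete, here is a toy decider (in Python rather than TM notation, and with names that are mine) for one narrow family of halting instances: programs of the form "while x is not zero, subtract d from x". It settles every instance in that family without running anything, and it generalizes to nothing else, which is exactly the point.

```python
def halts_countdown(x: int, d: int) -> bool:
    """Decide halting for the toy program:
           while x != 0:
               x -= d
    One tiny family of halting instances, decided without simulation."""
    if x == 0:
        return True                  # the loop body never runs
    if d == 0:
        return False                 # x never changes, so it loops forever
    # x lands exactly on 0 iff d points toward 0 and divides x evenly
    return (x > 0) == (d > 0) and x % d == 0

print(halts_countdown(12, 3))   # True: 12 -> 9 -> 6 -> 3 -> 0
print(halts_countdown(12, 5))   # False: skips past 0 and never stops
```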
Turing's central result with respect to the halting problem is not that the halting function can't be computed, it is that no computer can compute its own halting function. I'm not going to go through the proof (you can find a concise presentation at the bottom of this page). The only thing that matters is that the proof is completely general. It applies to any model of computation, not just Turing machines.
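For the flavor of it, here is a hedged sketch of the standard diagonal argument in Python; `halts` stands for the halting function the machine is assumed to compute about its own programs, and `contrary` is a name I'm introducing for illustration.

```python
def contrary(halts, program):
    """Given a claimed halting decider `halts` for this machine, do the
    opposite of whatever it predicts `program` does on its own source."""
    if halts(program, program):
        while True:                  # predicted to halt -> loop forever
            pass
    else:
        return                       # predicted to loop -> halt immediately

# Feed `contrary` (with the machine's own `halts` plugged in) to itself.
# Whatever answer `halts` gives for that case is wrong, so no machine can
# compute its own halting function.  Nothing in the argument cares whether
# the machine is a Turing machine; it only needs to be able to run `contrary`.
```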
Note in particular that the result does not apply to one computer computing the halting function for a different computer. A super-duper computer can easily compute the halting function for your MacBook Pro. This is because your MBP only has a paltry terabyte of disk space and a few measly gigabytes of RAM and runs at a snail's pace of a few gigahertz. A super-duper computer can, for example, simulate your MBP and keep track of all the states that the MBP goes through as it runs until one of two things happens: either the program halts, or the machine enters a state that it was previously in. If the latter happens, the MBP is in an infinite loop. Because there are only a finite number of possible states your MBP can be in, one or the other must happen sooner or later.
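Here is what that simulate-and-watch-for-repeats procedure looks like as a sketch, with the machine abstracted down to a step function over a finite set of states (the function names are mine). Finitely many states plus the pigeonhole principle guarantee that the loop below always terminates with an answer.

```python
def halts_finite_machine(step, initial_state, is_halted):
    """Decide halting for a machine with finitely many states.
    `step` maps a state to its successor; `is_halted` flags halting states.
    The run must either halt or revisit a state and loop forever."""
    seen = set()
    state = initial_state
    while True:
        if is_halted(state):
            return True
        if state in seen:
            return False             # revisited a state: infinite loop
        seen.add(state)
        state = step(state)

# Toy stand-in for the MacBook Pro: an 8-bit register that repeatedly
# adds 37 (mod 256) and halts if it ever reaches 0.
print(halts_finite_machine(lambda s: (s + 37) % 256, 1, lambda s: s == 0))
```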
Now, the cost of actually carrying out this procedure is ridiculously high. An SDC trying to compute the halting function for a machine with N bits of memory would need 2^N bits of memory. But this doesn't matter. We're in thought-experiment-land where resources are free. We can have as much memory as we need as long as it's finite, and 2^N is finite for any finite N. There might be better, more efficient algorithms for computing the halting function, but Turing's proof shows that such an algorithm would require M>N bits of memory to compute the halting function for a machine with N bits of memory.
Can we imagine a meta-super-duper computer that could compute the halting function for a super-duper computer? Remember, an SDC's memory is unbounded. At any given point in time an SDC can only be using a finite amount of memory, but there is no upper bound on this finite number the way there is for your MBP. An MBP can be "maxed out" to the point where you can't add any more memory to it. But an SDC has no such constraint. We know (per Turing) that to compute the halting function for an SDC we somehow need a machine that is more powerful, and it's not clear how we would get more powerful than an SDC.
Happily, we don't actually need to say how we would do it. We can just define a meta-super-duper-computer as one that can compute the halting function for an SDC somehow without actually specifying how it does this. This is a legal maneuver in thought-experiment land. It's actually no different than how we defined an SDC in the first place. We never said how an SDC has access to an unbounded amount of memory, only that it does.
As an aside, we actually can describe how an MSDC could compute the halting function for an SDC. For example, one possibility is to say that every operation that an MSDC performs takes half the time of the preceding operation. This effectively allows an MSDC to perform an infinite number of operations in a finite amount of time, and that is enough for it to be able to compute the halting function. Filling in this kind of detail can make you feel warm and fuzzy that your reasoning still bears some relation to reality, but it's not necessary. And in fact this is false comfort, because we actually left the realm of physical reality back when we imagined an SDC: SDCs do not and cannot really exist.
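The reason the accelerating trick fits infinitely many operations into finite time is just the geometric series: if the first operation takes one second and each subsequent one takes half as long, the total never exceeds two seconds. A one-line check of the partial sums, under that assumption:

```python
# 1 + 1/2 + 1/4 + ... : the partial sums approach 2 and never exceed it.
print(sum(0.5 ** k for k in range(50)))   # ~2.0, within floating-point error
```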
But whatever, back to thought-experiment-land. An MSDC defined in this way is called an oracle for the halting problem for SDCs. An MSDC can -- by definition! -- compute the halting function for an SDC, but not, of course, for an MSDC. Remember, Turing's proof is completely general. No computer can compute its own halting function. It applies to MSDCs and SDCs and MacBook Pros and your pocket calculator. But nothing stops us from imagining halting problem oracles and meta-oracles and meta-meta oracles, just as nothing stopped us from imagining a super-duper computer in the first place.
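If you want to see that generality spelled out, the same diagonal construction goes through one level up, with the SDC-halting oracle available as an extra black box. The names here are mine and purely illustrative:

```python
def contrary_msdc(claimed_msdc_halts, sdc_halting_oracle, program):
    """The diagonal trick at the MSDC level: having an oracle for SDC
    halting on hand doesn't help; any claimed decider for MSDC halting
    is still defeated when this program is fed to itself."""
    if claimed_msdc_halts(program, program):
        while True:
            pass
    else:
        return
```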
What does all this have to do with Turing Machines? Well, it turns out that a Turing Machine is a super-duper computer. It can run arbitrarily fast (as long as it's not infinitely fast) and it has an unbounded memory, and that's all it takes to make an SDC. This is what is meant by the slogan "Turing machines are universal." By the time you get to SDC territory, the details of the architecture no longer matter the way they do in the real world. All SDCs have the same repertoire of capabilities and restrictions, and those capabilities and restrictions are exactly those of Turing machines, and also of dozens of other computational models that have been proposed over the years that all turn out to be equivalent to each other. Yes, there is an infinite hierarchy of more powerful machines that we can imagine on top of SDCs/TMs. But since SDCs/TMs are already vastly more powerful than anything we could ever hope to actually build, they are a useful place to stop. If a TM can't do something, no computer we can ever build can do it.
Again, just as an aside, consider for a moment the implications if we could build an oracle for the halting problem. This machine could answer any mathematical question. Want to figure out whether the Goldbach conjecture is true? Just write a TM program that systematically searches for counterexamples and feed it to a halting oracle. If the oracle says that the program runs forever, the conjecture is true.
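Here is roughly what that counterexample-hunting program looks like, written in Python with a naive primality test rather than as a TM. The only property that matters is that it halts if and only if a counterexample to Goldbach exists, so a (hypothetical) halting oracle's verdict on it settles the conjecture.

```python
def is_prime(n):
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_search():
    """Halts iff some even number >= 4 is NOT the sum of two primes."""
    n = 4
    while True:
        if not any(is_prime(p) and is_prime(n - p) for p in range(2, n - 1)):
            return n                 # counterexample found: halt
        n += 2

# Hand goldbach_search to a halting oracle (which we cannot build):
# "runs forever" means the conjecture is true.
```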
So back to Akl. He describes three kinds of functions that he claims are computable, but not by TMs. As we have already seen, there is really not much sport in this. We already know that the halting function for TMs can't be computed by TMs, but can be computed by a halting oracle. The only thing that would be interesting is if these new functions that were not computable by TMs could be computed by something less magical than a halting oracle, a human brain for example, or a machine that we would actually have some hope of building. And in fact some of Akl's functions can be computed by real physical systems!
So why isn't this bigger news?
It's because Akl's functions are not functions. A function is a mathematical object with a very precise definition: it is a (possibly infinite) set of ordered pairs that defines a map from inputs to outputs. Sets are static. They don't change with time. But we are interested in computation, and computation is a process, and processes are inherently embedded in time (that's what makes them processes rather than states). Turing's model of computation is one that starts with the input to a function and ends with the output of the function having been computed, without imposing any constraints on what happens in between other than that the time between the beginning and end of the process must be finite.
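In code terms, a function in the mathematician's sense is nothing but a lookup table, possibly an infinite one: a static set of input/output pairs with no notion of "during". A finite toy example:

```python
# A function in the set-theoretic sense: just input/output pairs.
# There is no "while it runs"; asking about its intermediate stages
# is a category error.
square = {(0, 0), (1, 1), (2, 4), (3, 9)}
print(dict(square)[3])   # 9
```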
Akl's "functions" don't meet this criterion. The way Akl comes up with things that TMs can't do is not by defining a function that a TM can't compute, but by adding constraints to the computational process that TMs can't fulfill. Again, there is no sport in this. You don't need anywhere near Akl's level of sophistication to show an example of this. Here's a simple one: a TM cannot satisfy the constraint that the number of symbols on its tape must be even at every stage of the computation. If the number of symbols is even, then as soon as the TM writes one additional symbol to the tape the constraint is violated.
But, you say, that is just so obviously stupid. You can easily fix that (assuming you would even want to) by (for example) allowing the TM to write two symbols to the tape as a single step. And you would be absolutely correct. But every one of Akl's examples amounts to nothing more than a more complicated version of this exact same thing: a constraint of some sort that a machine has to obey during the computational process that a TM as normally defined can't meet, but which a minor variation on the TM theme easily can. Specifically, all it takes is redefining what it means for a machine to make a "step", and more specifically, to allow multiple "classic" TM steps to be combined into a single meta-step.
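Here is the parity example as a sketch, with the tape reduced to a symbol count (the setup is mine, just to illustrate the point). A machine that writes one symbol per step violates the "always even" constraint immediately; a machine whose single "step" writes two symbols satisfies it trivially. The function being computed hasn't changed at all; only the bookkeeping of what counts as a step has.

```python
def run(symbols_written_per_step, steps):
    """Track the number of symbols on the tape after each step and check
    the (artificial) constraint that the count is always even."""
    symbols = 0
    for k in range(steps):
        symbols += symbols_written_per_step
        if symbols % 2 != 0:
            return f"constraint violated after step {k + 1}"
    return "constraint satisfied"

print(run(1, 10))   # classic TM step, one symbol at a time: violated at once
print(run(2, 10))   # "meta-step" writing two symbols: satisfied throughout
```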
Seriously, that is what all the fuss is about. (Now perhaps you understand why I was reluctant to take up this topic in the first place.)
But I actually think that all of the preceding discussion is a bit of a red herring, and that there is really something deeper going on here. A hint of this is provided by Publius's reference to Searle's Chinese Room, and an essay entitled "The Empty Brain" with the log-line, "Your brain does not process information, retrieve knowledge or store memories. In short: your brain is not a computer." Here's the thesis:
Computers, quite literally, process information – numbers, letters, words, formulas, images. The information first has to be encoded into a format computers can use, which means patterns of ones and zeroes (‘bits’) organised into small chunks (‘bytes’). On my computer, each byte contains 8 bits, and a certain pattern of those bits stands for the letter d, another for the letter o, and another for the letter g. Side by side, those three bytes form the word dog. One single image – say, the photograph of my cat Henry on my desktop – is represented by a very specific pattern of a million of these bytes (‘one megabyte’), surrounded by some special characters that tell the computer to expect an image, not a word.

Quite simply, this essay betrays a deep ignorance of how computers work. It is simply not true that data is stored in a computer in the straightforward manner that the author implies. Images, for example, are hardly ever stored in raw form, with the rows of pixels corresponding in a straightforward manner to the contents of memory. Images are invariably stored in compressed form, as jpgs or gifs, in which case the bits that represent the image appear more or less random to casual inspection. Even text is nowhere near as straightforward as the author implies. There may have been a time decades ago when all text was ascii and stored flat, but those days are long gone. Nowadays we have unicode, PDF, HTML, base64-encoded, quoted-printable, and ten thousand other text formats, none of which map text to memory contents in a straightforward way.
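To see how far real storage is from the d-o-g-byte-by-byte picture, compare the bytes you actually get for the same text under a few common encodings and under compression (standard-library calls only):

```python
import base64, gzip

text = "dog"
print(text.encode("utf-8"))               # b'dog' -- the one "flat" case
print(base64.b64encode(text.encode()))    # b'ZG9n'
print(text.encode("utf-16"))              # b'\xff\xfed\x00o\x00g\x00'
print(gzip.compress(b"dog" * 1000)[:16])  # compressed: nothing like 'dog'
```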
Now, of course, if we scan your brain we won't find gifs and jpgs and pdfs. Your brain processes information differently from your MacBook Pro. But that doesn't mean that your brain is not a computer. It (almost certainly) is. Its architecture and software are very different from your MBP, and reverse-engineering it is very difficult. But we have a pretty good handle on the basics: your brain is made of neurons that process electrical signals from your peripheral nervous system according to rules that comply with the laws of physics and so are expressible mathematically. That's pretty much all it takes to make a computer.
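That last sentence is the argument in miniature: a neuron takes weighted inputs and produces an output according to a fixed, physically lawful rule, and rules like that can be written down. Here is the textbook caricature (a bare threshold unit, nowhere near the real biophysics, offered only to make the "expressible mathematically" point):

```python
def neuron(inputs, weights, threshold):
    """Crude caricature of a neuron: fire iff the weighted sum of the
    incoming signals reaches a threshold.  Real neurons are vastly
    messier, but still governed by rules you can write down."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

print(neuron([1, 0, 1], [0.5, 0.9, 0.7], 1.0))   # fires: 0.5 + 0.7 >= 1.0
```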
It is true that your brain is not a Turing machine. It is in fact much less powerful than a TM because a TM is a super-duper computer with unbounded memory and your brain isn't. It is bounded. We don't know exactly what the bound is, but there is no doubt that it's there.
It is actually not surprising that we don't fully understand our own brains. This is, in fact, impossible. We are computers so, per Turing, we can't compute our own halting function. But just because we can't do it doesn't mean it can't be done. A halting oracle for the human brain is certainly possible in principle. Whether it's possible in practice is an open question. I'll give long odds against, but I wouldn't bet my entire life savings.