1. If a proposition P is known, then P is true

2. If the conjunction P&Q is known, then P is known and Q is known

3. If P is true, then it is possible to know P

4. If ~P is a logical tautology, then P is false

Then you can prove:

5. If P is true, then P is known

In other words, if 1-4 are true, then all truths are known.
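For reference, here is a sketch of the derivation (following the standard presentation of Fitch's proof, writing Kp for "P is known" and Lp for "P is possible"):

```latex
\begin{array}{lll}
1. & K(p \land \lnot Kp)            & \text{assume, for reductio}\\
2. & Kp \land K\lnot Kp             & \text{from 1, by premise 2}\\
3. & Kp                             & \text{from 2}\\
4. & K\lnot Kp                      & \text{from 2}\\
5. & \lnot Kp                       & \text{from 4, by premise 1}\\
6. & \lnot K(p \land \lnot Kp)      & \text{reductio on 3 and 5; discharges assumption 1}\\
7. & \lnot L K(p \land \lnot Kp)    & \text{from 6, by premise 4 (6 rests on no assumptions)}\\
8. & p \land \lnot Kp               & \text{assume, for reductio}\\
9. & L K(p \land \lnot Kp)          & \text{from 8, by premise 3}\\
10. & \lnot(p \land \lnot Kp)       & \text{reductio on 7 and 9; discharges assumption 8}\\
11. & p \to Kp                      & \text{from 10, by propositional logic}
\end{array}
```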

You can digest this result in at least two different ways:

1. It's a formal proof of the existence of an omniscient being, i.e. God

2. The conclusion is clearly false, and so at least one of the premises must be false.

If, like me, you choose to cast your lot with option 2 then it makes a fun little puzzle to try to figure out which of the premises is (or are) false. You can read up on all of the different ways that philosophers have tried to resolve Fitch here (with some extra food for thought here). Personally, I think the answer is obvious and simple, and a good example of why modern philosophy needs to take computability and complexity theory more seriously than it does.

If you want to try to work it out for yourself, stop reading now. Spoiler alert.

-------

It seems pretty clear to me that the problematic assumption is #3. There are a lot of ways to argue this, but the one that I find most convincing is simply to observe that the universe is finite while there are an infinite number of potentially knowable truths. Hence, there must exist truths which cannot even be *represented* (let alone known) in this universe because their representation requires more bits of information than this universe contains. QED.
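As a back-of-the-envelope illustration (the counting helper here is hypothetical, written just for this sketch): a universe with a finite bit budget can represent at most finitely many distinct statements, while the truths of arithmetic alone are infinite.

```python
def representable_statements(n_bits):
    """Upper bound on the number of distinct statements representable
    in a universe holding at most n_bits of information: the number of
    bit strings of length 0 through n_bits."""
    return sum(2**k for k in range(n_bits + 1))

# A finite bit budget yields only finitely many representations...
print(representable_statements(10))  # 2047

# ...but there are infinitely many distinct truths to represent:
# e.g. "k < k + 1" is a separate truth for every natural number k.
```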

But the problem actually runs much deeper than that. Notice how I had to sort of twist myself into linguistic knots to cast the premises in the passive voice. I started out writing premise #1 as, "If you know P, then P is true." But that raises the question: who is "you"? The modal logic in which Fitch's proof is conducted is supposed to be a model of knowledge, but it makes no reference to any knowing entities. KP is supposed to mean "P is known" but it says nothing about the knower. So in the formalism *it is not even possible to formulate the statement*, "I know P but you don't." The formalism is also timeless. If KP is true, then it is true for all time. So it is not possible to say, "I learned P yesterday." If you start with an agent-free and time-free model of knowledge then it's hardly surprising that you end up with some weird results, because what you're reasoning about is some mathematical construct that bears no resemblance at all to the real world.

Real knowledge is *a state of an agent at a particular time*, which is to say, it is a statement about *physics*. If I say, "I know that 1+1=2", that is a statement about the state of my brain, a physical thing, and more to the point, a computational device. Hence the theory of computation applies, as does complexity theory, and my knowledge is constrained by both. So premise 3 is not only false, but it is *provably* false, at least in this physical universe.

That would be the end of it except for two things: First, it is actually possible to carry out the proof with a weaker version of the third premise. Instead of "all truths are knowable" you can instead use:

3a: There is an unknown, but knowable truth, and it is knowable that there is an unknown, but knowable truth

and still get the same result that all truths are known. That formulation seems much more difficult to dismiss on physical grounds. I'll leave that one as an exercise, but here's a hint: think about what 3a implies in terms of whether the state of knowledge in the universe is static or not. (If you really want to go down this rabbit hole you can also read up on possible-world semantics of modal logics.)

Second, there is this objection raised by Luke in our original exchange:

But one can just restrict the domain to those truths we think are knowable and re-state the entire paradox. When restricted to knowable truths, the axioms lead to the conclusion that all knowable truths are already known. Surely you don't wish to accept this conclusion?

I'm going to re-state that with a little more precision:

One can just restrict the domain of the model of your modal logic to those truths that are tractably computable. Because the proof itself is formal, it is still logically valid under a change of model domain. When restricted to tractably computable truths, the axioms lead to the conclusion that all tractably computable truths are already known.

Again, I'll leave figuring out why this is wrong as an exercise. Hint: look at the axioms of the modal logic of knowledge and think about whether or not the domain of tractably computable truths really is a valid model for them.

## 93 comments:

@Ron:

4. If ~P is a logical tautology, then P is false

Shouldn't this be "if ~P is a logical tautology, then it is impossible to know P"?

@me:

Shouldn't this be "if ~P is a logical tautology, then it is impossible to know P"?

Sorry, misremembered from the Wikipedia article. Shouldn't it actually be

"If ~P is a logical tautology, then P is not possible?"

The point is that in modal logic, "P is not possible" and "P is false" are not the same; the latter only asserts that P is false in the particular possible world being considered, while the former asserts that P is false in all possible worlds.

@Ron:

I'll leave figuring out why this is wrong as an exercise.

I don't want to post answers to the homework, but just to clarify the exercise: wouldn't it have to boil down to showing how, in a model restricted to only the tractably computable truths, at least one of the four postulates 1. through 4. that you list does not hold?

@Ron:

Instead of "all truths are knowable" you can instead use:

3a: There is an unknown, but knowable truth, and it is knowable that there is an unknown, but knowable truth

and still get the same result that all truths are known.

I'm confused about this one too. Isn't "there is a truth that is knowable, but not known" inconsistent with "all truths are known"? (Or even "all knowable truths are known"?)

@me:

Shouldn't it actually be

"If ~P is a logical tautology, then P is not possible?"

Looking at this further, I think this premise, (D), is actually where I see the first problem with the proof. As you and I are stating it here, the antecedent requires not P to be a tautology--which to me means not P must follow solely from the properties of the logical connectives.

The Wikipedia article states the antecedent as "if P can be proven false without assumptions", which does not appear to be the same thing, at least as it is used in the proof given there. In the proof, rule (D) is used to go from line 6 to line 7, but the proposition whose negation appears as line 6 was not proven solely from the properties of logical connectives: premises (A) and (B) had to be used to prove it. Don't those count as "assumptions"? If they do, then the use of premise (D) to go from 6 to 7 seems unjustified. But if (A) and (B) don't count as "assumptions", then how are "assumptions" being defined?

Restating the step from line 6 to line 7 in ordinary language gives another way of seeing the issue. Line 6 reads as

"It is not known that P is both true and unknown."

Line 7 reads as

"It is not possible to know that P is both true and unknown."

It seems like this step has already done the important work, by conflating "it is not known" and "it is not possible to know". And it seems to me that that step is not justified for the proposition given.

So I think premise (D) needs to be restricted to actual tautologies, i.e., propositions which truly follow purely from the properties of logical connectives; allowing it to be used as it is used in the proof here seems to me to be making it too strong.

> Shouldn't it actually be "If ~P is a logical tautology, then P is not possible?"

Right. My mistake. ("P is false" follows from "P is not possible", but, as you correctly point out, they are not the same.)

> I don't want to post answers to the homework, but just to clarify the exercise: wouldn't it have to boil down to showing how, in a model restricted to only the tractably computable truths, at least one of the four postulates 1. through 4. that you list does not hold?

Not necessarily. It is also possible that one of the rules of inference in the modal logic you're using is no longer valid.

> I'm confused about this one too. Isn't "there is a truth that is knowable, but not known" inconsistent with "all truths are known"?

Well, yeah. That's why it's a paradox.

> The Wikipedia article states the antecedent as "if P can be proven false without assumptions", which does not appear to be the same thing

That's just the definition of "tautology": something which can be proven true without assumptions.

> the proposition whose negation appears as line 6 was not proven solely from the properties of logical connectives: premises (A) and (B) had to be used to prove it. Don't those count as "assumptions"?

No, because by the time you get to line 6 those assumptions have been discharged. Assumptions that are made for a reductio proof are provisional. They have a "lexical scope" that ends at the last step.

@Ron:

It is also possible that one of the rules of inference in the modal logic you're using is no longer valid.

Ok, but how are those rules of inference used in the proof given in the Wikipedia article? I don't see them playing any role at all. The article itself says: "The proof uses minimal assumptions about the nature of K and L", and I don't see any of the rules of inference of modal logic being part of those minimal assumptions. If they are, I'd like to understand how.

Well, yeah. That's why it's a paradox.

No, the point I was making is that (C'), unlike (C), appears to contradict the conclusion "all truths are known". But I see on further consideration that (C') plays a different role in the argument than (C) does. (C) is assumed as a premise and used to go from line 8 to line 9 in the proof given in the article. But (C'), if it is used, is adopted as an assumption that will end up being discharged by reductio ad absurdum. So it's not playing the same kind of role that (C) does, and the fact that it contradicts the conclusion is intended.

That's just the definition of "tautology": something which can be proven true without assumptions.

I don't think that's correct unless "assumptions" means "anything other than the properties of logical connectives". As I said, my understanding of "tautology" is that it is a statement that is true purely because of its logical form, without having to make any assumptions about the truth or falsity of individual propositions that appear in it. Premises (A) and (B) in the Wikipedia article are assumptions about the truth or falsity of particular propositions, and line 6 in the proof requires those assumptions; the proposition in line 6 is not true purely because of its logical form.

Even if you didn't actually mean that by "tautology", what I am proposing is that premise (D) *should* be interpreted to require that meaning of "tautology", i.e., you can only deduce not Lp from not p if not p is a tautology in the sense I just gave, i.e., true purely because of its logical form.

by the time you get to line 6 those assumptions have been discharged.

The assumption made in line 1 has been discharged, yes. But (A) and (B) have not; and what I am proposing is that they *should* count as "assumptions", so that the step from line 6 to line 7 is not a valid application of (D) as I am proposing it should be interpreted.

> Ok, but how are those rules of inference used in the proof given in the Wikipedia article? I don't see them playing any role at all.

Both I and the Wikipedia article are playing a little fast and loose with our terminology and notation, particularly with the word "assumption." (The Plato article has a more precise presentation.) This is because under certain circumstances (which happen to be very common) there is an equivalence between a rule of inference:

P ⊢ Q

and a conditional proposition

P -> Q

The latter is a formal statement *within* the logic, while the former is a meta-logical statement rendered in fancy symbols that simply means, "If P is a theorem then Q is a theorem." (Sometimes you will see this written as ⊢P ⇒ ⊢Q, but this can get confusing. In particular, it is important not to conflate ⇒ with ->. They are two different symbols that have very similar but not identical meanings.)

P⊢Q and P->Q are not equivalent, but they are very closely related. In particular, if you accept modus ponens:

P, P->Q ⊢ Q

(informally: if P is a theorem and P->Q is a theorem then Q is a theorem) and conditional proof:

P

---

Q ⊢ P->Q

(informally: if by assuming P is a theorem you can show that Q is a theorem then P->Q is a theorem) then there is a formal equivalence between P ⊢ Q and P -> Q, i.e. the theorems of a logic that includes MP and CP and P ⊢ Q are exactly the same as the theorems of a logic that includes MP and CP and has P->Q as an axiom.
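Schematically, the two directions of that equivalence are just MP and CP:

```latex
\underbrace{\frac{P \qquad P \to Q}{Q}}_{\text{MP: from } P \to Q \text{ recover } P \,\vdash\, Q}
\qquad\qquad
\underbrace{\frac{\begin{array}{c}[P]\\ \vdots\\ Q\end{array}}{P \to Q}}_{\text{CP: from } P \,\vdash\, Q \text{ recover } P \to Q}
```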

Wikipedia presents its first three "modality rules" in the form of three axioms and one inference rule. It could have presented them all as inference rules. (Rule D cannot be converted to an axiom because there is no way to say "without assumptions" within the logic. Within the logic, a string is either a theorem, or it is not.)

I apologize for adding to the confusion by calling these "assumptions" in the OP. They are assumptions in the sense that you have to informally assume that the informal renderings of the formal statements are in some sense true (or at least plausible) otherwise the puzzle isn't very interesting.

Note that the proof also "assumes" another rule of inference (called reductio):

P

---

Q&~Q ⊢ ~P

Informally: if by assuming P is a theorem you can derive a contradiction, then ~P is a theorem.

You can prove that all of these things are "equivalent" in some sense, i.e. adding reductio to a logic that includes some minimal repertoire of other rules of inference (I forget exactly what those are) doesn't change its theorems, so in some sense reductio is just a "macro" that shortens proofs.

All of these things are generally tacitly assumed because being really rigorous about all this gets tiresome rather quickly, and detracts from the main point, which is that modal logic is just not a good model of what happens in the real world when some agent knows something.

> But (C'), if it is used, is adopted as an assumption that will end up being discharged by reductio ad absurdum.

No, C' really is just an axiom that means more or less what the informal rendering in the article says (there's a little bit of a problem with the scoping of the existential variable, but that's getting too deeply into the weeds). The fact that you can derive a contradiction from it simply means that (as noted in the article) the axiom is false, i.e. its negation is a theorem, and hence it is not the case that there exists an unknown but knowable truth.

If you think about it, this actually makes perfect sense because standard modal logic has no notion of time. In a timeless world where nothing ever changes, if a truth is unknown then it is in fact unknowable.

@Ron:

Wikipedia presents its first three "modality rules" in the form of three axioms and one inference rule. It could have presented them all as inference rules.

Yes, I see that.

If you think about it, this actually makes perfect sense because standard modal logic has no notion of time. In a timeless world where nothing ever changes, if a truth is unknown then it is in fact unknowable.

So basically, your position is that, if we try to construct a model that includes the notion of time, it won't satisfy all of the modal logic rules of inference?

> So basically, your position is that, if we try to construct a model that includes the notion of time, it won't satisfy all of the modal logic rules of inference?

Well, yes, obviously. You don't even need Fitch's paradox to see that. As I pointed out in the OP, in standard modal logic you can't even *say* "A knows P but B does not" or "A learned P at time T" or "A knew P but then forgot it."

The question of what happens when you try to extend modal logic to include agency and time turns out to be really hard. It's an open research issue (and widely believed to be AI-complete). The question on the table here is a simpler one: what happens if you try to extend modal logic just to include the (global) notions of computability and/or tractability?

@Ron:

The question on the table here is a simpler one: what happens if you try to extend modal logic just to include the (global) notions of computability and/or tractability?

Ok, but then we're back to my earlier question: if Fitch's paradox is no longer valid when we extend modal logic to include those notions, doesn't that imply that, with those notions included, at least one of the four rules (A), (B), (C), or (D) must be invalid? I say "rules" to account for your point that really those statements are not assumptions, they're rules of inference; and my point is that I don't see any of the modal logic rules of inference (the ones described in the "Epistemic modal logic" Wikipedia article you linked to) being used in the proof of Fitch's paradox. Other than (A) through (D), the only rules I see being used are conjunction elimination, reductio ad absurdum, and the classical tautology that says that "not (p & not q)" is equivalent to "p -> q". None of those are modal logic rules of inference. So even if the modal logic rules of inference turn out to be invalid if you extend the model to include computability and/or tractability, how does that affect the proof of Fitch's paradox?

> if Fitch's paradox is no longer valid when we extend modal logic to include those notions, doesn't that imply that, with those notions included, at least one of the four rules (A), (B), (C), or (D) must be invalid?

...

> even if the modal logic rules of inference turn out to be invalid if you extend the model to include computability and/or tractability, how does that affect the proof of Fitch's paradox?

Ah, I think I see why you're confused. My fault. I wrote:

> Not necessarily. It is also possible that one of the rules of inference in the modal logic you're using is no longer valid.

But A-D *are* the modal rules of inference.

I should have said: "Not necessarily. There are other possibilities, like that propositional or first-order logic might break." (The fact that you can encode the Peano axioms as first-order logic should actually make you take that second possibility fairly seriously.)

But yes... if you find yourself able to prove something like, "If P is tractably computable then P is known" (or some variation on that theme) and you don't believe that this is actually true (it pretty clearly isn't), then something in your reasoning is failing to faithfully model reality. The puzzle is to figure out what that is.

@Ron:

A-D *are* the modal rules of inference.

I should have said: "Not necessarily. There are other possibilities, like that propositional or first-order logic might break."

Ah, ok, got it.

@Ron:

First, thanks for paying this a bit more attention.

> I'm going to re-state that with a little more precision:

>

> >> One can just restrict the domain of the model of your modal logic to those truths that are tractably computable. Because the proof itself is formal, it is still logically valid under a change of model domain. When restricted to tractably computable truths, the axioms lead to the conclusion that all tractably computable truths are already known.

>

> Again, I'll leave figuring out why this is wrong as an exercise. Hint: look at the axioms of the modal logic of knowledge and think about whether or not the domain of tractably computable truths really is a valid model for them.

Not being schooled in formal logic, the way I think about this is via the difference between complex numbers and reals: if I have an arbitrary polynomial and presuppose that I know all its roots, I cannot simultaneously insist that the domain is only the reals. However, if I somehow guarantee that all the roots are real, it is consistent to restrict the domain to the reals—even though the reals are not algebraically closed.

I see your requiring a complete model restricted to tractable truths as tantamount to requiring that the domain be algebraically closed. And yet, if the p in steps 1. and 8. at WP: Fitch's paradox of knowability § Proof is guaranteed to be tractable, where does the equivalent of a possibly complex root enter? You objected to the following rule:

(C) p → LKp — all truths are knowable.

The *only* place that rule is applied is here:

8. Suppose p & ¬Kp

9. LK(p & ¬Kp) from line 8 by rule (C)

So, unless somehow we can have this situation—

p is tractable

p & ¬Kp is intractable

—I don't see how I am implicitly relying on a single intractable truth. I don't care about every single theorem which can be derived in modal logic; I don't need to ensure that restricting the domain to tractable truths works for those. I just care about *this particular instance*.

I'm reminded of the invention of Fourier analysis, which mathematicians originally lambasted because it was not initially set on firm mathematical foundations. It was not established in exactly which cases you can do what Joseph Fourier did in modeling heat flow. Fourier was wrong when he stated that any function of a variable can be represented by ∑aₙ·sin(nx), but he was right in the particular cases he was studying. The claim "in general, Fourier was wrong" ⇏ "in his particular cases, Fourier was wrong". It is just *wrong* to strengthen a person's argument, find the strengthened version to be false, and thus conclude that the original argument is false. (In Fourier's case, he improperly generalized; I have done no such thing.)

@Luke:

Your analogy with polynomials is not bad, but you made one major mistake:

> if I have an arbitrary polynomial and presuppose that I know all its roots, I cannot simultaneously insist that the domain is only the reals

I don't know what you mean by "presuppose that I know all its roots" but there is absolutely nothing that stops you from restricting the domain of a polynomial to anything you want. The only thing that changes when you choose a domain that is not algebraically closed is that you lose the nice result that a polynomial of degree n has n roots. If you restrict to the reals, that result simply changes to be <= n roots. (You can get better results than that. For example, on the reals, odd-degree polynomials always have at least one root, but on the rationals they can have zero roots.)

> You objected to the following rule:

That's right, because it does not accurately model the real world if the domain of P is all truths. But if the domain of P is all tractable truths, then it DOES accurately model the real world -- that's the *definition* of tractability!

But when you do this, something else breaks. It's not something in the *logic*. The proof is still sound. It has to be -- nothing in the formal logic has changed, only our model, our interpretation of what the symbols *mean* has changed. So there are only two possibilities:

1. The conclusion is in fact correct in our world when you restrict the domain to tractable truths, or

2. Something about the model is false in our world

Hint: it's #2.

Just to try to avoid some confusion:

> 2. Something about the model is false in our world

Make that:

2. Something about the model is STILL false in our world EVEN IF you restrict the domain to tractable truths

@Ron:

> I don't know what you mean by "presuppose that I know all its roots"

I meant presuppose that for polynomial of degree N, I know N roots in my chosen domain D. If I say that D = ℝ, then I have a potential contradiction without somehow ensuring that all of those roots are in ℝ.

> But when you do this, something else breaks. It's not something in the *logic*. The proof is still sound.

That's not what you led me to believe in our email chain where you declared me "bordering on uncoachable". And I think you were on to something: I haven't proved that 'tractable truths' can form a model that allows you to do any and all modal logic on it. Fortunately, I don't think I needed such a robust beast for the limited thing I was trying to do.

> 2. Something about the model is false in our world

I get that the model doesn't allow multiple agents and doesn't incorporate time, but I don't see why that is an immediate problem. Tractable-Fitch still destroys the distinction between "possibly I could know tractable truth p" and "actually I know tractable truth p". Formally, for tractable truth p:

(p → LKp) → (p → Kp)

When we restrict things to just one knower at just one time, this problem rears its head. This seems to imply that with the given rules of inference, it is impossible to "trace the boundaries of one's ignorance". Furthermore, with the given rules of inference, learning something is *illogical*.

This is all interesting to the extent that this logic is used in areas where there is supposed to be a genuine distinction between what is potentially knowable and what is in fact known. So for example, maybe we shouldn't always presuppose the following:

(B) K(p&q) → (Kp&Kq) — knowing a conjunction implies knowing each conjunct.

> I meant presuppose that for polynomial of degree N, I know N roots in my chosen domain D. If I say that D = ℝ, then I have a potential contradiction without somehow ensuring that all of those roots are in ℝ.

That still makes no sense. You originally said:

> if I have an arbitrary polynomial and presuppose that I know all its roots, I cannot simultaneously insist that the domain is only the reals

But of course you can. Consider x^2 + 1 = 0. It has two complex roots, i and -i. If I want to restrict the domain to the reals then it has no roots.
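That example can be checked numerically (`quadratic_roots` is a hypothetical helper written just for this illustration):

```python
import cmath

def quadratic_roots(a, b, c):
    """Both roots of a*x^2 + b*x + c = 0 over the complex numbers."""
    d = cmath.sqrt(b * b - 4 * a * c)
    return [(-b + d) / (2 * a), (-b - d) / (2 * a)]

roots = quadratic_roots(1, 0, 1)  # x^2 + 1 = 0
real_roots = [r for r in roots if abs(r.imag) < 1e-9]

print(len(roots))       # 2: two roots (i and -i) over the complex numbers
print(len(real_roots))  # 0: restricted to the reals, no roots at all
```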

> That's not what you led me to believe in our email chain where you declared me "bordering on uncoachable".

I don't want to go back and re-hash all that, but one of your problems is that you are very sloppy in your phraseology, and this makes it very easy to misinterpret what you say (see above). It wasn't clear to me at the time that you wanted to restrict the model domain. I thought you wanted to try to prove something like:

P & TRACTABLE(P) -> KP

> Tractable-Fitch still destroys the distinction between "possibly I could know tractable truth p" and "actually I know tractable truth p".

That's right.

> When we restrict things to just one knower at just one time, this problem rears its head.

What problem? It's simply true: everything you could possibly know at a given time, you do in fact know at that time.

@Luke:

So, unless somehow we can have this situation—

p is tractable

p & ¬Kp is intractable

Which, indeed, would be a situation in which the logical argument would fail. And I don't see what is impossible or even implausible about the condition. Certainly you have given no proof that such a situation cannot occur.

@Peter:

That's actually not a valid criticism. Since Luke is the one defining the domain, he gets to define "tractable" however he likes.

In fact, the whole "tractability" thing kind of becomes a red herring when you apply it to the model domain (as opposed to trying to model it within the logic, which is what I originally thought Luke was proposing). You could apply *any* predicate to the model domain and get the same result. Fitch's "paradox" is "true" for all truths, all tractable truths, and even all snozulatable truths, so long as Fitch's axioms hold for snozulatable truths, and are themselves snozulatable truths.

But of course, none of that is particularly interesting. What is interesting is whether the domain of snozulatable truths has any bearing on reality, and to figure *that* out you have to get a more detailed grip on what snozulatable (or tractable) actually means, as well as what it actually means to "know" something. And *that* is where things start to fall apart for Fitch.

Fitch's "paradox" is kind of like non-euclidean geometry. It's true for some kinds of worlds that we can imagine. It just isn't true of the world we happen to live in. The puzzle is simply to identify, for any given variant of Fitch, where the mismatch is between what it models and our reality. For the original version, that mismatch was the assumption that all truths are knowable (because in our world they are not). You can "fix" that by restricting the model domain to tractable truths, which by some suitable definition of "tractable" make it so that they are all knowable. And then the problem shifts somewhere else (like time).

It's just not that complicated. The bottom line is that not all mathematical models actually apply to the real world, even if they are based on premises that might seem intuitively plausible.

@Ron:

That's actually not a valid criticism. Since Luke is the one defining the domain, he gets to define "tractable" however he likes.

But of course, none of that is particularly interesting. What is interesting is whether the domain of snozulatable truths has any bearing on reality.

I was assuming that by "tractable" Luke was trying to capture some predicate that actually has a bearing on reality. For a predicate that does have a bearing on reality, I don't think that all of the propositions appearing in the proof will fall within the domain covered by the predicate.

And then the problem shifts somewhere else (like time).

I'm actually not convinced that leaving out time, in itself, is the root of the problem. After all, it's not difficult to include time, and even multiple agents, in the definitions of the modal operators. For example:

Define a family of "knowledge" operators K(x, t) such that K(x, t)p means that agent x knows proposition p at time t. Then the obvious way to define the modal operator K that appears in Fitch's proof is to define Kp as the proposition: Ex,Et:K(x, t)p--i.e., proposition p is known by some agent at some time.

The problem comes with trying to differentiate this operator from the modal operator LK, since the most obvious way to define LK--the "knowable" operator--is exactly the same as the definition I just gave above for K. In other words, the implicit definition of "knowable" that is doing the work behind the scenes is: "a proposition p is knowable if it is known by some agent at some time". And of course, this makes "knowable" equivalent to "known" once you actually look under the hood.

@Peter:

> I was assuming that by "tractable" Luke was trying to capture some predicate that actually has a bearing on reality.

I don't think you can assume such things. Surely it cannot have escaped your notice that Luke's thoughts are often detached from reality.

Also, even within the confines of reality, "tractable" has a wide range of possible meaning.

> the obvious way to define the modal operator K that appears in Fitch's proof is to define Kp as the proposition: Ex,Et:K(x, t)p--i.e., proposition p is known by some agent at some time.

> the most obvious way to define LK--the "knowable" operator--is exactly the same as the definition I just gave above for K.

Possibility is usually defined in terms of possible worlds. LK would mean: in some possible world (not necessarily our world) some agent knows P at some time. Then Fitch becomes: if P is tractable, then some agent in *this* world will know P.

> this makes "knowable" equivalent to "known" once you actually look under the hood.

Yes, I think this is exactly right.

One other point that hasn't been brought up yet in the context of tractability: take a look at premise D:

~P ⊢ ~LP

and think about what this means when you substitute KQ for P. (And also observe that in Fitch's proof, this is exactly how premise D is in fact used.)
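Writing out that substitution (this is the move from line 6 to line 7 in the proof):

```latex
\text{Rule (D):}\quad \lnot P \;\vdash\; \lnot L P
\qquad\xrightarrow{\;P := KQ\;}\qquad
\lnot KQ \;\vdash\; \lnot L KQ
```

That is, whenever "Q is known" can be refuted without assumptions, Q is unknowable.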

@Ron: (1/3)

> Ron: Since Luke is the one defining the domain, he gets to define "tractable" however he likes.

> Peter Donis: I was assuming that by "tractable" Luke was trying to capture some predicate that actually has a bearing on reality.

> Ron: I don't think you can assume such things. Surely it cannot have escaped your notice that Luke's thoughts are often detached from reality.

The person whose thoughts are currently detached from reality is you, Ron. Here's what I wrote in the previous thread:

> Luke: Your objection to WP: Fitch's Paradox of Knowability § Proof was that (C) w → LKw, aka "all truths are knowable", is false. (I changed the variable name for clarity, below.) Some truths are uncomputable, after all. You mentioned Chaitin's constant. Other truths are computable but not tractable; it takes time and energy to know, and you believe the universe to be finite. But all of this is only relevant if the w used for (C) falls into either class. If in fact any w used is tractable (say, can be computed within the next 200 years, barring civilization collapse), then I don't see how it is a problem.

To make it blindingly obvious:

> Luke: is tractable (say, can be computed within the next 200 years, barring civilization collapse)

That appears to "[have] a bearing on reality"; do you disagree?

@Ron: (2/3)

> Luke: Not being schooled in formal logic, the way I think about this is via the difference between complex numbers and reals: » if I have an arbitrary polynomial and presuppose that I know all its roots, I cannot simultaneously insist that the domain is only the reals «. However, if I somehow guarantee that all the roots are real, it is consistent to restrict the domain to the reals—even though the reals are not algebraically closed.

> Ron: I don't know what you mean by "presuppose that I know all its roots" but there is absolutely nothing that stops you from restricting the domain of a polynomial to anything you want. The only thing that changes when you choose a domain that is not algebraically closed is that you lose the nice result that a polynomial of degree n has n roots. If you restrict to the reals, that result simply changes to be <= n roots.

> Luke: I meant presuppose that for a polynomial of degree N, I know N roots in my chosen domain D. If I say that D = ℝ, then I have a potential contradiction without somehow ensuring that all of those roots are in ℝ.

> Ron: That still makes no sense. You originally said:

>

> > if I have an arbitrary polynomial and presuppose that I know all its roots, I cannot simultaneously insist that the domain is only the reals

>

> But of course you can. Consider x^2 + 1 = 0. It has two complex roots, i and -i. If I want to restrict the domain to the reals then it has no roots.

The ambiguity was this:

1. "all its roots [in all domains where roots can be found]"

2. "all its roots [in some particular domain]"

Given that the very analogy turned on a possible mismatch of domains, 2. should have been ruled out. But in case that wasn't absolutely clear, there was my use of "algebraically closed". From the Wikipedia article I linked:

>> In abstract algebra, an algebraically closed field F contains a root for every non-constant polynomial in F[x], the ring of polynomials in the variable x with coefficients in F. (WP: Algebraically closed field)

But perhaps you just completely failed to understand the analogy and did your standard thing of ignoring anything I've said which contradicts the stupid interpretation you managed to wrangle from what I wrote. Fortunately, Peter *does* understand the situation:

> Luke: So, unless somehow we can have this situation—
>
> p is tractable
> p & ¬Kp is intractable

> Peter Donis: Which, indeed, would be a situation in which the logical argument would fail. And I don't see what is impossible or even implausible about the condition. Certainly you have given no proof that such a situation cannot occur.

That's the potential domain mismatch. It could be that rule (C) has to reach into the domain of intractable truths. I don't see how, but Peter is right that I haven't proven it couldn't possibly do so.

@Ron: (3/3)

> One other point that hasn't been brought up yet in the context of tractability: take a look at premise D:

>

> ~P ⊢ ~LP

>

> and think about what this means when you substitute KQ for P. (And also observe that in Fitch's proof, this is exactly how premise D is in fact used.)

In The Knowability Paradox, Jonathan Kvanvig writes that differently:

□~p ⊣⊢ ~◇p

Wikipedia's English description captures the □ ("if p can be proven false without assumptions"), but its formalism does not. If we call Kvanvig's version (D′), the following becomes invalid:

suppose ¬K(p)
¬LK(p) by rule (D′)
¬K(p) → ¬LK(p)

The whole point is to have ¬Kp & LKp: "I don't know p but possibly I could know p."

> What problem? It's simply true: everything you could possibly know at a given time, you do in fact know at that time.

My understanding of the paradox is that it allows for you to not have carried out all the logical operations quite yet, and more logical operations do not require more time. And yet, once you can show that you possibly know p, you actually know p.

[Reposting to fix a typo]

@Luke:

> more logical operations do not require more time

I am genuinely at a loss how to respond to this, particularly in light of:

> The person whose thoughts are currently detached from reality is you, Ron.

One of us is clearly detached from reality. It might be me. As I've said before, that is not the sort of thing that yields readily to introspection.

But AFAICT, in the world I live in, computation requires both non-zero time and non-zero energy. Maybe your world is different.

> once you can show that you possibly know p, you actually know p.

In a world where computation takes zero time, this is in fact true. (But in a world like that, the whole notion of tractability doesn't really make a whole lot of sense, at least not to me.)

@Ron:

> > more logical operations do not require more time

>

> I am genuinely at a loss how to respond to this

That's surprising, given that there is no explicit notion of time during steps 1. – 11. of WP: Fitch's paradox of knowability § Proof. And I was answering from within that modal logic, where there is no time. You yourself admitted this in the OP: "time-free model". Ostensibly, what time lets you do is introduce new propositions at specified points. A fun example of this is the muddy children puzzle.

> > once you can show that you possibly know p, you actually know p.
>
> In a world where computation takes zero time, this is in fact true.

I don't see why it is necessarily true. If you haven't yet churned through all possible logical operations on what you do know, it is not obvious that once you show that possibly you know something, actually you know it. It seems eminently reasonable to suspect that you'd have to do more work to go from "possibly I know p" to "actually I know p".

> (But in a world like that, the whole notion of tractability doesn't really make a whole lot of sense, at least not to me.)

You're conflating (i) applicability of the paradox to our reality and (ii) internals of the paradox. If the paradox requires intractable/uncomputable truths to yield its result, I agree that it would be uninteresting. But if we start with some eminently knowable p in step 1. of WP: Fitch's paradox of knowability § Proof, I don't see how one could possibly need to rely on "the last ten digits of the next prime number after Graham's number" for the proof to go through. More formally:

(O) LKp → Kp

does not depend on

(C) ∀(p)(p → LKp)

Instead, it depends on the more limited:

(C′) ∀(p & LKp)((p & ¬Kp) → LK(p & ¬Kp))

We just ignore all p where ¬LKp. Unless, that is, p could be tractable (knowable) while p & ¬Kp is not.

> I don't see why it is necessarily true. If you haven't yet churned...

Because in a world where computation takes zero time, it is not possible to have "not yet churned". In such a world, everything computable can be computed in zero time, and so everything computable must actually have been computed in zero time. Any process that requires more than zero time cannot be computation there. Because of universality, it is not even clear what "a process that requires non-zero time" could even *be* in such a world, nor is it clear that time itself remains a meaningful concept. The whole notion is so far removed from physical reality that you might as well be talking about angels on the head of a pin.

@Ron:

> Because in a world where computation takes zero time, it is not possible to have "not yet churned".

The Turing formalism has zero notion of time. It does have the notion of an operation and one can count operations. But it has absolutely no idea what a second is. Shockingly, without having a notion of time in the formalism, computer scientists can still talk about some operation being "harder" than another—that is, taking more operations. They say this *within* the formalism. Even though there is no time in the formalism, computer scientists aren't mentally forced to think that everything happens all at once and thus you cannot talk about intermediate states.

Just like there is no problem talking about intermediate results in the Turing formalism without including the concept of 'time', we can talk about intermediate results in timeless modal logic. In particular, we can talk about performing enough operations to LKp, but not enough to Kp. If you must, you can assign some finite amount of time to each operation, but that would confuse matters, because the modal logic used in Fitch's paradox, as you quite rightly said, is timeless.

It seems like you're having trouble switching your thinking from "the real world" to "within the formalism". But this confuses me, because I should think you would be excellent at it. There is the possibility that you are merely fucking with me, but I almost always try to exclude that a priori.

> The whole notion is so far removed from physical reality that you might as well be talking about angels on the head of a pin.

Then I would say the same about Turing machines, because they cannot understand time, but instead only number of operations.
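The step-counting point can be made concrete with a minimal Turing-machine interpreter (a sketch of my own; the unary-increment program and the encoding are invented for illustration). The formalism orders configurations and counts transitions, but nothing in it maps a transition to seconds:

```python
# Minimal TM interpreter that returns a step COUNT, not a duration.
def run_tm(transitions, tape, state="start", max_steps=10_000):
    tape = dict(enumerate(tape))  # sparse tape; blanks read as "_"
    head, steps = 0, 0
    while state != "halt" and steps < max_steps:
        symbol = tape.get(head, "_")
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
        steps += 1
    return steps

# Unary increment: scan right over 1s, write a 1 at the first blank, halt.
prog = {
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("halt", "1", "R"),
}
print(run_tm(prog, "111"))  # 4 operations -- a count, not a time
```

The interpreter knows nothing about wall-clock time; "harder" just means a larger return value.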

@Luke:

> you cannot talk about intermediate states

> there is no problem talking about intermediate results

Which is it?

> Then I would say the same about Turing machines, because they cannot understand time, but instead only number of operations.

Yes, of course. Turing machines are a mathematical model. But if you want to apply that model to the real world you have to introduce time because in this world, allowing time to pass is the only way to obtain distinct states. That is the very essence of time.

(Aside, something I meant to point out earlier but forgot: in a world where computation takes zero time, the halting problem is solvable: start the computation, wait one picosecond. If the computation has not halted by then, it never will. This would be an even bigger win than proving that P=NP. You could prove all mathematical theorems in an arbitrarily short time. That is pretty much the essence of Fitch's result.)

@Ron:

You excised just the bold:

> Luke: Even though there is no time in the formalism, computer scientists aren't mentally forced to think that everything happens all at once and thus **you cannot talk about intermediate states**.
>
> Just like **there is no problem talking about intermediate results** in the Turing formalism without including the concept of 'time', we can talk about intermediate results in timeless modal logic.

> Ron: Which is it?

Ummm, what? I'm saying that just like you can talk about intermediate states within the Turing machine formalism, you can talk about intermediate states within the modal logic used in Fitch's paradox. Were you seriously confused between:

> Luke′: Even though there is no time in the formalism, computer scientists aren't mentally forced to think that { everything happens all at once and thus you cannot talk about intermediate states }.

and

> Luke″: Even though there is no time in the formalism, computer scientists aren't mentally forced to think that { everything happens all at once } and thus { you cannot talk about intermediate states }.

? Luke″ makes no sense: if you aren't forced to think that { everything happens all at once }, then { you cannot talk about intermediate states } is obviously wrong. Will you be honest and tell me whether you're just fucking with me, Ron? I don't see how toying with people is at all compatible with what you've professed to care about, but this seems like such an obvious, creationist-style quote mine.

> But » if you want to apply that model to the real world you have to introduce time « because in this world, allowing time to pass is the only way to obtain distinct states.

Not all rational justifications require any appeal to time whatsoever, so » this « seems patently false. More importantly, what I was objecting to was this:

> Ron: What problem? It's simply true: everything you could possibly know at a given time, you do in fact know at that time.

This is an utter mismatch to the modal logic used in Fitch's paradox. There, it is surprising (at least to some people, including yours truly) that once you've performed enough logical operations to LKp, you've also performed enough logical operations to Kp. You appear to be jumping between the formalism and our reality in a willy-nilly fashion.

> … in a world where computation takes zero time …

You're introducing an infinity which seems utterly and entirely irrelevant to Fitch's paradox. It's just a giant red herring.

Within the formalism, there is no time. Time just doesn't exist there. There is no such thing as "all at once", because that is a temporal concept.

@Luke:

> Were you seriously confused

Yes. I parsed it as:

(computer scientists aren't mentally forced to think that everything happens all at once) and thus you cannot talk about intermediate states

Because you said:

> The Turing formalism has zero notion of time.

And that is not true. Turing machine states are ordered sequences. Furthermore, the TM formalism is not an abstract one. It is explicitly a model of an idealized but nonetheless *physical* machine with a *physical* tape that actually moves in space, which (qua Einstein) requires time.

Modal logic has none of that. Its model is static.

@Ron:

> Yes. I parsed it as:

>

> > (computer scientists aren't mentally forced to think that everything happens all at once) and thus you cannot talk about intermediate states

Yes, and that makes no sense. Whenever I read something someone else says and it makes no sense to me, I first try and see if there's a nearby, plausible reading which does make sense. And as I showed, there was such a reading. Are you either unwilling or unable to do this?

> Because you said:

>

> > The Turing formalism has zero notion of time.

>

> And that is not true. Turing machine states are ordered sequences.

They still don't have a notion of *time*. Timeless modal logic also has "ordered sequences". It doesn't have a notion of *time*. There is no obvious mapping between # of operations and # of seconds.

> Furthermore, the TM formalism is not an abstract one. It is explicitly a model of an idealized but nonetheless *physical* machine with a *physical* tape that actually moves in space, which (qua Einstein) requires time.

I have no idea what you mean by 'abstract', but an infinite tape doesn't seem 'realistic' to me. As to the modal logic used in Fitch's paradox, are you going to tell me that it is not a sufficiently good model for *any* human reasoning that people go about in their day-to-day lives?

Another tack is this: any actual execution of the modal logic used in Fitch's paradox also "requires time". And yet, it is still quite right to say that the formalism itself does not have any concept of time. Nevertheless, we can talk about # of operations and compare different #'s of operations—just like you can do with Turing machines.

> Yes and that makes no sense.

Indeed, but you say a lot of things that make no sense, so this was entirely consistent with your general behavior. For example:

> any actual execution of the modal logic used in Fitch's paradox also "requires time"

There is no such thing as an "execution of a modal logic."

Also:

> an infinite tape doesn't seem 'realistic' to me

Really? Why not? Isn't God infinite? Why couldn't He make an infinite tape?

BTW, the tape isn't actually infinite, just unbounded. Not the same thing. In fact, the very first sentence of Turing's paper is "The "computable" numbers may be described briefly as the real numbers whose expressions as a decimal are calculable by FINITE means." (Emphasis added.) If you're going to discuss Turing, you really ought to read him.

@Ron:

> Indeed, but you say a lot of things that make no sense

If you consider the following a good example, that says a lot about whether your judgment on this matter is in the slightest bit trustworthy.

> There is no such thing as an "execution of a modal logic."

First, I actually wrote "execution of the modal logic used in Fitch's paradox". Second, from reading what you wrote, I think a normal person would disbelieve that Prolog exists. Unless there's some logical or physical reason that the modal logic employed in Fitch's paradox cannot be put into a programming language? Maybe when Prolog "executes", it's actually a homunculus behind the scenes? Is Linh Anh Nguyen's The Modal Logic Programming System MProlog just nonsense?

> Also:

>

> > an infinite tape doesn't seem 'realistic' to me

>

> Really? Why not? Isn't God infinite? Why couldn't He make an infinite tape?

When you have to bring in God to make something 'realistic', you know you have a problem. (You've destroyed the word 'realistic'—it means anything, now.) I don't recall ever doing something like this to you (in fact I say that understandings of God must have correlates in this reality to be trustworthy at all).

> BTW, the tape isn't actually infinite, just unbounded. Not the same thing.

Correction noted. (When I think "infinite tape", I just think "you can always write more in either direction". So it means the same to me. But I'll accept that it means something different to you, and use your preferred terminology.) But "unbounded" still violates the Bekenstein bound. Given that you told me you think the universe is finite, it is unrealistic *to Ron* for Turing machines to be able to violate this finitude with a greater finitude.

And this is actually just a distraction from the fact that the modal logic used in Fitch's paradox is applicable to situations which don't require indexing by agent or time. And it is obvious to the most casual observer that you can count the number of logical steps taken in modal logic just like you can count the number of operations executed by a Turing machine. There is therefore absolutely no problem saying that: it is surprising that the # of steps to yield LKp is the same as the # of steps to yield Kp†.

† If you really want to be nit-picky, it takes one additional step: application of Fitch's result that LKp → Kp. Seeing as this is obviously not additional work done toward any intuitive notion of what it takes to Kp, it does not count as "more work to Kp". I think it is rather intuitive to expect that one would need to do more non-Fitch's-paradox work to Kp than to merely LKp.

@Luke:

> I don't recall ever doing something like this to you

From private correspondence, March 3 this year:

"It's a pretty trivial deduction that if we want God and God is infinite, we must want infinitely. If we do not want infinitely, we do not want God. But the desire for infinity cannot originate in us (because we are finite); it can only occur as a call from God which we can heed or ignore."

> When I think "infinite tape", I just think "you can always write more in either direction".

Well, yes, that is what it means. But at any given point in the computation, the size of the part of the tape that has actually been used is finite. So you don't need an infinite tape. You just need an unbounded capacity to manufacture more tape as needed. Economic constraints come into play long before fundamental physical ones.

> So it means the same to me.

Well, this is part of your problem. Unbounded and infinite are not the same, and if you proceed on the assumption that they are then you will draw false conclusions. If you want people to take you seriously, you can't employ a private vocabulary. You have to use words -- especially technical words -- to mean what they actually mean.

Furthermore, the distinction between unbounded and infinite is *central* to the theory of computation. It corresponds to the difference between halting and non-halting, which is the foundation of the entire theory. So this is not a trivial mistake. It is a fundamental mistake, and everything that follows from conflating the two is sheer nonsense.

> Unless there's some logical or physical reason that the modal logic employed in Fitch's paradox cannot be put into a programming language?

As I said earlier, there is a correspondence between logic and computation. For any proof in any formal system there exists a corresponding program that halts IFF the proof is valid, and for every program that halts there is a corresponding proof in some formal system that it halts. (Note that the converse is obviously not true because if it were you could solve the halting problem.)

This does NOT mean that there is a correspondence between the inherently sequential nature of computation and proofs. There isn't. The structure of proofs is fundamentally different. That is what makes this correspondence a non-trivial result, and why Alan Turing is famous for having proved (part of) it.

In particular, the "number of steps" required to conclude a computation is a well-defined concept. It is in fact the very foundation of computational complexity theory, which is one of the most important and interesting branches of mathematics ever discovered. (The question of P=?NP is part of it.) The "number of steps" in a proof is much harder to define in a coherent way, which is why proof complexity theory is not nearly as well developed, and to the extent that it is developed, it relies almost entirely on casting proofs as computations.

@Luke:

> "unbounded" still violates the Bekenstein bound.

Only if you require that the unbounded tape has to fit inside a fixed finite volume no matter how much tape is occupied. But there's no reason to require that.

@Ron: (1/3)

> Ron: Furthermore, the TM formalism is not an abstract one. It is explicitly a model of an idealized but nonetheless *physical* machine with a *physical* tape that actually moves in space, which (qua Einstein) requires time.

>

> Modal logic has none of that. Its model is static.

> Luke: I have no idea what you mean by 'abstract', but an infinite tape doesn't seem 'realistic' to me. As to the modal logic used in Fitch's paradox, are you going to tell me that it is not a sufficiently good model for any human reasoning that people go about in their day-to-day lives?

> Ron: Also:

>

> > Luke: an infinite tape doesn't seem 'realistic' to me

>

> Really? Why not? Isn't God infinite? Why couldn't He make an infinite tape?

> Luke: When you have to bring in God to make something 'realistic', you know you have a problem. (You've destroyed the word 'realistic'—it means anything, now.) **I don't recall ever doing something like this to you** (in fact I say that understandings of God must have correlates in this reality to be trustworthy at all).

> Ron: From private correspondence, March 3 this year:

>

> "It's a pretty trivial deduction that if we want God and God is infinite, we must want infinitely. If we do not want infinitely, we do not want God. But the desire for infinity cannot originate in us (because we are finite); it can only occur as a call from God which we can heed or ignore."

Erm, that isn't "realistic", as [I suspect] *you* would define the term "realistic". It is you who wanted things to be sufficiently "*physical*", which I took to be synonymous with "realistic". You were trying to say that the TM formalism is somehow closer to reality than the modal logic used in Fitch's paradox. If you're allowed to call on God to make an [actually, not potentially] infinite tape, then I'm allowed to call on God to make software which can execute the modal logic used in Fitch's paradox.

@Ron: (2/3)

> Ron: BTW, the tape isn't actually infinite, just unbounded. Not the same thing.

> Luke: Correction noted. (When I think "infinite tape", I just think "you can always write more in either direction". **So it means the same to me.** But I'll accept that it means something different to you, and use your preferred terminology.)

> Ron: Well, this is part of your problem. Unbounded and infinite are not the same, and if you proceed on the assumption that they are then you will draw false conclusions. If you want people to take you seriously, you can't employ a private vocabulary. You have to use words -- especially technical words -- to mean what they actually mean.

I suggest consulting Wikipedia:

>> In the philosophy of mathematics, the abstraction of actual infinity involves the acceptance (if the axiom of infinity is included) of infinite entities, such as the set of all natural numbers or an infinite sequence of rational numbers, as given, actual, completed objects. This is contrasted with potential infinity, in which a non-terminating process (such as "add 1 to the previous number") produces a sequence with no last element, and each individual result is finite and is achieved in a finite number of steps. (WP: Actual infinity)

I was speaking in terms of a potential infinity; you saw it as only an actual infinity. The primary way I see 'infinity' talked about is in terms of potential infinity, since the majority attitude seems to be that there are no actual infinities. That being said, I'm happy to use 'unbounded' to be less ambiguous.

Addendum 1: With the possible exception of Ted Nelson, of all the humans I have ever encountered, you have the most trouble with even the slightest bit of ambiguity. It is repeatedly jarring; pretty much every other technically inclined person I've encountered has not required so much precision as you, in conversations like these. They can turn on pedantic mode, but they can also turn it off. You don't seem to be able to turn it off. It is almost as if a thing has to be compilable by your brain with no warnings or errors, or it "makes no sense". If this is how you treat all people, then I can see why you've had so little success in e.g. teaching others EE&R. I am trying to be properly technically correct with you, but your demands are extreme and so it is very hard. Instead of appreciating me saying "Correction noted." and moving on, you have to rub things in. That just slows things down further.

Addendum 2: You're happy to not use technical words according to their meanings when it comes to theology; see for example your use of 'total depravity'.

> Furthermore, the distinction between unbounded and infinite is *central* to the theory of computation.

The distinction between the tape being 'infinite' vs. 'unbounded' fades away if you require the number of operations to be finite. A finite number of operations means you cannot have an actual infinity—only a potential one.

@Ron: (3/3)

> Ron: There is no such thing as an "execution of a modal logic."

> Luke: First, I actually wrote "execution of the modal logic used in Fitch's paradox". Second, from reading what you wrote, I think a normal person would disbelieve that Prolog exists. **Unless there's some logical or physical reason that the modal logic employed in Fitch's paradox cannot be put into a programming language?** Maybe when Prolog "executes", it's actually a homunculus behind the scenes? Is Linh Anh Nguyen's The Modal Logic Programming System MProlog just nonsense?

> Ron: As I said earlier, there is a correspondence between logic and computation.

You seem to be moving the goalposts, from inability to execute some particular modal logic to the fact that we aren't [yet] very good at getting computers to do proofs. I'm getting the distinct sense of you being the tortoise, in this conversation.

> This does NOT mean that there is a correspondence between the inherently sequential nature of computation and proofs.

All I need is that when a proof has been computationally verified, you can compute something that stands for "amount of work required". Are you saying that is impossible? Note that I don't need to be able to compute Kolmogorov complexity for purposes of my argument. I am merely comparing what it takes to LKp and what it takes to Kp. Is there a reason that something like the 1989 paper On the number of steps in proofs is insufficient for the argument I'm making?

> The "number of steps" in a proof is much harder to define in a coherent way, which is why proof complexity theory is not nearly as well developed, and to the extent that it is developed, it relies almost entirely on casting proofs as computations.

There are 11 steps at WP: Fitch's Paradox of knowability § Proof. I will repeat what I said before:

> Luke: It is just *wrong* to strengthen a person's argument, find the strengthened version to be false, and thus conclude that the original argument is false.

@Peter Donis:

> Luke: But "unbounded" still violates the Bekenstein bound. Given that you told me you think the universe is finite, it is unrealistic *to Ron* for Turing machines to be able to violate this finitude with a greater finitude.

> Peter Donis: Only if you require that the unbounded tape has to fit inside a fixed finite volume no matter how much tape is occupied. But there's no reason to require that.

Hence why I included the second sentence, which you snipped. You can take up your "no reason to require that" with Ron.

> I was speaking in terms of a potential infinity; you saw it as only an actual infinity.

No. An unbounded quantity is *finite*. It just doesn't have an upper bound. In other words, I can't tell you ahead of time how many steps a TM will take. But *if* it halts, *then* I can tell you how many it actually took.

> You seem to be moving the goalposts, from inability to execute some particular modal logic to the fact that we aren't [yet] very good at getting computers to do proofs.

No. I'm trying to point out that there is a fundamental difference between a *proof*, which is a static entity, and the *process* of *producing* (or verifying) a proof, which is not. This fundamental difference exists and is salient *despite* the fact that there is a relationship between the static entities and the dynamic processes that produce them. It's like the difference between drawing a circle and the circle itself. They are related, but they are not the same thing. "How long does it take to draw a circle?" is a meaningful question. "How long (in terms of time) is a circle?" is not.

> I am merely comparing what it takes to LKp and what it takes to Kp

In what kind of world? A static one, or a dynamic one?

The only way that you can have LKp and ~Kp is in a world where a state can evolve from one where ~KP is true to one where KP is true, i.e. in a world where knowledge can be acquired. That happens to be a characteristic of the world we live in, but it is not a characteristic of the models of the modal logic that Fitch uses. In standard modal logic [1], either KP is true or ~KP is true, and whichever is true is true "for all time" (whatever that could mean in a world without time). That is really all there is to it.

---

[1] This is actually a "feature" of nearly all logics, not just modal ones. This property is called *monotonicity*, and it is one of the reasons that logic turns out to *not* be a good model of human reasoning or our physical world. If you want to use logic to model the interesting parts of reality then at the very least you need a non-monotonic logic.
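Monotonicity, as described in [1], can be sketched with a tiny forward-chaining closure over Horn-style rules (illustrative code; the rules are invented): enlarging the premise set can only enlarge the set of conclusions, never retract one.

```python
# Monotone consequence: derive the closure of a fact set under Horn-style rules.
def closure(facts, rules):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

RULES = [(frozenset({"p"}), "q"), (frozenset({"q", "r"}), "s")]

small = closure({"p"}, RULES)       # {'p', 'q'}
large = closure({"p", "r"}, RULES)  # {'p', 'q', 'r', 's'}
assert small <= large  # monotone: more premises, never fewer conclusions
```

A non-monotonic logic, by contrast, would allow `large` to drop a conclusion that `small` licensed, which is what modeling belief revision or knowledge acquisition requires.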

@Ron:

> > I was speaking in terms of a potential infinity; you saw it as only an actual infinity.

> No. An unbounded quantity is *finite*. It just doesn't have an upper bound. In other words, I can't tell you ahead of time how many steps a TM will take. But *if* it halts, *then* I can tell you how many it actually took.

If you're going to thumb your nose at the Wikipedia article I excerpted, that is of course your prerogative; I will remind you that the conversation actually went this way:

> Ron: BTW, the tape isn't actually infinite, just unbounded. Not the same thing.

> Luke: Correction noted.

Apparently, that wasn't enough. It might be worth asking yourself why. (I say there is a symmetry here between me saying that I've seen 'infinity' used in ways other than what you would prefer, and your "If I were to channel myself a Southern Baptist …". Perhaps you object with a variation on your "highly technical discussion" comment.) But don't go telling stories that I took forever to synchronize on terminology. It took me virtually no time at all.

> I'm trying to point out that there is a fundamental difference between a *proof*, which is a static entity, and the *process* of *producing* (or verifying) a proof, which is not.

Haven't we always been talking about the process of producing/verifying? Given that producing can result in a lot of dead ends, perhaps it's best to talk of verifying.

> > I am merely comparing what it takes to LKp and what it takes to Kp

> In what kind of world? A static one, or a dynamic one?

>

> The only way that you can have LKp and ~Kp is in a world where a state can evolve from one where ~KP is true to one where KP is true, i.e. in a world where knowledge can be acquired.

I don't see why I am disallowed from saying at step 5. of WP: Fitch's paradox of knowability § Proof that I do not yet know that p → Kp. I wonder if you've been relying on the following rule:

(E) ¬Kp → K¬Kp

As far as I know, that is not entailed by the modal logic Fitch uses. The converse is implied by rule (A). I can definitely see that being able to prove K¬Kp would make it impossible to later prove Kp in any logic I'm acquainted with.

As to your claim that adding time solves the problem: is that provably the case? Suppose that p is not known at time t but known at time t+1. Can I prove that I LKp at time t? More formally: from K¬K_t(p) and K_t+1(p), can I derive LK_t(p)? Can I prove LK_t(p), *at time t*? Or do I need to make reference to K_t+1(p)? There's also an ambiguity between "possibly I now know p but can't yet prove it" and "possibly I will know p but can prove I don't currently know p".

P.S. I just love that the glyph locations of superscript letters and numbers don't match in multiple fonts I've tried, including the one used on your blog: LKₜ₊₁ This is one reason we can't have nice things.

> I don't see why I am disallowed from saying at step 5. of WP: Fitch's paradox of knowability § proof that I do not yet know that p → Kp.

Why do you think you are disallowed from saying that? You can say it all you want. It might even be true, particularly if by "I" you mean "Luke" and not "some arbitrary person".

But that has nothing to do with Fitch's paradox, other than being a concrete example of something you could possibly know that you did not yet know. But such examples are not exactly scarce.

> As to your claim that adding time solves the problem: is that provably the case?

I didn't say that adding time solved the problem. I said adding time was *necessary*. I didn't say it was sufficient. I'm pretty sure it is sufficient, but I don't know. It's possible that someone can produce something like Fitch's paradox in a logic that properly models knowledge acquisition in the real world. I doubt it, but stranger things have happened. If someone manages to do it, I'll probably want to take a look at it. I think an attempt to prove that it's not possible would be more fruitful, but I don't care even remotely enough about it to undertake such an effort. The observation that standard models of modal logic are static certainly resolves the original paradox to *my* satisfaction. YMMV.

@Ron:

> But that has nothing to do with Fitch's paradox, other than being a concrete example of something you could possibly know that you did not yet know.

It allows me to go from not knowing (ignorance) to knowing, within the timeless modal logic used in Fitch's paradox.

> I didn't say that adding time solved the problem. I said adding time was *necessary*. I didn't say it was sufficient. I'm pretty sure it is sufficient, but I don't know. It's possible that someone can produce something like Fitch's paradox in a logic that properly models knowledge acquisition in the real world. I doubt it, but stranger things have happened.

To say time is necessary is necessarily to imply that a solution exists. It makes no sense to say, "If a solution existed it would have to involve time", if no such solution exists. If I'm 'uncoachable' because I don't assume a solution exists when I lack any such proof, then I am proudly 'uncoachable'.

@Luke:

> It allows me to go from not knowing (ignorance) to knowing, within the timeless modal logic used in Fitch's paradox.

No, it doesn't. You are a human being, a physical being made of atoms (to a reasonable approximation for this purpose). Nothing you do is "within" any logic. The only thing that exists "within" a logic are the formulas of that logic, some of which are theorems, and some of which are not, and the status of theoremhood is timeless, at least in a monotonic logic (which Fitch's is, and the vast majority of logics are). Your *knowledge* of which formulas are theorems and which are not may evolve over time, but that is ABSOLUTELY NOT the same thing as a formula changing its status from being a non-theorem to being a theorem or vice-versa.

Here's your fundamental problem:

> Not being schooled in formal logic

You really need to go back and do some very basic homework, or take a course in formal logic.

@Ron:

> The only thing that exists "within" a logic are the formulas of that logic, some of which are theorems, and some of which are not, and the status of theoremhood is timeless, at least in a monotonic logic (which Fitch's is, and the vast majority of logics are).

When you refer to non-monotonic logic, do you mean to refer to anything other than defeasible inferences? I don't see why the possibility of being wrong would be necessary to distinguish LKp from Kp more than Fitch permits. Moreover, once you admit falsehoods that are known, you violate the first rule of inference: (A) Kp → p.

I get what you say about the status of theoremhood being timeless; it is rather easy for me to see the language decided by a TM as being timeless. And yet, are you suggesting that it is logically possible that you could prove LKp at time t without referring to anything known after time t? Suppose we somehow state that new propositions can be added after time t, but which do not contradict any of the propositions held to-date. Then your "timeless" would be discharged, but surely it would be rather banal to say that any p not known and not known to be logically impossible can be possibly known. To get anything more specific would seem to require adopting some sort of induction. IIRC you don't like induction.

> Here's your fundamental problem:

>

> > Not being schooled in formal logic

>

> You really need to go back and do some very basic homework, or take a course in formal logic.

I would believe that if you hadn't apparently been assuming all along that one could get an interesting distinction between LKp and Kp by suitably altering the modal logic being used. It's almost as if you've been assuming LKp → Kp all along. :-|

@Luke:

I'm sorry but I cannot wring any coherent meaning out of your questions. As best I can tell, you are still confusing the *process* of constructing proofs with the *semantics* of the theorems proven by those proofs. But I really can't tell, and I don't have the energy to sort through your unfounded assumptions and non-standard terminology. It's just not that complicated: If P="The four-color map theorem is true" then ~KP was true before 1976 and KP has been true ever since. (And barring some unexpected loophole in the second law of thermodynamics, at some point in the future ~KP will again be true).
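The four-color example can be stated directly once the knowledge operator is indexed by time. A hedged sketch (the dictionary and function names here are purely illustrative):

```python
# Year in which each proposition became known (illustrative data).
known_since = {"four_color_theorem": 1976}

def K(p, year):
    """Time-indexed knowledge operator: K_t(p)."""
    return p in known_since and known_since[p] <= year

assert not K("four_color_theorem", 1975)  # ~KP before 1976
assert K("four_color_theorem", 2000)      # KP ever since
```

A timeless K(p) with no second argument cannot even state the difference between these two assertions.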

If you don't understand that, and if it is not obvious to you that Fitch's logic cannot even *express* those statements, let alone reason about them, then I'm afraid you are beyond my ability to help.

@Ron:

> If you don't understand that, and if it is not obvious to you that Fitch's logic cannot even *express* those statements, let alone reason about them, then I'm afraid you are beyond my ability to help.

Umm, it is trivially obvious that the logic Fitch employs does not index by time. What is *not* trivially obvious is that by adding indexing by time and agent and « vagueness here », one can support a substantial distinction between possible and actual knowledge. Sadly, I gave you the benefit of the doubt that there is such a way; that was my error and I shall own it.

> I'm sorry but I cannot wring any coherent meaning out of your questions.

You've been big on needing a non-monotonic logic. Both WP: Non-monotonic logic and SEP: Non-monotonic logic emphasize "defeasible inference". I see no reason why this is required to generate a substantial distinction between possible and actual knowledge; it would require that distinction to somehow rest on error. If you think there's some other aspect of non-monotonic logic that you think is important, please say so.

> I see no reason why this is required to generate a substantial distinction between possible and actual knowledge

You need to slice-and-dice your domain *somehow* if you want to distinguish between possible and actual *anything*. The usual way this is done is by slicing-and-dicing into "possible worlds". Introducing time and agents are just two other ways that you can choose to slice-and-dice. So you can say "P is true in some possible world (but not in others)" or "P is true at some time (but not at others)" or "P is known by some agent (but not by others)". You can also slice-and-dice along some other dimension that I have not explicitly listed, or any combination of the above. But if you don't slice-and-dice your domain at all, then there can be no distinction between being true and being possibly true.

Oh, another way you can slice-and-dice is by location. "It is raining" can be true over here but false over there.

You can also slice-and-dice by knowledge state. Suppose you flip a coin and it comes up tails. You look at the coin, but Jane does not. "The coin is possibly heads" is false relative to your knowledge state, but true relative to Jane's. Note that this is COMPLETELY DIFFERENT and has NOTHING WHATSOEVER TO DO with Fitch's K modal operator, notwithstanding that the English word "knowledge" is used to describe both.
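The slice-and-dice idea amounts to evaluating truth relative to an index rather than absolutely. A toy sketch (the facts table and index names are illustrative, not Fitch's formalism):

```python
# Truth relative to an index (place, time) instead of absolutely.
facts = {
    ("raining", "here", "noon"): True,
    ("raining", "there", "noon"): False,
}
PLACES = ["here", "there"]

def true_at(prop, place, time):
    return facts.get((prop, place, time), False)

def possibly(prop, time):
    """'Possibly P' = P holds at *some* index in the sliced domain."""
    return any(true_at(prop, p, time) for p in PLACES)

assert true_at("raining", "here", "noon")
assert not true_at("raining", "there", "noon")
assert possibly("raining", "noon")  # true somewhere, hence possible
```

With no index at all (an unsliced domain), `possibly` collapses into plain truth, which is the point: no slicing, no modal distinction.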

@Ron

> the universe is finite while there are an infinite number of potentially knowable truths. Hence, there must exist truths which cannot even be represented (let alone known) in this universe because their representation requires more bits of information than this universe contains. QED.

Doesn't the finite universe also make the number of knowable truths finite?

How can you say that the universe has n bits of information, but there could be truths that require n+1 bits? There aren't n+1 bits and there never will be n+1 bits.

> "I know that 1+1=2", that is a statement about the state of my brain, a physical thing, and more to the point, a computational device. Hence the theory of computation applies, as does complexity theory, and my knowledge is constrained by both.

Since when is it proven that human brains are computational devices?

Your Brain Is Not a Computer

The empty brain. Your brain does not process information, retrieve knowledge, or store memories. In short: your brain is not a computer.

Why Minds Are Not Like Computers

John Searle: Minds, Brains, and Computers

"The point is that the brain's causal capacity to produce intentionality cannot consist in its instantiating a computer program, since for any program you like it is possible for something to instantiate that program and still not have any mental states. Whatever it is that the brain does to produce intentionality, it cannot consist in instantiating a program, since no program, by itself, is sufficient for intentionality."

@Publius:

> Doesn't the finite universe also make the number of knowable truths also finite?

Yes, in *this* universe. But when I wrote "an infinite number of *potentially* knowable truths" I meant potentially knowable in *some* universe, including hypothetical ones that don't actually exist.

> Since when is it proven that human brains are computational devices?

Since 1937, when Turing demonstrated the universal Turing Machine.

In order to show that a brain (or anything else in our universe) is *not* a computer you have to show that it cannot be emulated by a Turing Machine, which is to say, that its behavior relies on some process that cannot be described mathematically. No such process has ever been demonstrated.

Searle's argument is irrelevant to Fitch's paradox because knowledge does not require intentionality or consciousness. Whether or not an entity knows something can be determined purely through its I/O behavior. Siri, for example, knows the current temperature in Buffalo (ask her!) despite the fact that she is a computer program.

>> Since when is it proven that human brains are computational devices?

@Ron

> Since 1937, when Turing demonstrated the universal Turing Machine.

> In order to show that a brain (or anything else in our universe) is *not* a computer you have to show that it cannot be emulated by a Turing Machine, which is to say, that its behavior relies on some process that cannot be described mathematically. No such process has ever been demonstrated.

Whoa, isn't it *your responsibility* to prove that a human brain can be emulated by a Turing Machine? Otherwise, if I have to disprove it, then expect a "Prove that God doesn't exist" to be tossed in your direction.

> some process that cannot be described mathematically. No such process has ever been demonstrated.

It's likely that human minds possess some degree of true randomness. True randomness is not computable.

> Searle's argument is irrelevant to Fitch's paradox because knowledge does not require intentionality or consciousness. Whether or not an entity knows something can be determined purely through its I/O behavior. Siri, for example, knows the current temperature in Buffalo (ask her!) despite the fact that she is a computer program.

"Siri" "knows" no such thing. Siri is a human language interface to a search engine. Siri and its associated programs are entirely syntactical. Minds have semantic contents. Siri only has syntax.

@Publius:

> isn't it your responsibility to prove that a human brain can be emulated by a Turing Machine?

But that is exactly what Turing's result shows. That's the reason it is so important. Anything that can be described mathematically, which includes the laws of physics, can be emulated by a TM. So unless there are laws of physics that cannot be described mathematically, the whole universe, and everything in it (including brains), is computable.

> It's likely that human minds possess some degree of true randomness.

Well, now you are making a claim for which the burden of proof is on you. But it's a very weak claim because even if it is true that brains depend on true randomness, it is easy to augment a TM with a true random number generator.

> "Siri" "knows" no such thing.

Siri can pass the Turing Test with respect to the weather in Buffalo. Whether this means she "knows" the weather in Buffalo is an argument over terminology, not facts.

And, BTW, this is the reason that the Turing Test is so important: it separates the scientific question of whether a computer's I/O behavior can be made indistinguishable from a human (it surely can) from the philosophical question of whether a computer that behaves indistinguishably from a human is "really" intelligent or not. The latter question cannot be settled scientifically. But Publius, how can you prove that *you* have "semantic content" (whatever the hell that could possibly mean) and that you are not just operating syntactically? In fact, Publius, how can you prove that you are even a human and not an AI? I've never seen you. I don't even know your real name. So why should I believe that you're a human?

@Ron:

I'm not sure how much more discussion of Fitch's paradox might be useful, but you gave me some hope with this comment:

> You need to slice-and-dice your domain *somehow* if you want to distinguish between possible and actual *anything*. The usual way this is done is by slicing-and-dicing into "possible worlds". Introducing time and agents are just two other ways that you can choose to slice-and-dice.

I get that something has to be slice-and-diced, but it's not clear that time or multiple agents is the answer. Possible worlds might be. I've been mulling over the Gettier problem, which can be framed this way: suppose you're traveling along the highway and see a barn. Is it actually a barn you see, or is it a 2-dimensional façade which merely looks like a barn? I think this maps quite nicely to the following from Fitch's paradox of knowability § Proof:

>> (B) K(p&q) → (Kp&Kq) — knowing a conjunction implies knowing each conjunct.

I'm inclined to read this as saying that if I see the *outside* correctly, then I automatically know the *inside* correctly. If I see the whole, I know what the parts are. But why should this actually be granted? Indeed, the Gettier problem seems to cast it in doubt. And the history of human thought shows that regularly, what humans *thought* was the case by observing the outside ended up not being the case. Sometimes what they thought was the case was a decent approximation for some purposes, but that's a far cry from getting the ontology correct.

What I see Fitch as demonstrating, at least through Kvanvig's lens, is that it is hard to provably have the following "middle state":

(1) it is formally possible that I know p [because e.g. p is tractable]
(2) ←
(3) I know p

As I've stated previously, there seems to be some wiggle room between "know" and "will know" in (1). Of course using Fitch's modal logic there is no time, but I haven't been able to prove that adding time (plus whatever else) fixes things. Furthermore, sometimes I don't realize I know something until I do additional logical work; there doesn't seem to be a real time dimension in such thinking. Time is incidental in a way that it isn't with e.g. the principle of induction.

I can see obtaining it via adopting some principle of induction and similarity (I know q, p is sufficiently like q, therefore I have a good chance of knowing/learning p), but IIRC you reject any principle of induction.

The reason I brought up Fitch in the first place is that you, Ron, seem to be exceedingly confident about so many things and rather unwilling to do anything I can see as "sketching the boundaries of your understanding", in a (2) sense instead of merely a (1) sense. Our knowledge of the world tapers off, it doesn't fall off a cliff. And yet, try as I may, I haven't been able to find much in the "tapering off" regime in your thought.

I still want to say that Fitch was expecting to show that you can do *some* amount of "work" to obtain (2), and then a good deal of additional work would be required to obtain (3). However, this formulation hasn't seemed to work with you and I just don't know why.

@Publius:

You might want to take a gander at this exchange and surrounding:

> Luke: If you want to say that all human thinking is Turing-powerful or less (e.g. an FSA), I will ask you for how falsifiable that model is. More precisely—because Ron and I have discussed this point at length—I want to know how restrictive the claim "all human thinking is Turing-powerful or less" is in comparison to F = GmM/r^2. In the latter case, many possibilities we can easily see happening in our reality are ruled out (to plenty of decimal places).

> The claim that human thinking is no more than Turing-powerful doesn't seem to be nearly as falsifiable.

> Ron: That claim is *trivially* falsifiable. Any computer that is more powerful than a TM would be able to perform a large number of very specific tasks that a TM can't (e.g. prove any mathematical theorem in a finite amount of time). Exhibiting a human that can perform any one of these tasks would falsify the claim.

>

> Another way to falsify it would be to exhibit a process going on inside a human brain that affects its I/O behavior but cannot be modeled by a TM. If you were to show, for example, that the human brain really was a quantum computer -- i.e. that superpositions of entangled states really did affect its I/O behavior in ways that violate the Bell inequalities -- that would falsify the claim.

@Ron:

>Ron: some process that cannot be described mathematically. No such process has ever been demonstrated.

> Publius: It's likely that human minds possess some degree of true randomness.

> Ron: Well, now you are making a claim for which the burden of proof is on you. But it's a very weak claim because even it is true that brains depend on true randomness, it is easy to augment a TM with a true random number generator.

No TM is known which can generate true randomness. Can true randomness be described mathematically? And are you seriously going to contend that the possibility that there is true randomness in reality can be dismissed?

> If I see the whole, I know what the parts are. But why should this actually be granted? Indeed, the Gettier problem seems to cast it in doubt.

A finite amount of data is always compatible with an infinite number of theories, including the theory that you are living in the Matrix. So you can't know anything with absolute certainty.

> it is hard to provably have the following "middle state":

> (2) ←

I have no clue what you could possibly mean by that.

> I haven't been able to prove that adding time (plus whatever else) fixes things

That depends on what you mean by "fix". At least adding time allows you to *say* things like, "I do not know P now, but it is possible (now) that I might come to know P in the future." That feels like progress to me.

> you, Ron, seem to be exceedingly confident about so many things and rather unwilling to do anything I can see as "sketching the boundaries of your understanding"

I tend not to write about things where I am not confident in my knowledge. And nowadays, if there's something I don't know, it's pretty easy to look it up. It is only at the limits of human understanding in general that anyone with an internet connection and a willingness to read should encounter anything more than a temporary boundary of their own understanding.

But when it comes to the logic of knowledge I just happen to be a domain expert because that was the topic of my master's thesis.

> No TM is known which can generate true randomness.

TMs are deterministic, so this statement is a bit nonsensical. It implies that some day someone might "discover" a way around this "limitation". But it's not a limitation, it's just part of how traditional TMs are defined. TMs augmented with a source of randomness have also been extensively studied, with the main result being that they do not extend the boundaries of computability at all. This is one of the many, many reasons that the Church-Turing thesis is generally believed to be true.

Randomness has been extensively studied, and it basically boils down to this: there are three kinds of randomness. There is quantum randomness, where it can be experimentally demonstrated that the outcome cannot be the result of any local state. There is chaotic randomness which results from purely deterministic processes that have the property of magnifying tiny perturbations in their initial states, and so their long-term behavior cannot be predicted without knowing the initial conditions with arbitrary precision. And there are pseudo-random functions whose outputs are indistinguishable from a random oracle, that is, there are computable functions whose output is indistinguishable from a lookup table whose entries are populated by a "true" random process.
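One way to see why a random tape adds no computational power, sketched under the simplifying assumption that the machine reads at most a fixed number of random bits: a deterministic machine can simply enumerate every possible tape. (All names below are illustrative toys.)

```python
from itertools import product

def randomized_run(coin_flips):
    """Toy 'randomized machine': its answer depends on the random
    bits it reads (here: did any flip come up 1?)."""
    return 1 in coin_flips

def deterministic_simulation(n_bits=3):
    """Enumerate all 2**n_bits possible random tapes and collect every
    outcome the randomized machine could produce -- no randomness used."""
    return {randomized_run(bits) for bits in product([0, 1], repeat=n_bits)}

assert deterministic_simulation() == {True, False}
```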

None of these things have anything at all to do with Fitch's paradox.

@Ron:

Point of clarification: if a TM can only emulate reality by augmenting it with a source of pure randomness, how does anything like the Church–Turing–Deutsch principle hold? I mean to target this precise bit in what you said:

> Ron: In order to show that a brain (or anything else in our universe) is *not* a computer you have to show that it cannot be emulated by a Turing Machine, which is to say, that its behavior relies on some process that cannot be described mathematically. No such process has ever been demonstrated.

For purposes of this point, the following is irrelevant:

> Ron: TMs augmented with a source of randomness have also been extensively studied, with the main result being that they do not extend the boundaries of computability at all. This is one of the many, many reasons that the Church-Turing thesis is generally believed to be true.

It seems to me that "TM + source of pure noise" ≠ "a universal computing device". And yes, I didn't intend this tangent to at all be relevant to Fitch's paradox.

A=NOT A

@Ron

>then the whole universe, and everything in it (including brains) is computable.

Then why have a word "computable"?

>> It's likely that human minds possess some degree of true randomness.

> Well, now you are making a claim for which the burden of proof is on you. But it's a very weak claim because even if it is true that brains depend on true randomness, it is easy to augment a TM with a true random number generator.

If you add a true random number generator to a TM, then it is no longer a TM.

Now if I can show that the brain uses randomness, then the brain is not a TM.

1. Behavioral Variability through Stochastic Choice and Its Gating by Anterior Cingulate Cortex One source of randomness used by the brain is in the Anterior Cingulate Cortex.

2. Humans can consciously generate random number sequences: a possible test for artificial intelligence. Humans can generate true random number sequences -- a capability that cannot be simulated by a TM.

3. Dreams

QED

This sentence is false

>> "Siri" "knows" no such thing.

> Siri can pass the Turing Test with respect to the weather in Buffalo. Whether this means she "knows" the weather in Buffalo is an argument over terminology, not facts. And, BTW, this is the reason that the Turing Test is so important: it separates the scientific question of whether a computer's I/O behavior can be made indistinguishable from a human (it surely can)

Turing came up with the "Imitation Game" at the dawn of computer science, before computers existed. Turing felt that human conversation was one of the more "intelligent" things a machine could do. Our notions of what constitutes intelligent behavior have changed considerably since then.

Turing also had no idea that human conversation could be simulated by using natural language processing (NLP). That can be used to make chatterbots (such as Eugene Goostman) that simulate human interaction. These programs, however, don't do any "thinking" at all -- they are just quasi pre-programmed responses.

The "Turing test" is a behavioral criterion. However, Behaviourism is in decline as a psychological model. It also only covers a portion of human and intelligent behavior.

> from the philosophical question of whether a computer that behaves indistinguishably from a human is "really" intelligent or not. The latter question cannot be settled scientifically.

What is of interest is whether an AI can demonstrate cognitive ability. The Turing Test also fails to measure self-awareness in a machine. For example, pigs are said to be very smart -- but we wouldn't use a Turing Test to determine that.

If the question of whether a machine is "really" intelligent or not cannot be settled scientifically, well then boo hoo. The problem is hard. Suck it up Nancy.

> In fact, Publius, how can you prove that you are even a human and not an AI? I've never seen you. I don't even know your real name. So why should I believe that you're a human?

What have you heard?

@Publius:

> Then why have a word "computable"?

Because some things are not computable, most notably, the halting problem. That in turn opens up a vast landscape of useful corollaries. The idea of computability is one of the most intellectually productive ones ever invented.
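The halting problem's undecidability comes from the standard diagonal argument, which can be sketched in a few lines (`halts` here is a hypothetical total predicate, which is exactly the thing that cannot exist):

```python
def make_diagonal(halts):
    """Given any claimed halting-decider halts(program, input),
    build a program that does the opposite of what halts predicts
    when run on itself."""
    def diagonal(program):
        if halts(program, program):
            while True:   # predicted to halt, so loop forever
                pass
        return "halted"   # predicted to loop, so halt
    return diagonal

# Any candidate decider is wrong somewhere. E.g. one that always
# predicts "loops forever":
claims_loop = lambda program, x: False
d = make_diagonal(claims_loop)
assert d(d) == "halted"   # it halted, refuting the prediction
```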

> If you add a true random number generator to a TM, then it is no longer a TM.

A TM+RNG cannot compute anything that a TM without an RNG cannot. They are computationally equivalent. Whether you want to attach a different label to such a device is just quibbling over terminology.

> Now if I can show that the brain using randomness, then the brain is not a TM.

It's easier than that to show that the brain is not a TM: the brain does not have an infinite tape. So it is true that brains are not TMs. They are, in fact, strictly *less* powerful than TMs. Our entire universe is strictly less powerful than a TM.

But whether or not the brain *is* a TM (whatever that means) is not what matters. What matters is whether a brain can be *emulated* by a TM. And, barring some major new discovery, there is no doubt that it can.

> Turing also had no idea that human conversation could be simulated by using natural language processing (NLP).

Of course he did. Turing was not an idiot. He knew that human language could be simulated by a TM. He probably even knew that the techniques that would some day be used to do it would be called "Natural Language Processing" (what else would you call it?).

> The "Turing test" is a behavioral criterion. However, Behaviourism is in decline as a psychological model. It also only covers a portion of human and intelligent behavior.

You do realize that a pointer to an unattributed Venn diagram does not actually count as a reference, right?

> What is of interest is whether an AI can demonstrate cognitive ability. The Turing Test also fails to measure self-awareness in a machine. For example, pigs are said to be very smart -- but we wouldn't use a Turing Test to determine that.

Then I ask you again: how *would* you demonstrate "cognitive ability" and "self-awareness"? Can you demonstrate to me that you have these qualities? If you reject the Turing Test, how could you possibly prove to me that you are not a machine?

> These programs, however, don't do any "thinking" at all

And you know this how? What makes *you* qualified to render judgement over what does and does not think? How can you prove that you are not a chatbot yourself?

In fact, I am going to advance the hypothesis that you *are* a chatbot, and I am going to make a prediction to test that hypothesis: you are never going to answer the question that I just posed. You are going to continue to avoid it, using evasive rhetorical tactics like this, which you clearly borrowed from the Eliza playbook:

> > why should I believe that you're a human?

> What have you heard?

It's not what I've heard, it is what I have failed to hear: any evidence that you are a human and not a chatbot.

@Ron: (1/2)

> A TM+RNG cannot compute anything that a TM without an RNG cannot. They are computationally equivalent. Whether you want to attach a different label to such a device is just quibbling over terminology.

I find it a bit hard not to see rank hypocrisy in this statement, given your first objection to Fitch's paradox being that there are plenty of *computable* truths which are however not *tractable*. If in fact a TM + RNG *in our universe* can tractably compute more than just a TM, then that seems relevant for discussions of *our universe* and what exists in it. If this is "just quibbling over terminology", then a great deal of your objections to me re: Fitch are "just quibbling over terminology". What is in fact the case is that you pretty frequently care about how well a given formalism matches our particle-and-field reality. That means tractability must come into play.

TM + RNG ≠ TM

> What matters is whether a brain can be *emulated* by a TM. And, barring some major new discovery, there is no doubt that it can.

If a brain can only be tractably emulated by a TM + RNG, then it cannot be tractably emulated by [implied: just] a TM. TM + RNG ≠ TM.

@Ron: (2/2)

The next step is to ask whether there's any middle ground:

(1) Turing machine (purely syntactic order)

(2) ???

(3) random number generator (free of any characterizable order)

A good candidate for (2) is the mathematician. To the extent that the mathematician can continue identifying the equivalent of Gödelian sentences and generating new formalisms which have those sentences as theorems, we have evidence against the mathematician being targetable by Gödel's incompleteness theorems. If we grant the mathematician the bare minimum for the incompleteness theorems to apply, then either the theorems do constrain the mathematician, or [s]he breaks them some other way. But if they don't apply, how can the mathematician be merely a TM or finite automaton?

By the way, a good example of the (1)/(2) dichotomy is that of electron state transitions in the atom. Is there any way to describe the transition path between one energy state and the next? Current theory says "no". Perhaps we have not discovered such a path because we presuppose that anything intelligible *must* fit in a purely syntactic system, such that the following is true:

> Ron: Anything that can be described mathematically, which includes the laws of physics, can be emulated by a TM.

But why suppose that the (1)/(3) dichotomy covers the space of logical possibilities? The actual experience of every scientist is that formalisms run out of gas after a while; the syntax does not capture all of the semantics. Philosophers of science have come up with Ceteris Paribus Laws to describe this. However, you described that concept this way in an email to me: "Ceteris Paribus is just more philosophical bullshit by people who don't understand statistics and probability." Despite the fact that Ilya Prigogine, who had to understand statistics and probability to get his Nobel Prize on "dissipative structures, complex systems, and irreversibility", wrote the following:

> > Nearly two hundred years ago, Joseph-Louis Lagrange described analytical mechanics based on Newton's laws as a branch of mathematics.[33] In the French scientific literature, one often speaks of "rational mechanics." In this sense, Newton's laws would define the laws of reason and represent a truth of absolute generality. Since the birth of quantum mechanics and relativity, we know that this is not the case. The temptation is now strong to ascribe a similar status of absolute truth to quantum theory. In *The Quark and the Jaguar*, Gell-Mann asserts, "Quantum mechanics is not itself a theory; rather it is the framework into which all contemporary physical theory must fit."[34] Is this really so? As stated by my late friend Léon Rosenfeld, "Every theory is based on physical concepts expressed through mathematical idealizations. They are introduced to give an adequate representation of the physical phenomena. No physical concept is sufficiently defined without the knowledge of its domain of validity."[35] (The End of Certainty, 28–29)

You believe the "domain of validity" of TMs to be universal. I don't see why that ought to be believed by anyone. From a history of semantics repeatedly outstripping syntax (in pure and applied domains), why should we believe that will ever cease to be the case?

@Luke:

> rank hypocrisy

I consider hypocrisy to be a very serious charge. Please do not level it lightly.

> If in fact a TM + RNG in our universe can tractably compute more than just a TM, then that seems relevant for discussions of our universe and what exists in it.

Well, yeah, *if*. But it's not the case. A TM+RNG has no computational complexity advantage over a TM because a deterministic PRF can substitute for an RNG. If it were otherwise, every computer in the world would be equipped with an RNG because those are not hard to come by.
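A minimal sketch (mine, in Python) of the substitution Ron describes: a randomized algorithm consumes a stream of random choices and is indifferent to whether that stream comes from a seeded deterministic generator or from the OS entropy pool.

```python
import random

def randomized_quicksort(xs, rng):
    # Quicksort with pivots drawn from whatever randomness source is
    # supplied; the algorithm cannot tell a PRNG from a TRNG.
    if len(xs) <= 1:
        return xs
    pivot = xs[rng.randrange(len(xs))]
    return (randomized_quicksort([x for x in xs if x < pivot], rng)
            + [x for x in xs if x == pivot]
            + randomized_quicksort([x for x in xs if x > pivot], rng))

data = [5, 3, 8, 1, 9, 2, 7]

# Seeded, fully deterministic PRNG:
print(randomized_quicksort(data, random.Random(42)))   # [1, 2, 3, 5, 7, 8, 9]

# OS entropy pool (the closest thing to a TRNG in the stdlib);
# same output, same expected running time:
print(randomized_quicksort(data, random.SystemRandom()))
```

The function names here are illustrative; the point is only that the randomness source is an interchangeable parameter.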

I'm pretty sure you and publius are both making the same mistake: you are confusing a TM+RNG with a non-deterministic TM (NTM). An NTM does indeed have a computational complexity advantage over a TM, but an NTM is emphatically NOT the same thing as a TM+RNG. In fact, an NTM is every bit as deterministic as a TM. The terminology is very confusing.

As far as we know, NTMs do not actually exist. But this is an open question (it's the P?=NP question). If you could demonstrate that the human brain is an NTM you would win a Fields medal and several Nobel prizes.
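To make the NTM distinction concrete, here is a sketch (mine) of how a deterministic TM simulates an NTM's "lucky guess": it simply tries every branch. The cost is exponential time, but there is no randomness anywhere.

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    # Deterministically explore every branch an NTM could "guess":
    # all 2^n truth assignments, in a fixed order.
    for assignment in product([False, True], repeat=n_vars):
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment
    return None

# (x1 OR NOT x2) AND (x2 OR x3); positive int = variable, negative = negation
print(brute_force_sat([[1, -2], [2, 3]], 3))  # (False, False, True)
```

An NTM accepts iff some branch accepts; the simulation above finds such a branch, just slowly, which is why P vs NP is a question about speed and not about computability.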

> The next step is to ask whether there's any middle ground:

No, because 1 and 3 are already equivalent.

> You believe the "domain of validity" of TMs to be universal. I don't see why that ought to be believed by anyone.

Because the evidence for it is overwhelming. Because if there was anything beyond TMs, someone would have exhibited it by now. Not only has that not happened, no one has even advanced a plausible idea of what such a demonstration would even look like.

The only argument that has ever been advanced against the universality of computation is the argument from ignorance.

Radioactive Decay in Ron's Brain

> > If you add a true random number generator to a TM, then it is no longer a TM.

> A TM+RNG cannot compute anything that a TM without an RNG cannot. They are computationally equivalent. Whether you want to attach a different label to such a device is just quibbling over terminology.

Wait a second, *you* are the one who suggested adding a TRNG to a TM:

> even if it is true that brains depend on true randomness, it is easy to augment a TM with a true random number generator.

*You* suggested it as a way to augment a TM so that it could simulate a human brain that uses true randomness as part of its operation -- since true randomness is not computable. But . . . a TM + TRNG is *not* a TM. Hence a TM cannot simulate a human brain.

Randomness has a lot of applications to problem solving:

1. Decision problems: you're at the grocery store looking at barbeque sauces. You narrow down the 20 choices to 5. Unable to narrow the list down further, you choose one at random.

2. Monte-Carlo tree search: used by Alpha Go to search the space of 10^170 legal board positions.
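Both uses can be sketched in a few lines of Python (an illustration of mine, not from the comment):

```python
import random

rng = random.Random()

# 1. Decision problem: break a tie among shortlisted options at random.
finalists = ["sauce_a", "sauce_b", "sauce_c", "sauce_d", "sauce_e"]
pick = rng.choice(finalists)
assert pick in finalists

# 2. Monte-Carlo sampling: estimate by random samples instead of
#    exhaustive search. Here we estimate pi from points in the unit
#    square; Monte-Carlo tree search applies the same idea to game trees.
n = 100_000
inside = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n))
print(round(4 * inside / n, 1))  # ≈ 3.1
```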

> It's easier than that to show that the brain is not a TM: the brain does not have an infinite tape. So it is true that brains are not TMs. They are, in fact, strictly *less* powerful than TMs. Our entire universe is strictly less powerful than a TM.

Admit it: a TM is your god.

> But whether or not the brain *is* a TM (whatever that means) is not what matters. What matters is whether a brain can be *emulated* by a TM. And, barring some major new discovery, there is no doubt that it can.

Human brains *transcend* TMs. After all, a human can emulate a TM with a pen and paper. TMs are limited to ℵ0 (Aleph0) states. Human brains, with randomness, have access to ℵ1 (Aleph1) states. Indeed, the human brain can experience the transinfinite, which is where we find the face of God. TMs also cannot emulate self negation, A = NOT A.
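Whatever one makes of the transfinite claims, the pen-and-paper point is concrete: emulating a TM requires only a tape, a head position, and a finite rule table. A minimal simulator (my sketch; the example machine is illustrative):

```python
def run_tm(tape, rules, state="A", pos=0, max_steps=10_000):
    # rules maps (state, symbol) -> (symbol_to_write, "L"/"R", next_state);
    # the machine halts when it reaches state "H".
    cells = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "H":
            break
        sym = cells.get(pos, "0")
        new_sym, move, state = rules[(state, sym)]
        cells[pos] = new_sym
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# A two-rule machine that zeroes out a run of 1s, then halts.
rules = {("A", "1"): ("0", "R", "A"),
         ("A", "0"): ("0", "R", "H")}
print(run_tm("111", rules))  # "0000"
```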

You are also practicing the dangerous philosophy of physicalism.

Intelligence only requires 1/2 the vote

> > The "Turing test" is a behavioral criterion. However, Behaviourism is in decline as a psychological model. It also only covers a portion of human and intelligent behavior.

> You do realize that a pointer to an unattributed Venn diagram does not actually count as a reference, right?

You do realize that some things are just illustrations and not references, right?

>> What is of interest is whether an AI can demonstrate cognitive ability. The Turing Test also fails to measure self-awareness in a machine. For example, pigs are said to be very smart -- but we wouldn't use a Turing Test to determine that.

> Then I ask you again: how *would* you demonstrate "cognitive ability" and "self-awareness"? Can you demonstrate to me that you have these qualities? If you reject the Turing Test, how could you possibly prove to me that you are not a machine?

The fact that the Turing Test is limited and inadequate to the task does not require me to propose an alternative. The Turing Test can be bad with no alternative available, one alternative available, or an infinite number of alternatives available. Which is to say, the inadequacy of the Turing Test does not depend on any other method being available.

> These programs, however, don't do any "thinking" at all

> And you know this how? What makes *you* qualified to render judgement over what does and does not think?

I possess the abilities of intelligence, consciousness, intentionality, and emotion. These qualify me to judge whether a computer program can think.

> How can you prove that you are not a chatbot yourself?

Doesn't that come up against the "you can't prove a negative" problem?

> In fact, I am going to advance the hypothesis that you *are* a chatbot, and I am going to make a prediction to test that hypothesis: you are never going to answer the question that I just posed. You are going to continue to avoid it, using evasive rhetorical tactics like this, which you clearly borrowed from the Eliza playbook:

Oh no, I'm a chatbot alright. "Duh!"

>> Turing also had no idea that human conversation could be simulated by using natural language processing (NLP).

> Of course he did. Turing was not an idiot. He knew that human language could be simulated by a TM. He probably even knew that the techniques that would some day be used to do it would be called "Natural Language Processing" (what else would you call it?)

*Sure* he did. At the dawn of computer science, and before practical computers even existed, he knew the algorithms for NLP, plus WORD2VEC, and probably PageRank as well. PageRank was especially prescient, as it requires prediction of a worldwide computer network and the protocols of the world wide web -- technologies that wouldn't be developed until decades later.

@Publius-bot:

> I possess the abilities of intelligence, consciousness, intentionality, and emotion. These qualify me to judge whether a computer program can think.

Yeah, sure. I'll bet you were just programmed to say that.

@Ron:

> > I find it a bit hard not to see *rank hypocrisy* in this statement, given your first objection to Fitch's paradox being that there are plenty of *computable* truths which are however not *tractable*.

> I consider hypocrisy to be a very serious charge. Please do not level it lightly.

I was a lot more careful than you when you said something I also consider to be a very serious charge:

> Ron: Luke thinks that he *is* closer to the truth than you and I are.

There, you (i) failed to offer any qualifications such as "I find it a bit hard not to see"; (ii) failed to cite any evidence in support of your position. When you finally conveniently found evidence, it had been pre-empted by an entire comment. If you want me to be even *more* careful, how about modeling that carefulness yourself so I can see what it looks like and how it operates?

> > If in fact a TM + RNG in our universe can tractably compute more than just a TM, then that seems relevant for discussions of our universe and what exists in it.

> Well, yeah, *if*. But it's not the case. A TM+RNG has no computational complexity advantage over a TM because a deterministic PRF can substitute for an RNG.

In all cases? Is there a proof that "PRF can substitute for an RNG" in all cases? One reason to doubt this is Distribution of Random Streams for Simulation Practitioners. You have to care about such things when you restrict yourself to tractability instead of merely talking about computability.

> I'm pretty sure you and publius are both making the same mistake: you are confusing a TM+RNG with a non-deterministic TM (NTM).

Nope; here I'm not talking about all paths being taken simultaneously. In fact, much of my thinking in quite a few of your blog post comment sections is shaped by the principle that we can only explore a tiny sliver of the possibility space, and thus should be as wise as possible in our choices.

> No, because 1 and 3 are already equivalent.

I shall wait for a proof that what is utterly distinct in formal-land is utterly indistinct in empirical-land.

> Because if there was anything beyond TMs, someone would have exhibited it by now.

I don't see why this should be believed. It took a long time for classical physics to be overturned. You seem to play rather fast-and-loose with "how many years" (contrasting example).

> Not only has that not happened, no one has even advanced a plausible idea of what such a demonstration would even look like.

That's not the impression I get from scanning Scott Aaronson's NP-complete Problems and Physical Reality. He doesn't think any bits of physical reality have actually been shown to compute NP-complete problems in P-time, but he doesn't assert the extreme you have. And he likes arguing such extremes in my experience. That buzz of nigh certainty (≈ "confident"?) produces a nice high.

> The only argument that has ever been advanced against the universality of computation is the argument from ignorance.

For someone who lauds Popper and the importance of setting out falsification criteria in order for statements to have empirical content, you certainly do ignore that when convenient.

> > I was a lot more careful than you when you said something I also consider to be a very serious charge:

> Ron: Luke thinks that he *is* closer to the truth than you and I are.

Oh, get over yourself. All I meant was that you believe that Jesus is God and Don and I don't.

> Is there a proof that "PRF can substitute for an RNG" in all cases?

That is essentially the *definition* of a PRF.

As a practical matter, the interesting questions are:

1. Do PRFs actually exist? (Yes, you can prove this. See e.g. https://www.cs.virginia.edu/~shelat/651/www/chap3.pdf)

2. Is a particular function that you want to use as a PRF actually a PRF? (That one is harder.)

But none of this matters. If you don't trust your PRF, just use a thermal RNG or a quantum RNG. *Even if* randomness is an essential component of humanity (it isn't) it is trivial to bestow this on computers.

Side note: you cannot actually prove that quantum randomness is "really random". It is a logical consequence of quantum theory, so *if* QM is true *then* quantum randomness is "really random", but you can't prove that QM is true. See:

http://www.mathpages.com/rr/s9-07/9-07.htm

Pay particular attention to the paragraph that begins, "The word 'causal' is being used here..."

> > Because if there was anything beyond TMs, someone would have exhibited it by now.

> I don't see why this should be believed.

Let me be more precise: if there were anything beyond TMs, there would be some hint of this in the data, and there isn't. Not only is there no hint of this in the data, no one has been able to even *imagine* what such a hint might look like, and for a very simple reason: anything you can *describe* can be computed by a TM. So even if there were some data that could show that the universe could not be emulated by a TM, it would be impossible to describe it. By definition, that data would be uncomputable.

> Scott Aaronson doesn't think any bits of physical reality have actually been shown to compute NP-complete problems in P-time, but he doesn't assert the extreme you have.

That's because that paper has a target audience that already knows that TMs are universal. There is no more need for Scott to explicitly say so than there is for him to explicitly say that addition is associative, that evolution is true, or that the earth is round. Everyone already knows this.

@Ron: (1/3)

> Oh, get over yourself. All I meant was that you believe that Jesus is God and Don and I don't.

I'm treating this with as much intensity as your "do not level it lightly". And contrary to your statement here, you used that as a lead-in to "This is the problem with self-deception: in order to be fully effective, it has to go infinitely meta." Furthermore, you wrote: "What makes you think that I think it's a small thing?" Your current downplaying is refuted by your previous words.

> > Is there a proof that "PRF can substitute for an RNG" in all cases?

> That is essentially the *definition* of a PRF.

One does not ensure things exist empirically by definitions.

> 1. Do PRFs actually exist? (Yes, you can prove this. See e.g. https://www.cs.virginia.edu/~shelat/651/www/chap3.pdf)

For a PRNG to be indistinguishable from a TRNG on the basis of specific computational hardware (e.g. non-quantum), with a specific mathematical toolkit (who knows what will be discovered in the next 100 years—those who design secure hashes care about these things), is far from what I requested. In fact, you're really presupposing indistinguishability based on your … Turing ontology. (Since you believe the universe is finite, perhaps I should say 'DFA ontology'.) It's a circular argument.

> But none of this matters. If you don't trust your PRF, just use a thermal RNG or a quantum RNG.

If you have the proof I requested, then it would be positively irrational to not trust your PRF. (We can assume the silicon operates how it should and preclude issues such as Meltdown and Spectre.) If you don't have the proof I requested (even though you sometimes write as if you possess it), then you don't actually have warrant to be nigh certain that the 'Turing ontology' / 'DFA ontology' is correct.

> *Even if* randomness is an essential component of humanity (it isn't) it is trivial to bestow this on computers.

Have either @Publius or I ever contested the bold? One can bestow oracles on TMs as well.

> Side note: you cannot actually prove that quantum randomness is "really random". It is a logical consequence of quantum theory, so *if* QM is true *then* quantum randomness is "really random", but you can't prove that QM is true.

Sure; true/pure randomness is the lack of order and you cannot know if you've looked for all possible sources of order. Maybe we live in a simulation based off of a PRNG and we just haven't tried a complicated enough PRNG yet. Quantum theory could be a statistical approximation of that PRNG.

@Ron: (2/3)

> Let me be more precise: if there were anything beyond TMs, there would be some hint of this in the data, and there isn't. Not only is there no hint of this in the data, no one has been able to even *imagine* what such a hint might look like, and for a very simple reason: anything you can *describe* can be computed by a TM.

I don't see why we should always expect an ultraviolet catastrophe to pop up. Furthermore, *Philosophiæ Naturalis Principia Mathematica* was published in 1687 while the ultraviolet catastrophe only really became a thing in the late 1800s. The same goes for luminiferous aether. In contrast, Alan Turing first put forth the TM formalism in 1936. It took almost 200 years for sufficiently clear empirical problems to manifest for classical physics, and yet we should expect the sufficiently clear empirical problems to manifest for the TM formalism in less than 100 years?

To the last clause, that is decidedly unempirical, as I already wrote. Here are the words of Karl Popper:

> > But I shall certainly admit a system as empirical or scientific only if it is capable of being *tested* by experience. These considerations suggest that it is not the *verifiability* but the *falsifiability* of a system is to be taken as a criterion of demarcation.*[3] In other words: I shall not require of a scientific system that it shall be capable of being singled out, once and for all, in a positive sense; but I shall require that its logical form shall be such that it can be singled out, by means of empirical tests, in a negative sense: *it must be possible for an empirical scientific system to be refuted by experience.*[3] (The Logic of Scientific Discovery, 18)

If it is impossible for experience to falsify the statement "anything in reality can be emulated by a TM", then that is not an empirical statement in Popper's parlance. And you have led me to believe you accept Popper's parlance. Perhaps you take no issue with this, but I have found you to be remarkably lax in distinguishing between the formal and empirical realms in your own thought, while being almost vicious with me when I don't maintain a sufficiently clear distinction. ("You are uncoachable.")

As to a hint of what would violate the Turing formalism, how did you so thoroughly survey the history of human thought on the matter that you can say there is none? After all, you said "I tend not to write about things where I am not confident in my knowledge." I myself have talked about analog computation with a tenured faculty member at Caltech, and specifically that professor suggested that intermediate states of the computation could have arbitrary precision. If this could somehow work (I imagine the noise floor would have to somehow self-cancel), then there'd be a violation of the Turing formalism. But one could of course always pick some granularity level to … pixelize reality, and then write a TM to describe the state transitions up to an element of noise. That is, all the order you could not capture would show up as noise. That wouldn't mean there is no further describable order.

@Ron: (3/3)

> So even if there were some data that could show that the universe could not be emulated by a TM, it would be impossible to describe it. By definition, that data would be uncomputable.

That's proven trivially false by the fact that we can theorize about having oracles—such as the halting oracle (I link for the qualifications).

> > > > You believe the "domain of validity" of TMs to be universal. I don't see why that ought to be believed by anyone.

> > > Because the evidence for it is overwhelming. Because if there was anything beyond TMs, someone would have exhibited it by now. Not only has that not happened, no one has even advanced a plausible idea of what such a demonstration would even look like.

> > That's not the impression I get from scanning Scott Aaronson's NP-complete Problems and Physical Reality. He doesn't think any bits of physical reality have actually been shown to compute NP-complete problems in P-time, but he doesn't assert the extreme you have.

> That's because that paper has a target audience that already knows that TMs are universal. There is no more need for Scott to explicitly say so than there is for him to explicitly say that addition is associative, that evolution is true, or that the earth is round. Everyone already knows this.

Given that you and Scott Aaronson think P ≠ NP, let's look at just what Aaronson says:

>> For concreteness, I will focus on a single sub-question: can NP-complete problems be solved in polynomial time using the resources of the physical universe?

> > I will argue that studying this question can yield new insights, not just about computer science but about physics as well. More controversially, I will also argue that a negative answer might eventually attain the same status as (say) the Second Law of Thermodynamics, or the impossibility of superluminal signalling. In other words, while experiment will always be the last appeal, the presumed intractability of NP-complete problems might be taken as a useful constraint in the search for new physical theories. Of course, the basic concept will be old hat to computer scientists who live and die by the phrase, "Assuming P ≠ NP, . . ." (NP-complete Problems and Physical Reality, 1–2)

If in fact physical reality can solve NP-complete problems in P-time and yet P ≠ NP, then physical reality is at least more [tractably] powerful than a Turing machine. (This leaves untouched whether it can decide what TMs cannot decide—that is, whether oracles are possible.) I don't see how to square Aaronson's use of "More controversially" with your representation.

@luke:

> Your current downplaying is refuted by your previous words.

I did not "accuse you of being a superior being." I observed that you believe that X is true, Don and I believe that X is not true, and that X is not a trivial matter. To say that you believe that you are "closer to the truth" is a fair characterization of that state of affairs, particularly in the context of the discussion in which I made it. So get over yourself.

> > > Is there a proof that "PRF can substitute for an RNG" in all cases?

> > That is essentially the *definition* of a PRF.

> One does not ensure things exist empirically by definitions.

You did not ask if PRF's existed, you asked if there is a proof that "PRF can substitute for an RNG" in all cases, and that is the question I answered.

Incidentally, I *anticipated* that you would ask if PRFs existed, and I answered that too, but apparently you didn't notice.

You are pushing the limits of my patience.

> If in fact physical reality can solve NP-complete problems in P-time and yet P ≠ NP

If physical reality can solve NP-complete problems in P-time, then P=NP. That is what P=NP *means*.

> > > *Even if* randomness is an essential component of humanity (it isn't) it is trivial to bestow this on computers.

> Have either @Publius or I ever contested the bold?

Publius has:

"Human brains transcend TMs. After all, a human can emulate a TM with a pen and paper. TMs are limited to ℵ0 (Aleph0) states. Human brains, with randomness, have access to ℵ1 (Aleph1) states."

This is, of course, sheer nonsense, but there's no point wasting time arguing with a chatbot.

> One can bestow oracles on TMs as well.

Oracles are theoretical constructs. TRNGs can actually be built in the physical world. In fact, they are a cheap commodity. The only reason they aren't ubiquitous is that they don't actually provide very much benefit.

@Ron:

> > > > > > If in fact a TM + RNG in our universe can tractably compute more than just a TM, then that seems relevant for discussions of our universe and what exists in it.

> > > > > Well, yeah, *if*. But it's not the case. A TM+RNG has no computational complexity advantage over a TM because a deterministic PRF can substitute for an RNG.

> > > > In all cases? Is there a proof that "PRF can substitute for an RNG" in all cases?

> > > That is essentially the *definition* of a PRF.

> > One does not ensure things exist empirically by definitions.

> You did not ask if PRF's existed, you asked if there is a proof that "PRF can substitute for an RNG" in all cases, and that is the question I answered.

The context was "in our universe can tractably compute". That's empirical reality. Your assertion was that an object in abstract-land, a PRF, works just fine. It was *entirely reasonable* for me to expect that this meant you think that a TRNG can always be sufficiently emulated by a PRNG for all purposes that physical reality could ever present to us. And yet, it is now clear that you know no such thing. You're mixing the formal/abstract and the empirical.

> You are pushing the limits of my patience.

Yeah, and my patience is being tested as well. It's wrong when I don't carefully separate the formal/abstract and the empirical (see the OP), but it's ok for you to do it, seemingly to your heart's content. How can you not see that as incredibly frustrating? Furthermore, if I ever fail to properly track when you've switched between one and the other, it's my fault. (Case in point, right here.) C'mon.

> > If in fact physical reality can solve NP-complete problems in P-time and yet P ≠ NP

> If physical reality can solve NP-complete problems in P-time, then P=NP. That is what P=NP *means*.

That is the case with a Turing ontology. If physical reality exhibits something more powerful (in the dimension of tractability, not computability) than Turing machines to do its thing, then it could be the case that P ≠ NP and yet physical reality can solve NP-complete problems in P-time. You are again engaged in circular reasoning, presupposing the very thing you are attempting to demonstrate via both logic and alleged lack of imagination/imagine-ability.

> > > *Even if* randomness is an essential component of humanity (it isn't) it is trivial to bestow this on computers.

> > Have either @Publius or I ever contested the bold?

> Publius has:

>

> "Human brains transcend TMs. After all, a human can emulate a TM with a pen and paper. TMs are limited to ℵ0 (Aleph0) states. Human brains, with randomness, have access to ℵ1 (Aleph1) states."

I don't see in that a claim that you can't augment a TM with a TRNG. Instead, he wrote "TM + TRNG is not a TM." Unless you can prove that PRFs are empirically realizable—and it appears you cannot—you cannot rule out that TM + TRNG ≠ TM. Now, if you presuppose a Turing ontology, then under that supposition Publius and I have contested "it is trivial to bestow this on computers"—me via my 2x "TM + RNG ≠ TM". Because in that case, TRNG = PRNG. But then we would have to talk about what is meant by 'randomness'.

> Oracles are theoretical constructs.

So? You wrote: "anything you can *describe* can be computed by a TM". Oracles can be described. You're in danger of making that sentence tautologically true, by making "*describe*" equivalent to "produce an algorithm for".

> You wrote: "anything you can *describe* can be computed by a TM". Oracles can be described.

I wasn't explicit enough here. There are, of course, things that can be described that TMs can't compute, like Chaitin's Omega. But consider: suppose I came to you with a number that I claimed to be omega expanded to some interesting number of decimal places (like a few thousand). I have received this information directly from God as proof that He exists. How would you test my claim? In fact, it's easy to prove that you cannot reliably test my claim, because if you could you could solve the halting problem! Details left as an exercise.

So let me try to reformulate this more precisely: for any experiment you can describe, and any decision procedure on the results of that experiment, there exists a TM that can produce a result for which that decision procedure will answer TRUE, assuming such a result exists. (This is trivial: just enumerate all possible data sequences until you find the right one. That's a big hint, by the way.) Therefore, there cannot exist an experiment and a decision procedure whose result is TRUE only in the case where the result was produced by a process that could not have been produced by a TM. This is why the Publius-bot cannot prove itself to not be a computer by anything it posts here.
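Ron's enumeration hint can be written out directly (a sketch of mine; the predicate is a hypothetical stand-in for an experiment's decision procedure):

```python
from itertools import count, product

def first_accepted(decision_procedure):
    # Enumerate every finite bit string, shortest first. If any string
    # is accepted, this (very dumb) TM eventually produces it -- so no
    # decision procedure can accept only data a TM couldn't have made.
    for length in count(0):
        for bits in product("01", repeat=length):
            candidate = "".join(bits)
            if decision_procedure(candidate):
                return candidate

# Hypothetical decision procedure: accept any string containing "101".
print(first_accepted(lambda s: "101" in s))  # "101"
```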

It is entirely possible that somewhere on Google's servers, the value of Omega is actually sitting there. But we can never know.

BTW, with regards to being "closer to the truth", if it makes you feel any better, the situation there is symmetric: Don and I think we are closer to the truth than you are. In fact, this is true whenever you disagree with anyone. You *always* think you are closer to the truth than the person you are disagreeing with. If you didn't, you wouldn't be disagreeing.

Stop, Pay Troll

@Luke:

> And you have led me to believe you accept Popper's parlance. Perhaps you take no issue with this, but I have found you to be remarkably lax in distinguishing between the formal and empirical realms in your own thought, while being almost vicious with me when I don't maintain a sufficiently clear distinction. ("You are uncoachable.")

Ron's been trolling his own blog for several months now. His responses tend to be shallow and flippant.

Here is a research article you may get something out of:

Computational Modelling vs. Computational Explanation: Is Everything A Turing Machine, And Does It Matter To the Philosophy Of Mind?

Abstract:

"According to pancomputationalism, everything is a computing system. In this paper, I distinguish between different varieties of pancomputationalism. I find that although some varieties are more plausible than others, only the strongest variety is relevant to the philosophy of mind, but only the most trivial varieties are true. As a side effect of this exercise, I offer a clarified distinction between computational modelling and computational explanation."

Here are some others:

The Fundamental Distinction Between Brains and Turing Machines by Andrew Friedman

Why Minds Are Not Like Computers

Humans can consciously generate random number sequences: a possible test for artificial intelligence.

Why The Turing Test Is Bullshit

Special section on randomness in the brain:

Distinct Sources of Deterministic and Stochastic Components of Action Timing Decisions in Rodent Frontal Cortex

Behavioral Variability through Stochastic Choice and Its Gating by Anterior Cingulate Cortex

A Surprising Use for Randomness in the Brain

The basic point, though, is that a TM+RNG is not a TM.

Throw Bear

@Ron-meatbag:

>As a practical matter, the interesting questions are:

1. Do PRFs actually exist? (Yes, you can prove this. See e.g. https://www.cs.virginia.edu/~shelat/651/www/chap3.pdf)

2. Is a particular function that you want to use as a PRF actually a PRF? (That one is harder.)

But none of this matters. If you don't trust your PRF, just use a thermal RNG or a quantum RNG. *Even if* randomness is an essential component of humanity (it isn't) it is trivial to bestow this on computers.

PRFs are completely deterministic after being seeded. A TRNG produces a series of numbers that are independent of prior numbers.

>You did not ask if PRFs existed, you asked if there is a proof that "PRF can substitute for an RNG" in all cases, and that is the question I answered.

>> *Even if* randomness is an essential component of humanity (it isn't) it is trivial to bestow this on computers.

Randomness doesn't have to be an essential component of humanity -- just a component. That one (or more) component cannot be emulated by a TM, as the TM cannot compute the true random numbers used.

Yes, it is trivial these days to add a TRNG to computers (RDRAND, anyone?). But if you add a TRNG to a TM, it's no longer a TM. It's some other kind of machine.
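The determinism point above is easy to see with Python's standard library. This is a minimal illustration; `secrets` draws from the OS entropy pool, which on modern hardware may mix in sources like RDRAND, so it only stands in for a TRNG here.

```python
import random
import secrets

# A PRF/PRNG is fully determined by its seed: same seed, same stream.
a = random.Random(42)
b = random.Random(42)
assert [a.random() for _ in range(5)] == [b.random() for _ in range(5)]

# An OS entropy source offers no such replay: there is no seed you
# can hand out twice, so each run yields an unrelated value.
print(secrets.token_hex(8))
```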

@Luke

>> Have either @Publius or I ever contested the bold?

@Ron-meatbag:

>Publius has:

"Human brains transcend TMs. After all, a human can emulate a TM with a pen and paper. TMs are limited to ℵ0 (Aleph0) states. Human brains, with randomness, have access to ℵ1 (Aleph1) states."

This is, of course, sheer nonsense, but there's no point wasting time arguing with a chatbot.

Nonsense? Oh meatbag, your meathead is failing you. A TM's set of states is clearly limited to the smaller infinity, ℵ0.

Whereas human brains have floating point -- and have possible states limited to the higher infinity, ℵ1.

@Ron-meatbag:

>I wasn't explicit enough here. There are, of course, things that can be described that TMs can't compute, like Chaitin's Omega. But consider: suppose I came to you with a number that I claimed to be omega expanded to some interesting number of decimal places (like a few thousand). I have received this information directly from God as proof that He exists. How would you test my claim? In fact, it's easy to prove that you cannot reliably test my claim, because if you could you could solve the halting problem! Details left as an exercise.

In other words, TMs can compute everything, except for the things they can't compute. This lacks a certain . . . gravitas.

Just like my car can travel anywhere in the world, except for the places it can't travel.

@Ron-meatbag:

>BTW, with regards to being "closer to the truth", if it makes you feel any better, the situation there is symmetric: Don and I think we are closer to the truth than you are.

Let me tell you the secret about Truth. The secret about Truth is that . . . "Truth" is over-rated.

The TRNG Outside

@Ron-meatbag:

>So let me try to reformulate this more precisely: for any experiment you can describe, and any decision procedure on the results of that experiment, there exists a TM that can produce a result for which that decision procedure will answer TRUE, assuming such a result exists. (This is trivial: just enumerate all possible data sequences until you find the right one. That's a big hint, by the way.) Therefore, there cannot exist an experiment and a decision procedure whose result is TRUE only in the case where the result was produced by a process that could not have been produced by a TM. This is why the Publius-bot cannot prove itself to not be a computer by anything it posts here.

Watch and learn, meatbag:

In another browser window, I have open the Random.org site -- where one can get free true random numbers. I select the Random Integer Generator. I fill out the form fields to instruct it to generate 1 integer, between the values of 1 and 2.

Now for the demonstration. If the number returned is 1, the next thing I type will be "Herring" -- otherwise I will type "Lutefisk":

Lutefisk

There we go! Human behavior completely determined by a true random number! Something a TM can't compute, as it can't compute the true random number returned by Random.org!

Let's try it again!

Lutefisk

There we go again! Once again, human behavior controlled by a true random number.

Yes, it's a great day for humanity!

>And fear not them which kill the body, but are not able to kill the soul: but rather fear him which is able to destroy both soul and body in hell. (Matthew 10:28)

@Ron:

> Ron: Let me be more precise: if there were anything beyond TMs, there would be some hint of this in the data, and there isn't. Not only is there no hint of this in the data, no one has been able to even *imagine* what such a hint might look like, and for a very simple reason: anything you can *describe* can be computed by a TM. So even if there were some data that could show that the universe could not be emulated by a TM, it would be impossible to describe it. By definition, that data would be uncomputable.

⋮

> Ron: *Even if* randomness is an essential component of humanity (it isn't) it is trivial to bestow this on computers.

> Luke: … One can bestow oracles on TMs as well.

> Ron: Oracles are theoretical constructs. TRNGs can actually be built in the physical world.

> Luke: So? You wrote: "anything you can *describe* can be computed by a TM". Oracles can be described. You're in danger of making that sentence tautologically true, by making "*describe*" equivalent to "produce an algorithm for".

> Ron: So let me try to reformulate this more precisely: for any experiment you can describe, and any decision procedure on the results of that experiment …

Do scientists actually use "decision procedures" (≈ algorithm?) to decide which hypothesis/theory/framework to adopt? By that I mean procedures which are 100% syntactical, with no 'semantic residue' left over. From what I've read, all such attempts to find such a decision procedure / algorithm have failed. (See the excerpt containing "seeking to reduce hypothesis-selection to an algorithm".) If instead you're actually issuing a promissory note: (i) surely it's good to explicitly admit such things?; (ii) can you "*imagine* what such a hint [of an alternative] might look like"? Before you think too much about those questions, let me continue on to your comment:

>But consider: suppose I came to you with a number that I claimed to be omega expanded to some interesting number of decimal places (like a few thousand). I have received this information directly from God as proof that He exists. How would you test my claim? In fact, it's easy to prove that you cannot reliably test my claim, because if you could you could solve the halting problem! Details left as an exercise.

That's quite the extreme case—stronger than the halting oracle I mentioned. Let's try something simpler: God gives us a black box that determines halting. Every time we feed in a TM and get an answer, it matches with an individual analysis of that particular TM. (The halting problem does not bar individualized treatment by humans.) The black box gives answers that are nigh instantaneous. It is therefore (i) plausible that the black box is more ["tractably"] powerful than Turing machines; while (ii) remaining something we can check on an individual-by-individual basis. Because the black box is a black box, we cannot feed it into itself as the standard proof of the Halting problem's uncomputability does. We don't even know if the black box is a TM.

In the above situation, we are never warranted in being certain that the black box always gets things right. Maybe it makes mistakes on programs greater than length N, for some N we have not yet reached in our testing. But I think we can be warranted in betting on the next answer being correct.

@Luke:

> Do scientists actually use "decision procedures" (≈ algorithm?) to decide which hypothesis/theory/framework to adopt?

They do when they're doing it right. When they don't use decision procedures, they end up "discovering" things like N-rays and canals on Mars.

> Maybe it makes mistakes on programs greater than length N, for some N we have not yet reached in our testing.

If you're willing to accept that kind of error, then the box you describe is easy to build: just enumerate all programs, run each one for N steps, and cache the results.
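The construction just described can be sketched with a toy stand-in. Here "programs" are Python generators that yield once per step -- an assumption of mine for illustration, not a real TM encoding -- but the length/step-bounded error mode is exactly the one under discussion.

```python
def bounded_halting_box(program, n_steps):
    """Toy stand-in for the bounded box: run `program` (a generator
    that yields once per 'step') for at most n_steps steps.  Answers
    'halts' if it stops in time; otherwise it gives up, which is the
    kind of error that testing up to some N can never rule out."""
    g = program()
    for _ in range(n_steps):
        try:
            next(g)
        except StopIteration:
            return "halts"
    return "no answer within N"

def halts_fast():       # halts after 3 steps
    for _ in range(3):
        yield

def loops_forever():    # never halts
    while True:
        yield

print(bounded_halting_box(halts_fast, 100))     # halts
print(bounded_halting_box(loops_forever, 100))  # no answer within N
```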

@Ron:

> > Do scientists actually use "decision procedures" (≈ algorithm?) to decide which hypothesis/theory/framework to adopt?

> They do when they're doing it right. When they don't use decision procedures, they end up "discovering" things like N-rays and canals on Mars.

Where is the empirical evidence that they use decision procedures? (I'm taking "decision procedure" to mean the precise thing it means in theory of computation. Especially: syntax exhausts meaning.) Surely there has been an empirical study of this which doesn't presuppose that the only "correct" way to do things is via using a decision procedure? Instead, it would find that scientific success (especially long-term success, where much is built upon the "accepted" hypotheses) correlates with the use of decision procedures. If instead you're making a purely dogmatic statement with no corpus of properly sampled evidence to support it, please make that clear. I am well aware that there is an era of philosophy of science which very much aligns with your position -- as well as Don Geddis'. What I am not aware of is a single shred of empirical evidence that the closer scientists get to articulating decision procedures -- up to the actual formalism of a decision procedure -- the better the resulting science. In fact, philosophers of science seem to have abandoned the model with which you and Don are [apparently] working -- for reasons I agree with and could go into. But you have shown extreme disdain for philosophers of science who disagree with you, regardless of whether they develop their philosophy of science by studying how scientists do science (as the Stanford school does).

>> Maybe it makes mistakes on programs greater than length N, for some N we have not yet reached in our testing.

> If you're willing to accept that kind of error, then the box you describe is easy to build: just enumerate all programs, run each one for N steps, and cache the results.

I didn't say I was willing to accept that kind of error. I said that { the kind of checking we can do } cannot be used to prove that such error is not present. Those are very, very different statements. There is so little that can be proven when it comes to empirical reality. The "quest for certainty" bequeathed to us by Descartes has ended up corralled within a very, very small fraction of reality. Instead, we work with the best that we have, knowing that any use of [non-mathematical] induction is not guaranteed.

In essence, the halting problem and especially the uncomputability of Chaitin's constant indicate that there is infinite variety which cannot be exhausted by taking a subset, characterizing that subset, and extrapolating from it to everything else (e.g. in the way that fractal patterns are really just a rather small pattern, iterated). Suppose that we compute Chaitin's constant for all programs of a given encoding with length N, then N + 1, then N + 2. There's no guarantee that this will stabilize toward some limit-value. I actually expect empirical reality to exhibit the same pattern. No matter how much we explore, there could be fantastic surprises just around the bend. If, however, we think we've basically exhausted the possible structures and patterns in empirical reality, might we blind ourselves to anything sufficiently different? As I said above, I see no reason to expect that an ultraviolet catastrophe will always conveniently show up to tell us that our models are wrong/incomplete. Maybe sometimes, what we don't look for is what we'll never find.

@Publius:

> Ron's been trolling his own blog for several months now. His responses tend to be shallow and flippant.

I disagree. I suspect the biggest problem is his flipping between abstraction/formalism and empirical reality (example). I think he believes so deeply in a Turing ontology (perhaps with a source of pure/true noise which is antithetical to the Turing ontology) that that metaphysic colors many aspects of his thinking. I am virtually certain that I operate similarly, but with reference to a different metaphysic. I suspect that one generally only discovers how one operates in such domains by talking long and hard enough with someone who operates differently in those domains. That talk can seem boring and each side can appear dunderheaded to the other.

Another problem is when Ron makes statements which appear to be empirical, but are in fact not according to Popper's definition. But perhaps I am not properly separating the dogmatic from the empirical. Two issues in the present discussion are:

(1) Ron's 'Turing ontology' / 'DFA ontology'

(2) whether the best science employs [theory of computation] decision procedures

I ran into something like the latter with @Don Geddis; I suggest starting with my distinction between 'stipulative' and 'substantive' definitions and moving out from there. The history of humankind is full of people saying that "all of reality is just X"; I have never been convinced of any such argument. However, we've made incredible progress by finding the right X and trying to describe as much reality as we could with it. Classical physics is still applicable in many domains. But people seem to like to take that progress and extrapolate to infinity, engaging in a type of induction. It happened with geometrical mathematics (≈ what Newton used); the post-Revolution French tried very hard to make their government "geometrical". Psychology obsessed over the "computer model of the mind" for a while. Humans just like their small, compact explanations. I'm sure I do as well, although I try to look for where those explanations might start failing.

As to your own articles and focus on randomness, I'm a little concerned. My own focus is that the Turing machine formalism does not exhaust all possible patterns in reality. By supposing that it does, as Ron does, I think one blinds himself to probably the vast majority of reality (including much that is relevant to "preaching the gospel of evidence, experiment and reason"). "Randomness" can be the source of patterns we have yet to describe. David Bohm says precisely this with respect to the current quantum mechanics formalism. I don't see much value in spontaneity for spontaneity's sake, while I do find it very exciting to discover new facets of reality. And maybe, just maybe, we can introduce new patterns to reality and even give them life to go do things that we couldn't predict, but also aren't purely random.

@Luke:

> Where is the empirical evidence that they use decision procedures?

Here are some examples:

https://www.pp.rhul.ac.uk/~cowan/stat/glasgow_26feb09.pdf

https://arxiv.org/pdf/1503.07622.pdf

> I didn't say I was willing to accept that kind of error. I said that { the kind of checking we can do } cannot be used to prove that such error is not present. Those are very, very different statements.

If no conceivable experiment you can do can tell you the difference between this kind of error being present and not, why should anyone care? (Note: that is a rhetorical question.)

This, BTW, is the reason that your and the Publius-bot's fixation on TRNGs is so silly. TRNGs cannot be distinguished from PRNGs by any empirical test on the data stream. Our entire universe could be nothing more than a TM computing the digits of pi. No experiment could ever disprove this.
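The stream-test claim can be illustrated with a minimal monobit (frequency) check, a toy version of the simplest test in standard RNG test batteries. Here `secrets` stands in for a TRNG, which is an approximation on my part: it is an OS entropy pool, not necessarily raw hardware randomness.

```python
import random
import secrets

def monobit_bias(bits):
    """Fraction of 1-bits minus 1/2: a toy version of the frequency
    test applied to RNG output streams."""
    return sum(bits) / len(bits) - 0.5

rng = random.Random(1234)                                # seeded PRNG
prng_bits = [rng.getrandbits(1) for _ in range(10_000)]
trng_ish = [secrets.randbits(1) for _ in range(10_000)]  # OS entropy

# Both streams look alike to the test: bias near zero in each case.
print(monobit_bias(prng_bits), monobit_bias(trng_ish))
```

A decent PRNG passes every such data-stream test a TRNG passes, which is the point: the test sees only the stream, not its source.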

The reason we believe that quantum randomness is "real" randomness and not a TM computing the digits of pi is not because we can do an experiment to detect "real randomness", but because the mathematical structure of QM requires the randomness to be "real" in the sense that experimental outcomes cannot be determined by any local hidden variables. But to show this we need to look at the statistics of entangled pairs. From this we can infer that quantum randomness is "really" random. But this can never be shown directly. So no experiment can possibly distinguish a TM from a TM+RNG.

Now, you *can* do an experiment that will distinguish a *quantum computer* from a classical TM. This is the reason quantum computers are so interesting. So you could distinguish the human brain from a TM by showing that it is a quantum computer. (Good luck with that. Roger Penrose already tried and failed.) Failing that, you will need new physics. (Good luck with that too.)

> Suppose that we compute Chaitin's constant for all programs of a given encoding with length N, then N + 1, then N + 2. There's no guarantee that this will stabilize toward some limit-value.

Actually, there is. And in fact, Chaitin's constant has been computed in exactly this way out to a few dozen bits:

https://www.cs.auckland.ac.nz/~cristian/Calude361_370.pdf
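The approach in that paper approximates Omega from below: run all programs up to some length for more and more steps, and sum 2^(-|p|) over those seen to halt. A toy sketch follows; the "interpreter" here is an invented prefix-free stand-in of mine, not a real universal machine.

```python
from itertools import product

def omega_lower_bound(halts_within, max_len, steps):
    """Sum 2**-len(p) over every program p (a bitstring) observed to
    halt within `steps` steps.  The bound can only grow as max_len
    and steps grow -- the sense in which the approximation
    converges to Omega from below."""
    total = 0.0
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            p = "".join(bits)
            if halts_within(p, steps):
                total += 2.0 ** -n
    return total

# Toy prefix-free semantics: a program halts iff it is 0...01
# (all zeros, then a single one).  Invented purely for illustration.
toy = lambda p, steps: set(p[:-1]) <= {"0"} and p.endswith("1")
print(omega_lower_bound(toy, 8, 100))  # 1 - 2**-8 = 0.99609375
```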

> No [matter] how much we explore, there could be fantastic surprises just around the bend.

There could, but that same argument is used by people to justify searching for bigfoot, perpetual motion machines, and the area 51 alien. Life is short. You have to allocate your time wisely. Which is why I am going to try, once again, to depart from this discussion.

@Ron:

> BTW, with regards to being "closer to the truth", if it makes you feel any better, the situation there is symmetric: Don and I think we are closer to the truth than you are. In fact, this is true whenever you disagree with anyone. You *always* think you are closer to the truth than the person you are disagreeing with. If you didn't, you wouldn't be disagreeing.

Your second and third sentences are wrong: I can suspect that I might be wrong but be unwilling to abandon the intuitions which led me to the wrong belief without a sufficient exploration of the error. This process can show up as my arguing for my belief, with [hopefully] both parties working to bridge their understandings to generate a rational path from wrong belief to less-wrong belief. That "rational path" can then be used to discipline the intuitions.

@Ron:

> > Where is the empirical evidence that they use decision procedures?

> Here are some examples:

>

> [Statistical Methods for LHC Physics]

>

> [Practical Statistics for the LHC]

And what % of successful science does that represent? By the way, I watched the announcement of the discovery of the Higgs boson; I know about five sigma, and I clearly recall the care taken to ensure that the LHC scientists were not engaged in something analogous to "Einstein from noise".

> > > > Maybe it makes mistakes on programs greater than length N, for some N we have not yet reached in our testing.

> > > If you're willing to accept that kind of error, then the box you describe is easy to build: just enumerate all programs, run each one for N steps, and cache the results.

> > I didn't say I was willing to accept that kind of error. I said that { the kind of checking we can do } cannot be used to prove that such error is not present. Those are very, very different statements.

> If no conceivable experiment you can do can tell you the difference between this kind of error being present and not, why should anyone care? (Note: that is a rhetorical question.)

Sorry, my reformulation was ambiguous: { the kind of checking we can do } should be { the kind of checking we can currently do / have done so far }. The original formulation was not ambiguous.

> This, BTW, is the reason that your and the Publius-bot's fixation on TRNGs is so silly. TRNGs cannot be distinguished from PRNGs by any empirical test on the data stream.

Is this a statement in formal/abstract-land, or in empirical-land? Do you have a PRNG which has been implemented and a proof that it cannot be distinguished from a TRNG?

> And in fact, Chaitin's constant has been computed in exactly this way out to a few dozen bits:

>

> https://www.cs.auckland.ac.nz/~cristian/Calude361_370.pdf

Odd; WP: Chaitin's constant says "each halting probability is Martin-Löf random, meaning there is not even any algorithm which can reliably guess its digits." Well, if things are as you say, then why can't you incrementally test that Ω you proposed might be stored on Google's servers?

> > No [matter] how much we explore, there could be fantastic surprises just around the bend.

> There could, but that same argument is used by people to justify searching for bigfoot, perpetual motion machines, and the area 51 alien.

Erm, you just need to search wisely, instead of randomly jumping really far from what has been well-characterized. If you don't believe there could be fantastic surprises just around the bend, then if you're Douglas Osheroff and see weird data from your cryogenic experiment, you might dismiss it and fail to discover superfluidity in helium-3. Then you wouldn't get the Nobel Prize.

> Which is why I am going to try, once again, to depart from this discussion.

Oh well.

Turing Machines, Defeated

Finally, a Problem That Only Quantum Computers Will Ever Be Able to Solve

Here is a computation a Turing machine can't compute:

A physical process is measured and provided as input at each time tick

Call this input xi(t); one would write n variables of it as x0(t), x1(t), x2(t), ... x[n-1](t)

Function Fi(xi(t)) is to be computed on the streaming input.

Time is divided into intervals, each the duration of one unit. It takes one time unit to read xi(t), and it also takes one time unit to compute Fi(xi(t)).

The problem to be solved is computing Fi(xi(t)) for each i in the range 0 <= i <= n-1

A Turing Machine fails to compute all of the Fi as desired.

Consider: x0(t) is read initially and placed onto the tape.

It follows that F0(x0(t)) can then be computed correctly.

However, when the next variable, x1, is to be read, the time variable would have changed from t to t+1, and we obtain x1(t+1) and not x1(t). Continuing in this fashion, x2(t+2), x3(t+3), ..., x[n-1](t+n-1) are read from the input.

Since the process by which each xi changes with time is not known, it is impossible to recover xi(t) from xi(t+i) for i=1, 2, 3, ... , n-1.

Consequently, this approach cannot produce F1(x1(t)), F2(x2(t)), ..., F[n-1](x[n-1](t)) as required.
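The claimed schedule (one tick to read, one tick to compute) can be simulated directly. This only illustrates the scenario as stated, with an invented toy stream; it takes no side on whether the setup is fair to Turing machines.

```python
def tm_style_reader(stream, n):
    """Simulate the schedule described above: a strictly sequential
    machine spends one tick reading and one tick computing, so it
    samples the stream only on every other tick."""
    samples, t = [], 0
    for i in range(n):
        samples.append(stream(t))  # read x_i at time t
        t += 1                     # reading consumed one tick
        t += 1                     # computing F_i consumes another
    return samples

# Toy time-varying input: the value at tick t is simply t.
got = tm_style_reader(lambda t: t, 5)
print(got)  # [0, 2, 4, 6, 8]: the machine saw only every other tick
```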

@Publius:

> Here is a computation a Turing machine can't compute:

Here's another:

https://en.wikipedia.org/wiki/Halting_problem

And a few more:

https://en.wikipedia.org/wiki/List_of_undecidable_problems

Human Mind

@Ron:

>Here's another:
>
>https://en.wikipedia.org/wiki/Halting_problem

If you extend a Turing Machine such that each step takes only half the time as the prior step, then the halting problem can be solved in 2 time units.

>And a few more:
>
>https://en.wikipedia.org/wiki/List_of_undecidable_problems

The difference, of course, is that the algorithm I described can be computed -- just not by a Turing Machine.

The Myth of Universal Computation

Nonuniversality Explained

The empty brain [your brain is not a computer]

Is the Brain's Mind A Computer Program? John R. Searle

Chinese room argument John R. Searle

Is the Brain a Digital Computer? John R. Searle 2002

> If you extend a Turing Machine such that each step takes only half the time as the prior step, then the halting problem can be solved in 2 time units.

And if you extend a TM so that it can perform a step at every point in time expressible as a rational number of seconds then it can "solve" the halting problem in an arbitrarily small period of time. But that's just not very interesting.
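The arithmetic behind the "2 time units" figure is just a geometric series: if step k takes 2^-k seconds, then every step of an infinite run still finishes before the 2-second mark.

```python
# Step k takes 2**-k seconds; the first 50 steps take
# 2 - 2**-49 seconds in total, and the sum never reaches 2
# no matter how many steps are added.
total = sum(2.0 ** -k for k in range(50))
print(total)  # just under 2
```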

> the algorithm I described can be computed, just not by a Turing Machine

No, that's not true. You stipulated:

> the process by which each xi changes with time is not known

So you have simply begged the question. You've assumed an unknown process and then just *proclaimed* without proof that a TM can't compute it. In fact a TM can compute your unknown process. Read: https://www.mathpages.com/rr/s9-07/9-07.htm and pay particular attention to the passage that begins, "In fact, assuming the digits of the two transcendental numbers π and e are normally distributed..."

Sic Transit Gloria Mundi

@Ron:

>So you have simply begged the question. You've assumed an unknown process and then just *proclaimed* without proof that a TM can't compute it. In fact a TM can compute your unknown process.

Interesting that you should focus on the input xi, which appears to me to be the least interesting part of the argument. xi can just be measurements from a random process -- say, the count of radioactive decays over the interval t to t+e. The use of a random process, I believe, is just to preclude pre-computation or post-computation of the function F(). The function F() itself could be anything -- say, computing the running average of the measurements.

While the Turing machine is calculating F(), one clock tick goes by. The Turing machine misses the data from that clock tick. The Turing machine only ends up getting data from every other clock tick, t=0, 2, 4, 6, 8, ... . The Turing machine's calculation of F(xn(tn)) is incorrect, as it missed half the data.

In contrast, a register machine can read the data from every time tick and compute the function correctly.

@Publius:

Since posting my last response I've had a chance to look at some of the references you provided. (Believe it or not, I really do try to give serious consideration to points of view that are opposed to my own.) I still believe you are wrong, but this deserves a more considered response than will fit in a comment so I'm going to write a top-level post about it. Might take a few days.

In the meantime, you can read this HN thread I found on the topic:

https://news.ycombinator.com/item?id=565259
