1. If a proposition P is known, then P is true

2. If the conjunction P&Q is known, then P is known and Q is known

3. If P is true then it is possible to know P

4. If ~P is a logical tautology, then P is false

Then you can prove:

5. If P is true, then P is known

In other words, if 1-4 are true, then all truths are known.
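For reference, here is a sketch of the derivation, following the standard presentation; the line numbers match the Wikipedia proof discussed in the comments below, and premises 1-4 above correspond to (A)-(D) there:

```
 1. K(p & ¬Kp)        assume, for reductio
 2. Kp & K¬Kp         from 1, by premise 2 (K distributes over &)
 3. Kp                from 2
 4. K¬Kp              from 2
 5. ¬Kp               from 4, by premise 1 (knowledge implies truth)
 6. ¬K(p & ¬Kp)       from 1-5 by reductio; the assumption in 1 is
                      discharged, so line 6 is a theorem
 7. ¬LK(p & ¬Kp)      from 6, by premise 4 (a theorem's negation is
                      not possible)
 8. p & ¬Kp           assume, for reductio
 9. LK(p & ¬Kp)       from 8, by premise 3 (all truths are knowable)
10. ¬(p & ¬Kp)        from 7-9 by reductio
11. p → Kp            from 10, by propositional logic
```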

You can digest this result in at least two different ways:

1. It's formal proof of the existence of an omniscient being, i.e. God

2. The conclusion is clearly false, and so at least one of the premises must be false.

If, like me, you choose to cast your lot with option 2 then it makes a fun little puzzle to try to figure out which of the premises is (or are) false. You can read up on all of the different ways that philosophers have tried to resolve Fitch here (with some extra food for thought here). Personally, I think the answer is obvious and simple, and a good example of why modern philosophy needs to take computability and complexity theory more seriously than it does.

If you want to try to work it out for yourself, stop reading now. Spoiler alert.

-------

It seems pretty clear to me that the problematic assumption is #3. There are a lot of ways to argue this, but the one that I find most convincing is simply to observe that the universe is finite while there are an infinite number of potentially knowable truths. Hence, there must exist truths which cannot even be *represented* (let alone known) in this universe because their representation requires more bits of information than this universe contains. QED.
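The counting argument can be sketched in a few lines of Python; the capacity figure below is a stand-in for illustration, not a claim about the actual universe:

```python
# Pigeonhole sketch: a universe that can hold at most N bits can
# represent only finitely many distinct statements, while there are
# infinitely many truths (e.g. "k + 0 == k" for every natural k),
# so almost all truths have no representation that fits.

def representable_strings(n_bits: int) -> int:
    """Number of distinct nonempty bit strings of length <= n_bits:
    2 + 4 + ... + 2**n_bits = 2**(n_bits + 1) - 2 -- always finite."""
    return 2 ** (n_bits + 1) - 2

# Sanity check on a tiny 3-bit "universe": 2 + 4 + 8 = 14 statements.
assert representable_strings(3) == 14

# However large a finite capacity you grant, the representable
# statements stay finite while the truths keep coming:
truths = [f"{k} + 0 == {k}" for k in range(10**6)]  # ...and on forever
assert len(truths) == 10**6
```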

But the problem actually runs much deeper than that. Notice how I had to sort of twist myself into linguistic knots to cast the premises in the passive voice. I started out writing premise #1 as, "If you know P, then P is true." But that raises the question: who is "you"? The modal logic in which Fitch's proof is conducted is supposed to be a model of knowledge, but it makes no reference to any knowing entities. KP is supposed to mean "P is known" but it says nothing about the knower. So in the formalism *it is not even possible to formulate the statement*, "I know P but you don't." The formalism is also timeless. If KP is true, then it is true for all time. So it is not possible to say, "I learned P yesterday." If you start with an agent-free and time-free model of knowledge then it's hardly surprising that you end up with some weird results, because what you're reasoning about is some mathematical construct that bears no resemblance at all to the real world.

Real knowledge is *a state of an agent at a particular time*, which is to say, it is a statement about *physics*. If I say, "I know that 1+1=2", that is a statement about the state of my brain, a physical thing, and more to the point, a computational device. Hence the theory of computation applies, as does complexity theory, and my knowledge is constrained by both. So premise 3 is not only false, but it is *provably* false, at least in this physical universe.

That would be the end of it except for two things: First, it is actually possible to carry out the proof with a weaker version of the third premise. Instead of "all truths are knowable" you can instead use:

3a: There is an unknown, but knowable truth, and it is knowable that there is an unknown, but knowable truth

and still get the same result that all truths are known. That formulation seems much more difficult to dismiss on physical grounds. I'll leave that one as an exercise, but here's a hint: think about what 3a implies in terms of whether the state of knowledge in the universe is static or not. (If you really want to go down this rabbit hole you can also read up on possible-world semantics of modal logics.)

Second, there is this objection raised by Luke in our original exchange:

But one can just restrict the domain to those truths we think are knowable and re-state the entire paradox. When restricted to knowable truths, the axioms lead to the conclusion that all knowable truths are already known. Surely you don't wish to accept this conclusion?

I'm going to re-state that with a little more precision:

One can just restrict the domain of the model of your modal logic to those truths that are tractably computable. Because the proof itself is formal, it is still logically valid under a change of model domain. When restricted to tractably computable truths, the axioms lead to the conclusion that all tractably computable truths are already known.

Again, I'll leave figuring out why this is wrong as an exercise. Hint: look at the axioms of the modal logic of knowledge and think about whether or not the domain of tractably computable truths really is a valid model for them.

## 44 comments:

@Ron:

> 4. If ~P is a logical tautology, then P is false

Shouldn't this be "if ~P is a logical tautology, then it is impossible to know P"?

@me:

> Shouldn't this be "if ~P is a logical tautology, then it is impossible to know P"?

Sorry, misremembered from the Wikipedia article. Shouldn't it actually be

"If ~P is a logical tautology, then P is not possible?"

The point is that in modal logic, "P is not possible" and "P is false" are not the same; the latter only asserts that P is false in the particular possible world being considered, while the former asserts that P is false in all possible worlds.

@Ron:

> I'll leave figuring out why this is wrong as an exercise.

I don't want to post answers to the homework, but just to clarify the exercise: wouldn't it have to boil down to showing how, in a model restricted to only the tractably computable truths, at least one of the four postulates 1. through 4. that you list does not hold?

@Ron:

Instead of "all truths are knowable" you can instead use:

3a: There is an unknown, but knowable truth, and it is knowable that there is an unknown, but knowable truth

and still get the same result that all truths are known.

I'm confused about this one too. Isn't "there is a truth that is knowable, but not known" inconsistent with "all truths are known"? (Or even "all knowable truths are known"?)

@me:

Shouldn't it actually be

"If ~P is a logical tautology, then P is not possible?"

Looking at this further, I think this premise, (D), is actually where I see the first problem with the proof. As you and I are stating it here, the antecedent requires not P to be a tautology--which to me means not P must follow solely from the properties of the logical connectives.

The Wikipedia article states the antecedent as "if P can be proven false without assumptions", which does not appear to be the same thing, at least as it is used in the proof given there. In the proof, rule (D) is used to go from line 6 to line 7, but the proposition whose negation appears as line 6 was not proven solely from the properties of logical connectives: premises (A) and (B) had to be used to prove it. Don't those count as "assumptions"? If they do, then the use of premise (D) to go from 6 to 7 seems unjustified. But if (A) and (B) don't count as "assumptions", then how are "assumptions" being defined?

Restating the step from line 6 to line 7 in ordinary language gives another way of seeing the issue. Line 6 reads as

"It is not known that P is both true and unknown."

Line 7 reads as

"It is not possible to know that P is both true and unknown."

It seems like this step has already done the important work, by conflating "it is not known" and "it is not possible to know". And it seems to me that that step is not justified for the proposition given.

So I think premise (D) needs to be restricted to actual tautologies, i.e., propositions which truly follow purely from the properties of logical connectives; allowing it to be used as it is used in the proof here seems to me to be making it too strong.

> Shouldn't it actually be "If ~P is a logical tautology, then P is not possible?"

Right. My mistake. ("P is false" follows from "P is not possible", but, as you correctly point out, they are not the same.)

> I don't want to post answers to the homework, but just to clarify the exercise: wouldn't it have to boil down to showing how, in a model restricted to only the tractably computable truths, at least one of the four postulates 1. through 4. that you list does not hold?

Not necessarily. It is also possible that one of the rules of inference in the modal logic you're using is no longer valid.

> I'm confused about this one too. Isn't "there is a truth that is knowable, but not known" inconsistent with "all truths are known"?

Well, yeah. That's why it's a paradox.

> The Wikipedia article states the antecedent as "if P can be proven false without assumptions", which does not appear to be the same thing

That's just the definition of "tautology": something which can be proven true without assumptions.

> the proposition whose negation appears as line 6 was not proven solely from the properties of logical connectives: premises (A) and (B) had to be used to prove it. Don't those count as "assumptions"?

No, because by the time you get to line 6 those assumptions have been discharged. Assumptions that are made for a reductio proof are provisional. They have a "lexical scope" that ends at the last step.

@Ron:

> It is also possible that one of the rules of inference in the modal logic you're using is no longer valid.

Ok, but how are those rules of inference used in the proof given in the Wikipedia article? I don't see them playing any role at all. The article itself says: "The proof uses minimal assumptions about the nature of K and L", and I don't see any of the rules of inference of modal logic being part of those minimal assumptions. If they are, I'd like to understand how.

> Well, yeah. That's why it's a paradox.

No, the point I was making is that (C'), unlike (C), appears to contradict the conclusion "all truths are known". But I see on further consideration that (C') plays a different role in the argument than (C) does. (C) is assumed as a premise and used to go from line 8 to line 9 in the proof given in the article. But (C'), if it is used, is adopted as an assumption that will end up being discharged by reductio ad absurdum. So it's not playing the same kind of role that (C) does, and the fact that it contradicts the conclusion is intended.

> That's just the definition of "tautology": something which can be proven true without assumptions.

I don't think that's correct unless "assumptions" means "anything other than the properties of logical connectives". As I said, my understanding of "tautology" is that it is a statement that is true purely because of its logical form, without having to make any assumptions about the truth or falsity of individual propositions that appear in it. Premises (A) and (B) in the Wikipedia article are assumptions about the truth or falsity of particular propositions, and line 6 in the proof requires those assumptions; the proposition in line 6 is not true purely because of its logical form.

Even if you didn't actually mean that by "tautology", what I am proposing is that premise (D) *should* be interpreted to require that meaning of "tautology", i.e., you can only deduce not Lp from not p if not p is a tautology in the sense I just gave, i.e., true purely because of its logical form.

> by the time you get to line 6 those assumptions have been discharged.

The assumption made in line 1 has been discharged, yes. But (A) and (B) have not; and what I am proposing is that they *should* count as "assumptions", so that the step from line 6 to line 7 is not a valid application of (D) as I am proposing it should be interpreted.

> Ok, but how are those rules of inference used in the proof given in the Wikipedia article? I don't see them playing any role at all.

Both I and the Wikipedia article are playing a little fast and loose with our terminology and notation, particularly with the word "assumption." (The Plato article has a more precise presentation.) This is because under certain circumstances (which happen to be very common) there is an equivalence between a rule of inference:

P ⊢ Q

and a conditional proposition

P -> Q

The latter is a formal statement *within* the logic, while the former is a meta-logical statement rendered in fancy symbols that simply means, "If P is a theorem then Q is a theorem." (Sometimes you will see this written as ⊢P ⇒ ⊢Q, but this can get confusing. In particular, it is important not to conflate ⇒ with ->. They are two different symbols that have very similar but not identical meanings.)

P⊢Q and P->Q are not equivalent, but they are very closely related. In particular, if you accept modus ponens:

P, P->Q ⊢ Q

(informally: if P is a theorem and P->Q is a theorem then Q is a theorem) and conditional proof:

P

---

Q ⊢ P->Q

(informally: if by assuming P is a theorem you can show that Q is a theorem then P->Q is a theorem) then there is a formal equivalence between P ⊢ Q and P -> Q, i.e. the theorems of a logic that includes MP and CP and P ⊢ Q are exactly the same as the theorems of a logic that includes MP and CP and has P->Q as an axiom.
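The equivalence described above can be illustrated with a toy forward-chainer (just a sketch, not a full proof system): with modus ponens as the only engine, adding "P -> Q" as an axiom has the same effect as adopting the inference rule P ⊢ Q.

```python
# Toy illustration: modus ponens closure over conditional axioms.
# Each (P, Q) pair encodes an axiom "P -> Q"; theorems are strings.

def close_under_mp(axioms, conditionals):
    """Forward-chain modus ponens until no new theorems appear:
    if P is a theorem and P -> Q is an axiom, Q becomes a theorem."""
    theorems = set(axioms)
    changed = True
    while changed:
        changed = False
        for p, q in conditionals:
            if p in theorems and q not in theorems:
                theorems.add(q)
                changed = True
    return theorems

# With axiom P and conditional axiom P -> Q, Q comes out as a theorem,
# exactly as if we had adopted the inference rule "P |- Q":
assert close_under_mp({"P"}, {("P", "Q")}) == {"P", "Q"}
# Without P, the conditional alone proves nothing new:
assert close_under_mp(set(), {("P", "Q")}) == set()
```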

Wikipedia presents its first three "modality rules" in the form of three axioms and one inference rule. It could have presented them all as inference rules. (Rule D cannot be converted to an axiom because there is no way to say "without assumptions" within the logic. Within the logic, a string is either a theorem, or it is not.)

I apologize for adding to the confusion by calling these "assumptions" in the OP. They are assumptions in the sense that you have to informally assume that the informal renderings of the formal statements are in some sense true (or at least plausible) otherwise the puzzle isn't very interesting.

Note that the proof also "assumes" another rule of inference (called reductio):

P

---

Q&~Q ⊢ ~P

Informally: if by assuming P is a theorem you can derive a contradiction, then ~P is a theorem.

You can prove that all of these things are "equivalent" in some sense, i.e. adding reductio to a logic that includes some minimal repertoire of other rules of inference (I forget exactly what those are) doesn't change its theorems, so in some sense reductio is just a "macro" that shortens proofs.

All of these things are generally tacitly assumed because being really rigorous about all this gets tiresome rather quickly, and detracts from the main point, which is that modal logic is just not a good model of what happens in the real world when some agent knows something.

> But (C'), if it is used, is adopted as an assumption that will end up being discharged by reductio ad absurdum.

No, C' really is just an axiom that means more or less what the informal rendering in the article says (there's a little bit of a problem with the scoping of the existential variable, but that's getting too deeply into the weeds). The fact that you can derive a contradiction from it simply means that (as noted in the article) the axiom is false, i.e. its negation is a theorem, and hence it is not the case that there exists an unknown but knowable truth.

If you think about it, this actually makes perfect sense because standard modal logic has no notion of time. In a timeless world where nothing ever changes, if a truth is unknown then it is in fact unknowable.

@Ron:

> Wikipedia presents its first three "modality rules" in the form of three axioms and one inference rule. It could have presented them all as inference rules.

Yes, I see that.

> If you think about it, this actually makes perfect sense because standard modal logic has no notion of time. In a timeless world where nothing ever changes, if a truth is unknown then it is in fact unknowable.

So basically, your position is that, if we try to construct a model that includes the notion of time, it won't satisfy all of the modal logic rules of inference?

> So basically, your position is that, if we try to construct a model that includes the notion of time, it won't satisfy all of the modal logic rules of inference?

Well, yes, obviously. You don't even need Fitch's paradox to see that. As I pointed out in the OP, in standard modal logic you can't even *say* "A knows P but B does not" or "A learned P at time T" or "A knew P but then forgot it."

The question of what happens when you try to extend modal logic to include agency and time turns out to be really hard. It's an open research issue (and widely believed to be AI-complete). The question on the table here is a simpler one: what happens if you try to extend modal logic just to include the (global) notions of computability and/or tractability?

@Ron:

> The question on the table here is a simpler one: what happens if you try to extend modal logic just to include the (global) notions of computability and/or tractability?

Ok, but then we're back to my earlier question: if Fitch's paradox is no longer valid when we extend modal logic to include those notions, doesn't that imply that, with those notions included, at least one of the four rules (A), (B), (C), or (D) must be invalid? I say "rules" to account for your point that really those statements are not assumptions, they're rules of inference; and my point is that I don't see any of the modal logic rules of inference (the ones described in the "Epistemic modal logic" Wikipedia article you linked to) being used in the proof of Fitch's paradox. Other than (A) through (D), the only rules I see being used are conjunction elimination, reductio ad absurdum, and the classical tautology that says that "not (p & not q)" is equivalent to "p -> q". None of those are modal logic rules of inference. So even if the modal logic rules of inference turn out to be invalid if you extend the model to include computability and/or tractability, how does that affect the proof of Fitch's paradox?

> if Fitch's paradox is no longer valid when we extend modal logic to include those notions, doesn't that imply that, with those notions included, at least one of the four rules (A), (B), (C), or (D) must be invalid?

...

> even if the modal logic rules of inference turn out to be invalid if you extend the model to include computability and/or tractability, how does that affect the proof of Fitch's paradox?

Ah, I think I see why you're confused. My fault. I wrote:

> Not necessarily. It is also possible that one of the rules of inference in the modal logic you're using is no longer valid.

But A-D *are* the modal rules of inference.

I should have said: "Not necessarily. There are other possibilities, like that propositional or first-order logic might break." (The fact that you can encode the Peano axioms as first-order logic should actually make you take that second possibility fairly seriously.)

But yes... if you find yourself able to prove something like, "If P is tractably computable then P is known" (or some variation on that theme) and you don't believe that this is actually true (it pretty clearly isn't), then something in your reasoning is failing to faithfully model reality. The puzzle is to figure out what that is.

@Ron:

A-D *are* the modal rules of inference.

I should have said: "Not necessarily. There are other possibilities, like that propositional or first-order logic might break."

Ah, ok, got it.

@Ron:

First, thanks for paying this a bit more attention.

> I'm going to re-state that with a little more precision:

>

> >> One can just restrict the domain of the model of your modal logic to those truths that are tractably computable. Because the proof itself is formal, it is still logically valid under a change of model domain. When restricted to tractably computable truths, the axioms lead to the conclusion that all tractably computable truths are already known.

>

> Again, I'll leave figuring out why this is wrong as an exercise. Hint: look at the axioms of the modal logic of knowledge and think about whether or not the domain of tractably computable truths really is a valid model for them.

Not being schooled in formal logic, the way I think about this is via the difference between complex numbers and reals: if I have an arbitrary polynomial and presuppose that I know all its roots, I cannot simultaneously insist that the domain is only the reals. However, if I somehow guarantee that all the roots are real, it is consistent to restrict the domain to the reals—even though the reals are not algebraically closed.

I see your requiring a complete model restricted to tractable truths as tantamount to requiring that the domain be algebraically closed. And yet, if the *p* in steps 1. and 8. at WP: Fitch's paradox of knowability § Proof is guaranteed to be tractable, where does the equivalent of a possibly complex root enter? You objected to the following rule:

(C) p → LKp, aka "all truths are knowable".

The *only* place that rule is applied is here:

8. Suppose p & ¬Kp

9. LK(p & ¬Kp), from line 8 by rule (C)

So, unless somehow we can have this situation—

- p is tractable
- p & ¬Kp is intractable

—I don't see how I am implicitly relying on a single intractable truth. I don't care about every single theorem which can be derived in modal logic; I don't need to ensure that restricting the domain to tractable truths works for those. I just care about *this particular instance*.

I'm reminded of the invention of Fourier analysis, which mathematicians originally lambasted because it was not initially set on firm mathematical foundations. It was not established in exactly which cases you can do what Joseph Fourier did in modeling heat flow. Fourier was wrong when he stated that any function of a variable can be represented by ∑aₙ·sin(nx), but he was right in the particular cases he was studying. The claim "in general, Fourier was wrong" ⇏ "in his particular cases, Fourier was wrong". It is just *wrong* to strengthen a person's argument, find the strengthened version to be false, and thus conclude that the original argument is false. (In Fourier's case, he improperly generalized; I have done no such thing.)

@Luke:

Your analogy with polynomials is not bad, but you made one major mistake:

> if I have an arbitrary polynomial and presuppose that I know all its roots, I cannot simultaneously insist that the domain is only the reals

I don't know what you mean by "presuppose that I know all its roots" but there is absolutely nothing that stops you from restricting the domain of a polynomial to anything you want. The only thing that changes when you choose a domain that is not algebraically closed is that you lose the nice result that a polynomial of degree n has n roots. If you restrict to the reals, that result simply changes to be <= n roots. (You can get better results than that. For example, on the reals, odd-degree polynomials always have at least one root, but on the rationals they can have zero roots.)
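The point about restricting a polynomial's domain can be checked concretely; this is a small sketch using only the standard library, with x² + 1 and x² - 1 as the examples from the discussion:

```python
# Restricting the domain of a polynomial doesn't create a
# contradiction; it just changes which roots you see.
import cmath

def roots_of_quadratic(a, b, c):
    """All complex roots of a*x^2 + b*x + c (assumes a != 0)."""
    d = cmath.sqrt(b * b - 4 * a * c)
    return [(-b + d) / (2 * a), (-b - d) / (2 * a)]

def real_roots(roots, tol=1e-9):
    """Restrict the domain to the reals: keep only roots whose
    imaginary part is (numerically) zero."""
    return [r.real for r in roots if abs(r.imag) < tol]

# x^2 + 1 has two complex roots, i and -i, but no real roots:
assert real_roots(roots_of_quadratic(1, 0, 1)) == []
# x^2 - 1 has two roots, both real, so nothing is lost:
assert real_roots(roots_of_quadratic(1, 0, -1)) == [1.0, -1.0]
```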

> You objected to the following rule:

That's right, because it does not accurately model the real world if the domain of P is all truths. But if the domain of P is all tractable truths, then it DOES accurately model the real world -- that's the *definition* of tractability!

But when you do this, something else breaks. It's not something in the *logic*. The proof is still sound. It has to be -- nothing in the formal logic has changed, only our model, our interpretation of what the symbols *mean* has changed. So there are only two possibilities:

1. The conclusion is in fact correct in our world when you restrict the domain to tractable truths, or

2. Something about the model is false is our world

Hint: it's #2.

Just to try to avoid some confusion:

> 2. Something about the model is false is our world

Make that:

2. Something about the model is STILL false in our world EVEN IF you restrict the domain to tractable truths

@Ron:

> I don't know what you mean by "presuppose that I know all its roots"

I meant presuppose that for polynomial of degree N, I know N roots in my chosen domain D. If I say that D = ℝ, then I have a potential contradiction without somehow ensuring that all of those roots are in ℝ.

> But when you do this, something else breaks. It's not something in the *logic*. The proof is still sound.

That's not what you led me to believe in our email chain where you declared me "bordering on uncoachable". And I think you were on to something: I haven't proved that 'tractable truths' can form a model that allows you to do any and all modal logic on it. Fortunately, I don't think I needed such a robust beast for the limited thing I was trying to do.

> 2. Something about the model is false is our world

I get that the model doesn't allow multiple agents and doesn't incorporate time, but I don't see why that is an immediate problem. Tractable-Fitch still destroys the distinction between "possibly I could know tractable truth p" and "actually I know tractable truth p". Formally, for tractable truth p:

(p → LKp) → (p → Kp)

When we restrict things to just one knower at just one time, this problem rears its head. This seems to imply that with the given rules of inference, it is impossible to "trace the boundaries of one's ignorance". Furthermore, with the given rules of inference, learning something is *illogical*.

This is all interesting to the extent that this logic is used in areas where there is supposed to be a genuine distinction between what is potentially knowable and what is in fact known. So for example, maybe we shouldn't always presuppose the following:

(B) K(p&q) → (Kp & Kq), aka knowing a conjunction implies knowing each conjunct.

> I meant presuppose that for polynomial of degree N, I know N roots in my chosen domain D. If I say that D = ℝ, then I have a potential contradiction without somehow ensuring that all of those roots are in ℝ.

That still makes no sense. You originally said:

> if I have an arbitrary polynomial and presuppose that I know all its roots, I cannot simultaneously insist that the domain is only the reals

But of course you can. Consider x^2 + 1 = 0. It has two complex roots, i and -i. If I want to restrict the domain to the reals then it has no roots.

> That's not what you led me to believe in our email chain where you declared me "bordering on uncoachable".

I don't want to go back and re-hash all that, but one of your problems is that you are very sloppy in your phraseology, and this makes it very easy to misinterpret what you say (see above). It wasn't clear to me at the time that you wanted to restrict the model domain. I thought you wanted to try to prove something like:

P & TRACTABLE(P) -> KP

> Tractable-Fitch still destroys the distinction between "possibly I could know tractable truth p" and "actually I know tractable truth p".

That's right.

> When we restrict things to just one knower at just one time, this problem rears its head.

What problem? It's simply true: everything you could possibly know at a given time, you do in fact know at that time.

@Luke:

> So, unless somehow we can have this situation—
>
> p is tractable
>
> p & ¬Kp is intractable

Which, indeed, would be a situation in which the logical argument would fail. And I don't see what is impossible or even implausible about the condition. Certainly you have given no proof that such a situation cannot occur.

@Peter:

That's actually not a valid criticism. Since Luke is the one defining the domain, he gets to define "tractable" however he likes.

In fact, the whole "tractability" thing kind of becomes a red herring when you apply it to the model domain (as opposed to trying to model it within the logic, which is what I originally thought Luke was proposing). You could apply *any* predicate to the model domain and get the same result. Fitch's "paradox" is "true" for all truths, all tractable truths, and even all snozulatable truths, so long as Fitch's axioms hold for snozulatable truths, and are themselves snozulatable truths.

But of course, none of that is particularly interesting. What is interesting is whether the domain of snozulatable truths has any bearing on reality, and to figure *that* out you have to get a more detailed grip on what snozulatable (or tractable) actually means, as well as what it actually means to "know" something. And *that* is where things start to fall apart for Fitch.

Fitch's "paradox" is kind of like non-euclidean geometry. It's true for some kinds of worlds that we can imagine. It just isn't true of the world we happen to live in. The puzzle is simply to identify, for any given variant of Fitch, where the mismatch is between what it models and our reality. For the original version, that mismatch was the assumption that all truths are knowable (because in our world they are not). You can "fix" that by restricting the model domain to tractable truths, which by some suitable definition of "tractable" make it so that they are all knowable. And then the problem shifts somewhere else (like time).

It's just not that complicated. The bottom line is that not all mathematical models actually apply to the real world, even if they are based on premises that might seem intuitively plausible.

@Ron:

> That's actually not a valid criticism. Since Luke is the one defining the domain, he gets to define "tractable" however he likes.

> But of course, none of that is particularly interesting. What is interesting is whether the domain of snozulatable truths has any bearing on reality

I was assuming that by "tractable" Luke was trying to capture some predicate that actually has a bearing on reality. For a predicate that does have a bearing on reality, I don't think that all of the propositions appearing in the proof will fall within the domain covered by the predicate.

> And then the problem shifts somewhere else (like time).

I'm actually not convinced that leaving out time, in itself, is the root of the problem. After all, it's not difficult to include time, and even multiple agents, in the definitions of the modal operators. For example:

Define a family of "knowledge" operators K(x, t) such that K(x, t)p means that agent x knows proposition p at time t. Then the obvious way to define the modal operator K that appears in Fitch's proof is to define Kp as the proposition ∃x ∃t: K(x, t)p, i.e., proposition p is known by some agent at some time.

The problem comes with trying to differentiate this operator from the modal operator LK, since the most obvious way to define LK, the "knowable" operator, is exactly the same as the definition I just gave above for K. In other words, the implicit definition of "knowable" that is doing the work behind the scenes is: "a proposition p is knowable if it is known by some agent at some time". And of course, this makes "knowable" equivalent to "known" once you actually look under the hood.
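The construction above can be made concrete in a few lines; the agents, times, and knowledge table are made-up illustration data:

```python
# Sketch of the construction: with Kp defined as "some agent knows p
# at some time", the "obvious" knowable operator is literally the same
# existential condition, so knowable collapses to known in this model.
from itertools import product

agents = ["A", "B"]
times = [0, 1]
# knows[(agent, time)] is the set of propositions that agent knows then.
knows = {("A", 0): {"p"}, ("A", 1): {"p", "q"},
         ("B", 0): set(), ("B", 1): {"q"}}

def K(p: str) -> bool:
    """Kp: p is known by some agent at some time."""
    return any(p in knows[(x, t)] for x, t in product(agents, times))

def LK(p: str) -> bool:
    """The 'obvious' knowable operator -- note it is the very same
    existential condition as K, which is the point being made."""
    return any(p in knows[(x, t)] for x, t in product(agents, times))

# "Knowable" and "known" agree on every proposition, known or not:
assert all(K(p) == LK(p) for p in ["p", "q", "r"])
```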

@Peter:

> I was assuming that by "tractable" Luke was trying to capture some predicate that actually has a bearing on reality.

I don't think you can assume such things. Surely it cannot have escaped your notice that Luke's thoughts are often detached from reality.

Also, even within the confines of reality, "tractable" has a wide range of possible meaning.

> the obvious way to define the modal operator K that appears in Fitch's proof is to define Kp as the proposition: Ex,Et:K(x, t)p--i.e., proposition p is known by some agent at some time.

> the most obvious way to define LK--the "knowable" operator--is exactly the same as the definition I just gave above for K.

Possibility is usually defined in terms of possible worlds. LK would mean: in some possible world (not necessarily our world) some agent knows P at some time. Then Fitch becomes: if P is tractable, then some agent in *this* world will know P.

> this makes "knowable" equivalent to "known" once you actually look under the hood.

Yes, I think this is exactly right.
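For contrast, the possible-worlds reading of LK sketched above keeps "knowable" and "known" genuinely apart; the worlds, accessibility relation, and knowledge sets below are made-up illustration data:

```python
# Possible-worlds sketch: Kp holds at a world if p is known there;
# LKp holds if p is known in some accessible world. With a world
# where more is known than in ours, knowable != known.
worlds = ["actual", "w1"]
accessible = {"actual": ["actual", "w1"], "w1": ["w1"]}
# known_in[world] = propositions known (by someone, sometime) there.
known_in = {"actual": {"p"}, "w1": {"p", "q"}}

def K(p: str, world: str = "actual") -> bool:
    """Kp at a world: p is known in that world."""
    return p in known_in[world]

def LK(p: str, world: str = "actual") -> bool:
    """LKp at a world: p is known in some accessible world."""
    return any(p in known_in[w] for w in accessible[world])

# q is unknown in the actual world, yet knowable (known in w1):
assert not K("q") and LK("q")
```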

One other point that hasn't been brought up yet in the context of tractability: take a look at premise D:

~P ⊢ ~LP

and think about what this means when you substitute KQ for P. (And also observe that in Fitch's proof, this is exactly how premise D is in fact used.)
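Spelling out that substitution in the notation used above:

```
(D)   ⊢ ¬P   ⇒   ⊢ ¬LP        (if ¬P is a theorem, P is impossible)

substituting P := KQ:

      ⊢ ¬KQ  ⇒   ⊢ ¬LKQ       (if "Q is unknown" is a theorem,
                               then Q is unknowable)
```

With Q = p & ¬Kp, this is exactly the step from line 6 to line 7 of the proof discussed above.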

@Ron: (1/3)

> Ron: Since Luke is the one defining the domain, he gets to define "tractable" however he likes.

> Peter Donis: I was assuming that by "tractable" Luke was trying to capture some predicate that actually has a bearing on reality.

> Ron: I don't think you can assume such things. Surely it cannot have escaped your notice that Luke's thoughts are often detached from reality.

The person whose thoughts are currently detached from reality are yours, Ron. Here's what I wrote in the previous thread:

> Luke: Your objection to WP: Fitch's Paradox of Knowability § Proof was that (C) w → LKw, aka "all truths are knowable", is false. (I changed the variable name for clarity, below.) Some truths are uncomputable, after all. You mentioned Chaitin's constant. Other truths are computable but not tractable; it takes time and energy to know, and you believe the universe to be finite. But all of this is only relevant if the w used for (C) falls into either class. If in fact any w used is tractable (say, can be computed within the next 200 years, barring civilization collapse), then I don't see how it is a problem.

To make it blindingly obvious:

> Luke: is tractable (say, can be computed within the next 200 years, barring civilization collapse)

That appears to "[have] a bearing on reality"; do you disagree?

@Ron: (2/3)

> Luke: Not being schooled in formal logic, the way I think about this is via the difference between complex numbers and reals: » if I have an arbitrary polynomial and presuppose that I know all its roots, I cannot simultaneously insist that the domain is only the reals «. However, if I somehow guarantee that all the roots are real, it is consistent to restrict the domain to the reals—even though the reals are not algebraically closed.

> Ron: I don't know what you mean by "presuppose that I know all its roots" but there is absolutely nothing that stops you from restricting the domain of a polynomial to anything you want. The only thing that changes when you choose a domain that is not algebraically closed is that you lose the nice result that a polynomial of degree n has n roots. If you restrict to the reals, that result simply changes to be <= n roots.

> Luke: I meant presuppose that for polynomial of degree N, I know N roots in my chosen domain D. If I say that D = ℝ, then I have a potential contradiction without somehow ensuring that all of those roots are in ℝ.

> Ron: That still makes no sense. You originally said:

>

> > if I have an arbitrary polynomial and presuppose that I know all its roots, I cannot simultaneously insist that the domain is only the reals

>

> But of course you can. Consider x^2 + 1 = 0. It has two complex roots, i and -i. If I want to restrict the domain to the reals then it has no roots.

The ambiguity was this:

1. "all its roots [in all domains where roots can be found]"

2. "all its roots [in some particular domain]"

Given that the very analogy turned on a possible mismatch of domains, 2. should have been ruled out. But in case that wasn't absolutely clear, there was my use of "algebraically closed". From the Wikipedia article I linked:

>> In abstract algebra, an algebraically closed field F contains a root for every non-constant polynomial in F[x], the ring of polynomials in the variable x with coefficients in F. (WP: Algebraically closed field)

But perhaps you just completely failed to understand the analogy and did your standard thing of ignoring anything I've said which is contradictory with the stupid interpretation you managed to wrangle from what I wrote. Fortunately, Peter *does* understand the situation:

> Luke: So, unless somehow we can have this situation—
>
> p is tractable
>
> p & ¬Kp is intractable

> Peter Donis: Which, indeed, would be a situation in which the logical argument would fail. And I don't see what is impossible or even implausible about the condition. Certainly you have given no proof that such a situation cannot occur.

That's the potential domain mismatch. It could be that rule (C) has to reach into the domain of intractable truths. I don't see how, but Peter is right that I haven't proven it couldn't possibly do so.

@Ron: (3/3)

> One other point that hasn't been brought up yet in the context of tractability: take a look at premise D:

>

> ~P ⊢ ~LP

>

> and think about what this means when you substitute KQ for P. (And also observe that in Fitch's proof, this is exactly how premise D is in fact used.)

In The Knowability Paradox, Jonathan Kvanvig writes that differently:

□~p ⊣⊢ ~◇p

Wikipedia's English description captures the □ ("if p can be proven false without assumptions"), but its formalism does not. If we call Kvanvig's version (D′), the following becomes invalid:

suppose ¬K(p)
¬LK(p) by rule (D′)
¬K(p) → ¬LK(p)

The whole point is to have ¬Kp & LKp: "I don't know p but possibly I could know p."

> What problem? It's simply true: everything you could possibly know at a given time, you do in fact know at that time.

My understanding of the paradox is that it allows for you to not have carried out all the logical operations quite yet, and more logical operations do not require more time. And yet, once you can show that you possibly know p, you actually know p.

[Reposting to fix a typo]

@Luke:

> more logical operations do not require more time

I am genuinely at a loss how to respond to this, particularly in light of:

> The person whose thoughts are currently detached from reality are yours, Ron.

One of us is clearly detached from reality. It might be me. As I've said before, that is not the sort of thing that yields readily to introspection.

But AFAICT, in the world I live in, computation requires both non-zero time and non-zero energy. Maybe your world is different.

> once you can show that you possibly know p, you actually know p.

In a world where computation takes zero time, this is in fact true. (But in a world like that, the whole notion of tractability doesn't really make a whole lot of sense, at least not to me.)

@Ron:

> > more logical operations do not require more time

>

> I am genuinely at a loss how to respond to this

That's surprising, given that there is no explicit notion of time during steps 1. – 11. of WP: Fitch's paradox of knowability § Proof. And I was answering *from within* that modal logic, where there is no time. You yourself admitted this in the OP: "time-free model". Ostensibly, what time lets you do is introduce new propositions at specified points. A fun example of this is the muddy children puzzle.

> > once you can show that you possibly know p, you actually know p.
>
> In a world where computation takes zero time, this is in fact true.

I don't see why it is necessarily true. If you haven't yet churned through all possible logical operations on what you do know, it is not obvious that once you show that possibly you know something, actually you know it. It seems eminently reasonable to suspect that you'd have to do more work to go from "possibly I know p" to "actually I know p".

> (But in a world like that, the whole notion of tractability doesn't really make a whole lot of sense, at least not to me.)

You're conflating (i) applicability of the paradox to our reality and (ii) internals of the paradox. If the paradox requires intractable/uncomputable truths to yield its result, I agree that it would be uninteresting. But if we start with some eminently knowable p in step 1. of WP: Fitch's paradox of knowability § Proof, I don't see how one could possibly need to rely on "the last ten digits of the next prime number after Graham's number" for the proof to go through. More formally:

(O) LKp → Kp

does not depend on

(C) ∀(p)(p → LKp)

Instead, it depends on the more limited:

(C′) ∀(p & LKp)((p & ¬Kp) → LK(p & ¬Kp))

We just ignore all p where ¬LKp. Unless, that is, p could be tractable (knowable) while p & ¬Kp is not.

> I don't see why it is necessarily true. If you haven't yet churned...

Because in a world where computation takes zero time, it is not possible to have "not yet churned". In a world where computation takes zero time, everything computable can be computed in zero time, and so everything computable must have actually been computed in zero time. Any process that requires more than zero time cannot be computation in a world where computation requires zero time. Because of universality, it is not even clear what "a process that requires non-zero time" could even *be* in such a world. It is not even clear that time itself is a meaningful concept in a world where computation requires zero time. The whole notion is so far removed from physical reality that you might as well be talking about angels on the head of a pin.

@Ron:

> Because in a world where computation takes zero time, it is not possible to have "not yet churned".

The Turing formalism has zero notion of time. It does have the notion of an operation, and one can count operations. But it has absolutely no idea what a second is. Shockingly, without having a notion of time in the formalism, computer scientists can still talk about some operation being "harder" than another—that is, taking more operations. They say this *within* the formalism. Even though there is no time in the formalism, computer scientists aren't mentally forced to think that everything happens all at once and thus **you cannot talk about intermediate states**.

Just like **there is no problem talking about intermediate results** in the Turing formalism without including the concept of 'time', we can talk about intermediate results in timeless modal logic. In particular, we can talk about performing enough operations to LKp, but not enough to Kp. If you must, you can assign some finite amount of time to each operation, but that would only confuse things, because the modal logic used in Fitch's paradox, as you quite rightly said, is timeless.

It seems like you're having trouble switching your thinking from "the real world" to "within the formalism". But this confuses me, because I should think you would be excellent at it. There is the possibility that you are merely fucking with me, but I almost always try to exclude that *a priori*.

> The whole notion is so far removed from physical reality that you might as well be talking about angels on the head of a pin.

Then I would say the same about Turing machines, because they cannot understand time, but instead only number of operations.
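The operation-counting point can be made concrete with a toy simulator (an illustrative sketch; the machine, rule table, and tape below are invented for the example, not taken from anything above):

```python
def run_tm(rules, tape, state="start", blank="_", max_ops=10_000):
    """Run a one-tape Turing machine until it reaches the 'halt' state.

    Returns the final tape contents and the number of operations taken.
    The formalism counts operations; nothing here mentions seconds.
    """
    cells = dict(enumerate(tape))
    pos, ops = 0, 0
    while state != "halt" and ops < max_ops:
        symbol = cells.get(pos, blank)
        write, move, state = rules[(state, symbol)]  # one operation
        cells[pos] = write
        pos += 1 if move == "R" else -1
        ops += 1
    out = "".join(cells[i] for i in sorted(cells)).strip(blank)
    return out, ops

# A tiny machine that flips every bit, halting at the first blank.
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

final_tape, ops = run_tm(rules, "0110")
print(final_tape, ops)  # every intermediate op count (1, 2, ...) names a meaningful state
```

The `ops` counter is defined entirely within the formalism; attaching seconds to each operation is a separate modeling step.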

@Luke:

> you cannot talk about intermediate states

> there is no problem talking about intermediate results

Which is it?

> Then I would say the same about Turing machines, because they cannot understand time, but instead only number of operations.

Yes, of course. Turing machines are a mathematical model. But if you want to apply that model to the real world you have to introduce time because in this world, allowing time to pass is the only way to obtain distinct states. That is the very essence of time.

(Aside, something I meant to point out earlier but forgot: in a world where computation takes zero time, the halting problem is solvable: start the computation, wait one picosecond. If the computation has not halted by then, it never will. This would be an even bigger win than proving that P=NP. You could prove all mathematical theorems in an arbitrarily short time. That is pretty much the essence of Fitch's result.)

@Ron:

You excised just the bold:

> Luke: Even though there is no time in the formalism, computer scientists aren't mentally forced to think that everything happens all at once and thus **you cannot talk about intermediate states**.
>
> Just like **there is no problem talking about intermediate results** in the Turing formalism without including the concept of 'time', we can talk about intermediate results in timeless modal logic.

> Ron: Which is it?

Ummm, what? I'm saying that just like you can talk about intermediate states within the Turing machine formalism, you can talk about intermediate states within the modal logic used in Fitch's paradox. Were you seriously confused between:

> Luke′: Even though there is no time in the formalism, computer scientists aren't mentally forced to think that { everything happens all at once and thus you cannot talk about intermediate states }.

and

> Luke″: Even though there is no time in the formalism, computer scientists aren't mentally forced to think that { everything happens all at once } and thus { you cannot talk about intermediate states }.

? Luke″ makes no sense: if you aren't forced to think that { everything happens all at once }, then { you cannot talk about intermediate states } is obviously wrong. Will you be honest and tell me whether you're just fucking with me, Ron? I don't see how toying with people is at all compatible with what you've professed to care about, but this seems like such an obvious, creationist-style quote mine.

> But » if you want to apply that model to the real world you have to introduce time « because in this world, allowing time to pass is the only way to obtain distinct states.

Not all rational justifications require any appeal to time whatsoever, so » this « seems patently false. More importantly, what I was objecting to was this:

> Ron: What problem? It's simply true: everything you could possibly know at a given time, you do in fact know at that time.

This is an utter mismatch to the modal logic used in Fitch's paradox. There, it is surprising (at least to some people, including yours truly) that once you've performed enough logical operations to LKp, you've also performed enough logical operations to Kp. You appear to be jumping between the formalism and our reality in a willy-nilly fashion.

> … in a world where computation takes zero time …

You're introducing an infinity which seems utterly and entirely irrelevant to Fitch's paradox. It's just a giant red herring.

Within the formalism, there is no time. Time just doesn't exist there. There is no such thing as "all at once", because that is a temporal concept.

@Luke:

> Were you seriously confused

Yes. I parsed it as:

(computer scientists aren't mentally forced to think that everything happens all at once) and thus you cannot talk about intermediate states

Because you said:

> The Turing formalism has zero notion of time.

And that is not true. Turing machine states are ordered sequences. Furthermore, the TM formalism is not an abstract one. It is explicitly a model of an idealized but nonetheless *physical* machine with a *physical* tape that actually moves in space, which (qua Einstein) requires time.

Modal logic has none of that. Its model is static.

@Ron:

> Yes. I parsed it as:

>

> > (computer scientists aren't mentally forced to think that everything happens all at once) and thus you cannot talk about intermediate states

Yes and *that makes no sense*. Whenever I read something someone else says and it makes no sense to me, I first try and see if there's a nearby, plausible reading which does make sense. And as I showed, there was such a reading. Are you either unwilling or unable to do this thing?

> Because you said:

>

> > The Turing formalism has zero notion of time.

>

> And that is not true. Turing machine states are ordered sequences.

They still don't have a notion of *time*. Timeless modal logic also has "ordered sequences". It doesn't have a notion of *time*. There is no obvious mapping between # of operations and # of seconds.

> Furthermore, the TM formalism is not an abstract one. It is explicitly a model of an idealized but nonetheless *physical* machine with a *physical* tape that actually moves in space, which (qua Einstein) requires time.

I have no idea what you mean by 'abstract', but an infinite tape doesn't seem 'realistic' to me. As to the modal logic used in Fitch's paradox, are you going to tell me that it is not a sufficiently good model for *any* human reasoning that people go about in their day-to-day lives?

Another tack is this: any actual execution of the modal logic used in Fitch's paradox also "requires time". And yet, it is still quite right to say that the formalism itself does not have any concept of time. Nevertheless, we can talk about # of operations and compare different #'s of operations—just like you can do with Turing machines.

> Yes and that makes no sense.

Indeed, but you say a lot of things that make no sense, so this was entirely consistent with your general behavior. For example:

> any actual execution of the modal logic used in Fitch's paradox also "requires time"

There is no such thing as an "execution of a modal logic."

Also:

> an infinite tape doesn't seem 'realistic' to me

Really? Why not? Isn't God infinite? Why couldn't He make an infinite tape?

BTW, the tape isn't actually infinite, just unbounded. Not the same thing. In fact, the very first sentence of Turing's paper is "The "computable" numbers may be described briefly as the real numbers whose expressions as a decimal are calculable by FINITE means." (Emphasis added.) If you're going to discuss Turing, you really ought to read him.

@Ron:

> Indeed, but you say a lot of things that make no sense

If you consider the following a good example, that says a lot about whether your judgment on this matter is in the slightest bit trustworthy.

> There is no such thing as an "execution of a modal logic."

First, I actually wrote "execution of the modal logic used in Fitch's paradox". Second, from reading what you wrote, I think a normal person would disbelieve that Prolog exists. Unless there's some logical or physical reason that the modal logic employed in Fitch's paradox cannot be put into a programming language? Maybe when Prolog "executes", it's actually a homunculus behind the scenes? Is Linh Anh Nguyen's The Modal Logic Programming System MProlog just nonsense?

> Also:

>

> > an infinite tape doesn't seem 'realistic' to me

>

> Really? Why not? Isn't God infinite? Why couldn't He make an infinite tape?

When you have to bring in God to make something 'realistic', you know you have a problem. (You've destroyed the word 'realistic'—it means anything, now.) I don't recall ever doing something like this to you (in fact I say that understandings of God must have correlates in this reality to be trustworthy at all).

> BTW, the tape isn't actually infinite, just unbounded. Not the same thing.

Correction noted. (When I think "infinite tape", I just think "you can always write more in either direction". So it means the same to me. But I'll accept that it means something different to you, and use your preferred terminology.) But "unbounded" still violates the Bekenstein bound. Given that you told me you think the universe is finite, it is unrealistic *to Ron* for Turing machines to be able to violate this finitude with a greater finitude.

And this is actually just a distraction from the fact that the modal logic used in Fitch's paradox is applicable to situations which don't require indexing by agent or time. And it is obvious to the most casual observer that you can count the number of logical steps taken in modal logic just like you can count the number of operations executed by a Turing machine. There is therefore absolutely no problem saying that: it is surprising that the # of steps to yield LKp is the same as the # of steps to yield Kp†.

† If you really want to be nit-picky, it takes one additional step: application of Fitch's result that LKp → Kp. Seeing as this is obviously not additional work done toward any intuitive notion of what it takes to Kp, it does not count as "more work to Kp". I think it is rather intuitive to expect that one would need to do more non-Fitch's-paradox work to Kp than to merely LKp.

@Luke:

> I don't recall ever doing something like this to you

From private correspondence, March 3 this year:

"It's a pretty trivial deduction that if we want God and God is infinite, we must want infinitely. If we do not want infinitely, we do not want God. But the desire for infinity cannot originate in us (because we are finite); it can only occur as a call from God which we can heed or ignore."

> When I think "infinite tape", I just think "you can always write more in either direction".

Well, yes, that is what it means. But at any given point in the computation, the size of the part of the tape that has actually been used is finite. So you don't need an infinite tape. You just need an unbounded capacity to manufacture more tape as needed. Economic constraints come into play long before fundamental physical ones.

> So it means the same to me.

Well, this is part of your problem. Unbounded and infinite are not the same, and if you proceed on the assumption that they are then you will draw false conclusions. If you want people to take you seriously, you can't employ a private vocabulary. You have to use words -- especially technical words -- to mean what they actually mean.

Furthermore, the distinction between unbounded and infinite is *central* to the theory of computation. It corresponds to the difference between halting and non-halting, which is the foundation of the entire theory. So this is not a trivial mistake. It is a fundamental mistake, and everything that follows from conflating the two is sheer nonsense.

> Unless there's some logical or physical reason that the modal logic employed in Fitch's paradox cannot be put into a programming language?

As I said earlier, there is a correspondence between logic and computation. For any proof in any formal system there exists a corresponding program that halts IFF the proof is valid, and for every program that halts there is a corresponding proof in some formal system that it halts. (Note that the converse is obviously not true because if it were you could solve the halting problem.)

This does NOT mean that there is a correspondence between the inherently sequential nature of computation and proofs. There isn't. The structure of proofs is fundamentally different. That is what makes this correspondence a non-trivial result, and why Alan Turing is famous for having proved (part of) it.

In particular, the "number of steps" required to conclude a computation is a well-defined concept. It is in fact the very foundation of computational complexity theory, which is one of the most important and interesting branches of mathematics ever discovered. (The question of P=?NP is part of it.) The "number of steps" in a proof is much harder to define in a coherent way, which is why proof complexity theory is not nearly as well developed, and to the extent that it is developed, it relies almost entirely on casting proofs as computations.
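As a toy contrast (invented for illustration, not from the thread): the step count of a computation is fixed by the machine's transitions, while the "number of steps" of a proof depends on how the proof is written down:

```python
def collatz_steps(n):
    """Count machine-like transitions until the Collatz iteration reaches 1."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

# A Hilbert-style proof is a static list of lines; "length" is one possible
# cost measure, but the same theorem admits presentations of many lengths.
proof_of_q = [
    ("p", "premise"),
    ("p -> q", "premise"),
    ("q", "modus ponens on lines 1 and 2"),
]

print(collatz_steps(6), len(proof_of_q))  # a definite 8 vs. a representation-relative 3
```

The left number is forced by the dynamics; the right number would change under a different proof system or presentation, which is the asymmetry described above.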

@Luke:

> "unbounded" still violates the Bekenstein bound.

Only if you require that the unbounded tape has to fit inside a fixed finite volume no matter how much tape is occupied. But there's no reason to require that.

@Ron: (1/3)

> Ron: Furthermore, the TM formalism is not an abstract one. It is explicitly a model of an idealized but nonetheless *physical* machine with a *physical* tape that actually moves in space, which (qua Einstein) requires time.

>

> Modal logic has none of that. Its model is static.

> Luke: I have no idea what you mean by 'abstract', but an infinite tape doesn't seem 'realistic' to me. As to the modal logic used in Fitch's paradox, are you going to tell me that it is not a sufficiently good model for any human reasoning that people go about in their day-to-day lives?

> Ron: Also:

>

> > Luke: an infinite tape doesn't seem 'realistic' to me

>

> Really? Why not? Isn't God infinite? Why couldn't He make an infinite tape?

> Luke: When you have to bring in God to make something 'realistic', you know you have a problem. (You've destroyed the word 'realistic'—it means anything, now.) **I don't recall ever doing something like this to you** (in fact I say that understandings of God must have correlates in this reality to be trustworthy at all).

> Ron: From private correspondence, March 3 this year:

>

> "It's a pretty trivial deduction that if we want God and God is infinite, we must want infinitely. If we do not want infinitely, we do not want God. But the desire for infinity cannot originate in us (because we are finite); it can only occur as a call from God which we can heed or ignore."

Erm, that isn't "realistic", as [I suspect] *you* would define the term "realistic". It is you who wanted things to be sufficiently "*physical*", which I took to be synonymous with "realistic". You were trying to say that the TM formalism is somehow closer to reality than the modal logic used in Fitch's paradox. If you're allowed to call on God to make an [actually, not potentially] infinite tape, then I'm allowed to call on God to make software which can execute the modal logic used in Fitch's paradox.

@Ron: (2/3)

> Ron: BTW, the tape isn't actually infinite, just unbounded. Not the same thing.

> Luke: Correction noted. (When I think "infinite tape", I just think "you can always write more in either direction". **So it means the same to me.** But I'll accept that it means something different to you, and use your preferred terminology.)

> Ron: Well, this is part of your problem. Unbounded and infinite are not the same, and if you proceed on the assumption that they are then you will draw false conclusions. If you want people to take you seriously, you can't employ a private vocabulary. You have to use words -- especially technical words -- to mean what they actually mean.

I suggest consulting Wikipedia:

>> In the philosophy of mathematics, the abstraction of actual infinity involves the acceptance (if the axiom of infinity is included) of infinite entities, such as the set of all natural numbers or an infinite sequence of rational numbers, as given, actual, completed objects. This is contrasted with potential infinity, in which a non-terminating process (such as "add 1 to the previous number") produces a sequence with no last element, and each individual result is finite and is achieved in a finite number of steps. (WP: Actual infinity)

I was speaking in terms of a potential infinity; you saw it as only an actual infinity. The primary way I see 'infinity' talked about is in terms of potential infinity, since the majority attitude seems to be that there are no actual infinities. That being said, I'm happy to use 'unbounded' to be less ambiguous.

Addendum 1: With the possible exception of Ted Nelson, of all the humans I have ever encountered, you have the most trouble with even the slightest bit of ambiguity. It is repeatedly jarring; pretty much every other technically inclined person I've encountered has not required so much precision as you, in conversations like these. They can turn on pedantic mode, but they can also turn it off. You don't seem to be able to turn it off. It is almost as if a thing has to be compilable by your brain with no warnings or errors, or it "makes no sense". If this is how you treat all people, then I can see why you've had so little success in e.g. teaching others EE&R. I am trying to be properly technically correct with you, but your demands are extreme and so it is very hard. Instead of appreciating me saying "Correction noted." and moving on, you have to rub things in. That just slows things down further.

Addendum 2: You're happy to not use technical words according to their meanings when it comes to theology; see for example your use of 'total depravity'.

> Furthermore, the distinction between unbounded and infinite is *central* to the theory of computation.

The distinction between the tape being 'infinite' vs. 'unbounded' fades away if you require the number of operations to be finite. A finite number of operations means you cannot have an actual infinity—only a potential one.

@Ron: (3/3)

> Ron: There is no such thing as an "execution of a modal logic."

> Luke: First, I actually wrote "execution of the modal logic used in Fitch's paradox". Second, from reading what you wrote, I think a normal person would disbelieve that Prolog exists. **Unless there's some logical or physical reason that the modal logic employed in Fitch's paradox cannot be put into a programming language?** Maybe when Prolog "executes", it's actually a homunculus behind the scenes? Is Linh Anh Nguyen's The Modal Logic Programming System MProlog just nonsense?

> Ron: As I said earlier, there is a correspondence between logic and computation.

You seem to be moving the goalposts, from inability to execute some particular modal logic to the fact that we aren't [yet] very good at getting computers to do proofs. I'm getting the distinct sense of you being the tortoise, in this conversation.

> This does NOT mean that there is a correspondence between the inherently sequential nature of computation and proofs.

All I need is that when a proof has been computationally verified, you can compute something that stands for "amount of work required". Are you saying that is impossible? Note that I don't need to be able to compute Kolmogorov complexity for purposes of my argument. I am merely comparing what it takes to LKp and what it takes to Kp. Is there a reason that something like the 1989 paper On the number of steps in proofs is insufficient for the argument I'm making?

There are 11 steps at WP: Fitch's Paradox of knowability § Proof. I will repeat what I said before:

> Luke: It is just *wrong* to strengthen a person's argument, find the strengthened version to be false, and thus conclude that the original argument is false.

@Peter Donis:

> Luke: But **"unbounded" still violates the Bekenstein bound.** Given that you told me you think the universe is finite, it is unrealistic *to Ron* for Turing machines to be able to violate this finitude with a greater finitude.

> Peter Donis: Only if you require that the unbounded tape has to fit inside a fixed finite volume no matter how much tape is occupied. But there's no reason to require that.

Hence why I included the second sentence, which you snipped. You can take up your "no reason to require that" with Ron.

> I was speaking in terms of a potential infinity; you saw it as only an actual infinity.

No. An unbounded quantity is *finite*. It just doesn't have an upper bound. In other words, I can't tell you ahead of time how many steps a TM will take. But *if* it halts, *then* I can tell you how many it actually took.

> You seem to be moving the goalposts, from inability to execute some particular modal logic to the fact that we aren't [yet] very good at getting computers to do proofs.

No. I'm trying to point out that there is a fundamental difference between a *proof*, which is a static entity, and the *process* of *producing* (or verifying) a proof, which is not. This fundamental difference exists and is salient *despite* the fact that there is a relationship between the static entities and the dynamic processes that produce them. It's like the difference between drawing a circle and the circle itself. They are related, but they are not the same thing. "How long does it take to draw a circle?" is a meaningful question. "How long (in terms of time) is a circle?" is not.

> I am merely comparing what it takes to LKp and what it takes to Kp

In what kind of world? A static one, or a dynamic one?

The only way that you can have LKp and ~Kp is in a world where a state can evolve from one where ~KP is true to one where KP is true, i.e. in a world where knowledge can be acquired. That happens to be a characteristic of the world we live in, but it is not a characteristic of the models of the modal logic that Fitch uses. In standard modal logic [1], either KP is true or ~KP is true, and whichever is true is true "for all time" (whatever that could mean in a world without time). That is really all there is to it.

---

[1] This is actually a "feature" of nearly all logics, not just modal ones. This property is called *monotonicity*, and it is one of the reasons that logic turns out to *not* be a good model of human reasoning or our physical world. If you want to use logic to model the interesting parts of reality then at the very least you need a non-monotonic logic.

@Ron:

> > I was speaking in terms of a potential infinity; you saw it as only an actual infinity.

> No. An unbounded quantity is *finite*. It just doesn't have an upper bound. In other words, I can't tell you ahead of time how many steps a TM will take. But *if* it halts, *then* I can tell you how many it actually took.

If you're going to thumb your nose at the Wikipedia article I excerpted, that is of course your prerogative; I will remind you that the conversation actually went this way:

> Ron: BTW, the tape isn't actually infinite, just unbounded. Not the same thing.

> Luke: Correction noted.

Apparently, that wasn't enough. It might be worth asking yourself why. (I say there is a symmetry here between me saying that I've seen 'infinity' used in ways other than what you would prefer, and your "If I were to channel myself a Southern Baptist …". Perhaps you object with a variation on your "highly technical discussion" comment.) But don't go telling stories that I took forever to synchronize on terminology. It took me virtually no time at all.

> I'm trying to point out that there is a fundamental difference between a *proof*, which is a static entity, and the *process* of *producing* (or verifying) a proof, which is not.

Haven't we always been talking about the process of producing/verifying? Given that producing can result in a lot of dead ends, perhaps it's best to talk of verifying.

> > I am merely comparing what it takes to LKp and what it takes to Kp

> In what kind of world? A static one, or a dynamic one?

>

> The only way that you can have LKp and ~Kp is in a world where a state can evolve from one where ~KP is true to one where KP is true, i.e. in a world where knowledge can be acquired.

I don't see why I am disallowed from saying at step 5. of WP: Fitch's paradox of knowability § proof that I do not yet know that p → Kp. I wonder if you've been relying on the following rule:

(E) ¬Kp → K¬Kp

As far as I know, that is not entailed by the modal logic Fitch uses. The converse is implied by rule (A). I can definitely see that being able to prove K¬Kp would make it impossible to later prove Kp in any logic I'm acquainted with.

As to your claim that adding time solves the problem: is that provably the case? Suppose that p is not known at time t but known at time t+1. Can I prove LKp at time t? More formally:

K¬K_t(p)
K_t+1(p)
LK_t(p) ?

Can I prove LK_t(p), *at time t*? Or do I need to make reference to K_t+1(p)? There's also an ambiguity between "possibly I now know p but can't yet prove it" and "possibly I will know p but can prove I don't currently know p".

P.S. I just love that the glyph locations of superscript letters and numbers don't match in multiple fonts I've tried, including the one used on your blog: LKₜ₊₁. This is one reason we can't have nice things.

> I don't see why I am disallowed from saying at step 5. of WP: Fitch's paradox of knowability § proof that I do not yet know that p → Kp.

Why do you think you are disallowed from saying that? You can say it all you want. It might even be true, particularly if by "I" you mean "Luke" and not "some arbitrary person".

But that has nothing to do with Fitch's paradox, other than being a concrete example of something you could possibly know that you did not yet know. But such examples are not exactly scarce.

> As to your claim that adding time solves the problem: is that provably the case?

I didn't say that adding time solved the problem. I said adding time was *necessary*. I didn't say it was sufficient. I'm pretty sure it is sufficient, but I don't know. It's possible that someone can produce something like Fitch's paradox in a logic that properly models knowledge acquisition in the real world. I doubt it, but stranger things have happened. If someone manages to do it, I'll probably want to take a look at it. I think an attempt to prove that it's not possible would be more fruitful, but I don't care even remotely enough about it to undertake such an effort. The observation that standard models of modal logic are static certainly resolves the original paradox to *my* satisfaction. YMMV.
