This is part of my series on the scientific method, but it's a bit of a tangent, an interlude if you will, so I'm not giving it a number. As you will see, that will turn out to be metaphorically significant. I'm writing this because my muse Publius raised the problem of infinity in comments on earlier installments in this series, and so I thought it would be worth discussing why infinities are problematic for mathematics but not for science.
(BTW, the title of this post is an allusion to something I wrote five years ago, which itself was an allusion to something I wrote fifteen years ago. I guess I'm just good at finding trouble.)
There is an old joke that goes something like this: one caveman says to another, "I'll bet you that I can name a bigger number than you." The second caveman responds, "You're on. What's your number?" The first caveman says triumphantly, "Four!" The second caveman thinks for a while and finally says, "You win."
The joke is not just that the second caveman couldn't count to five, but that it was a silly game to begin with, because the second player can (it would seem) always win by simply taking the number that the first player names and adding one. It seems obvious that you should be able to do that no matter what the first player says, because otherwise there would have to exist a counterexample, a number to which it is not possible to add 1, and obviously there is no such counterexample, right?
Well, sort of. There are systems of arithmetic in which four actually is the biggest number. In modulo-5 arithmetic, for example, you can add 1 to 4, but the result wraps around to zero, so there really is no number bigger than four.
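For the skeptical, here is a quick sketch in Python (just illustrating the arithmetic; `%` is Python's modulo operator):

```python
# In modulo-5 arithmetic the only numbers are 0 through 4, and adding 1 wraps around.
for n in range(5):
    print(n, "+ 1 =", (n + 1) % 5)

# Adding 1 to 4 gives 0, so no modulo-5 number is bigger than 4.
assert (4 + 1) % 5 == 0
```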
But this is obviously silly, notwithstanding that modular arithmetic really is a legitimate mathematical thing with lots of practical applications. There is obviously a number greater than four, namely five, the very number we had to deploy to describe the system in which there is no number greater than four. In fact, to describe a system of modular arithmetic whose biggest number is N we have to use a number one bigger than N. So this argument seems self-defeating.
There is another way to construct a system of arithmetic with a biggest number, and that is to simply stipulate that there is a biggest number, and that adding one to this number is just not allowed. Again, this might feel like cheating, but if we are using numbers to count actual physical objects, then there is already a smallest number: zero. So why could there not be a biggest one?
But this still feels like cheating, because if we can name the number that we want to serve as the biggest number, we can obviously (it would seem) name a number that is one more than that. So unlike zero, which is kind of a "natural" choice for a smallest number, there is no apparent "natural" choice for a biggest number. We can try playing tricks like "one more than the biggest number that we can actually name", but that is simply a fun paradox, not an actual number.
So it would appear that logic leaves us no choice but to accept that there is no biggest number, and so we have to somehow deal with the apparently inescapable fact that there are an infinite number of numbers. But that leads to problems of its own.
Imagine that you have three buckets, each of which is capable of holding an infinite number of balls. Bucket #1 starts out full of balls while the other two are empty. You now proceed to execute the following procedure:
1. Take three balls out of bucket 1 and put them in bucket 2.
2. Take one ball out of bucket 2 and put it in bucket 3.
3. Repeat until bucket 1 is empty.
That third step should make you a little suspicious. I stipulated at the outset that bucket 1 starts out with an infinite number of balls, and so if you try to empty it three balls at a time it will never be empty. But we can fix that by speeding up the process: every time you go through the loop you have to finish it in half the time you took on the previous step. That will let you perform an infinite number of iterations in a finite amount of time. Again, you need to suspend disbelief a little to swallow the idea of doing every step twice as fast as the previous one, but you needed to do that when I asked you to imagine a bucket that contained an infinite number of balls in the first place, so having to deploy your imagination is already part of the game.
The puzzle is: when you finish, how many balls are in bucket #2?
The "obvious" answer is that there are an infinite number of balls in bucket #2. For every ball that gets removed from B2 and put in B3 there are two balls left behind in B2. So after every step there must be twice as many balls in B2 as B3. At the end there are an infinite number of balls in B3, so there must be even more -- twice as many in fact -- left behind in B2.
And this is our first hint of trouble, because there is no such thing as "twice infinity". If you multiply the number of counting numbers by 2 -- or any other finite number -- the result is equal to (the technical term is "can be put in one-to-one correspondence with") the number of counting numbers.
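A small sketch of what "one-to-one correspondence" means here, pairing each counting number n with 2n over a finite prefix (the pairing itself continues forever; `N` is just an arbitrary cutoff for illustration):

```python
# Pair every counting number n with the even number 2n. Every even number
# gets hit exactly once, so the even numbers are in one-to-one correspondence
# with all the counting numbers, even though they seem like only "half" of them.
N = 10  # any finite prefix illustrates the pairing; it continues forever
pairs = {n: 2 * n for n in range(1, N + 1)}
evens = set(pairs.values())
assert len(evens) == N                            # no two n's collide
assert evens == {2 * n for n in range(1, N + 1)}  # every even number <= 2N is covered
```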
But now imagine that as we take the balls out of B1 and put them in B2 we mark them to keep track of the order in which we processed them. The first ball gets numbered 1, the second one gets numbered 2, and so on. Now when we pull them out in step 2, we pull them out in order: ball number 1 gets pulled out first, ball #2 gets pulled next, and so on. If we do it this way, then bucket 2 will be EMPTY at the end because every ball will have been pulled out at some point along the way! (In fact, we can modify the procedure to leave any number of balls in bucket 2 that we want. Details are left as an exercise.)
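We can't run the infinite procedure, but a finite simulation (a sketch, with made-up names `b2`/`b3` for the buckets) shows both facts at once: after any finite number of steps bucket 2 holds twice as many balls as bucket 3, and yet every individual ball leaves bucket 2 at some definite step:

```python
# Simulate the first N steps of the labeled procedure. At step k we move
# balls 3k-2, 3k-1, 3k from bucket 1 into bucket 2, then move the
# lowest-numbered ball in bucket 2 (which is always ball k) into bucket 3.
N = 1000
b2, b3 = set(), set()
for k in range(1, N + 1):
    b2.update({3 * k - 2, 3 * k - 1, 3 * k})
    b2.remove(min(b2))  # the lowest-numbered ball in b2 at step k is ball k
    b3.add(k)

# After N steps bucket 2 holds balls N+1 .. 3N: twice as many as bucket 3...
assert b2 == set(range(N + 1, 3 * N + 1))
assert len(b2) == 2 * len(b3)
# ...and yet ball k left bucket 2 at step k, so every ball eventually leaves.
```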
So clearly things get weird when we start to think about infinity. But actually, when dealing with large numbers, things get weird long before we get anywhere close to infinity.
There is a famously large number called a googol (the name of the Google search engine is a play on this). It is a 1 followed by 100 zeros, i.e.:
10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
Things are already getting a little unwieldy here. Is that really 100 zeros? Did you count them? Are you sure you didn't miss one or count one twice? To make things a little more manageable this number is generally written using an exponent: 10^100. But notice that we had to pay a price for shortening a googol this way: we lost the ability to add one! In order to write down the result of adding 1 to a googol we need to write out the whole thing: a 1, then 99 zeros, then a final 1:
10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001
One could argue for writing 10^100+1 instead, but that doesn't work in general. Consider adding 1 to:
3276520609964131756207215092068230686229472208995125701975064355186510619201918716896974491640125539
which is less than a third of a googol.
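Python's arbitrary-precision integers make these claims easy to check directly (the long literal below is the hundred-digit number quoted above):

```python
# Python integers have arbitrary precision, so we can verify the claims above.
googol = 10 ** 100
assert len(str(googol)) == 101           # a 1 followed by exactly 100 zeros
assert str(googol).count("0") == 100
assert str(googol + 1).endswith("1")     # the second long number above

# The hundred-digit number quoted above really is less than a third of a googol:
n = 3276520609964131756207215092068230686229472208995125701975064355186510619201918716896974491640125539
assert 3 * n < googol
```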
But a googol is not even close to the biggest number the human mind can conjure up. Next up is a googolplex, which is a 1 followed by a googol of zeros, i.e. 10^(10^100). Adding one to that without cheating and just writing 10^(10^100)+1 is completely hopeless. There are fewer than a googol elementary particles in our universe (about 10^80 in fact), so it is simply not physically possible to write out all of the digits of a googolplex. Even if we allowed ourselves to re-use material we couldn't do it. Our universe is only about 13 billion years old, which is less than 10^18 seconds. The fastest conceivable physical process operates on a scale called the Planck time, the time it takes light to travel one Planck length (a distance vastly smaller than a proton), about 10^-43 seconds. A single photon with a cycle time this short would have an energy of about 6.6 gigajoules, a little under two megawatt-hours, the energy equivalent of about one and a half tons of TNT. So even if we could build a computer that ran this fast it could not process the digits of a googolplex in the life span of the universe.
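A back-of-envelope check of those energy figures (rough constants, order-of-magnitude only; the conversion factors are standard: 1 MWh = 3.6×10^9 J, 1 ton of TNT ≈ 4.184×10^9 J):

```python
# Rough check of the energy figures above (order-of-magnitude constants only).
h = 6.626e-34            # Planck's constant, in joule-seconds
t = 1e-43                # the post's round figure for the Planck time, in seconds
E = h / t                # energy of a photon with this cycle time, in joules

assert 6.5e9 < E < 6.7e9          # about 6.6 gigajoules
assert 1.5 < E / 3.6e9 < 2.0      # a little under two megawatt-hours
assert 1.4 < E / 4.184e9 < 1.7    # about one and a half tons of TNT
```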
An obvious pattern emerges here: after 10^100 (a googol) and 10^(10^100) (a googolplex) comes 10^(10^(10^100)), a one followed by a googolplex of zeros. That number doesn't have a name, and there's not really any point in giving it one because this is clearly a Sisyphean task. We can carry on forever creating bigger and bigger "power towers": 10^(10^(10^100)), 10^(10^(10^(10^100))) and so on.
What happens if we start to write power towers with large numbers of terms, like 10^10^10^10... repeated, say, 1000 times? To keep those from getting out of hand we have to invent yet another new notation. Designing such a notation gets deep into the mathematical weeds which I want to steer clear of, so I'm just going to adopt (a minor variant of) something invented by Donald Knuth called up-arrow notation: A↑B means A^(A^(... <== B times. So 10↑5 means 10^(10^(10^(10^(10^10)))). 10↑10 is already too unwieldy for me to want to type out. But even at 5 iterations it is already challenging to communicate just how vast this number is. I can explicitly expand it out one level (10^(10^(10^(10^10000000000)))) but not two -- that would require about ten gigabytes of storage. Expanding it out to three levels would require more resources than exist in this universe. Four is right out.
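Here is a sketch of the power-tower function in Python, using the standard convention of a tower of b copies of a (written a↑↑b in Knuth's own notation; the post's single-arrow variant differs by one level). Only the tiniest inputs are actually computable:

```python
def tower(a, b):
    """A power tower of b copies of a: a^(a^(...^a)), i.e. a "tetration" of a by b."""
    return a if b == 1 else a ** tower(a, b - 1)

assert tower(10, 2) == 10 ** 10
assert tower(2, 4) == 65536           # 2^(2^(2^2)) = 2^16
assert tower(3, 3) == 7625597484987   # 3^(3^3) = 3^27
# tower(10, 3) is 10^(10^10), a ten-billion-digit number: do not try to print it,
# and tower(10, 4) is utterly beyond any physical computer.
```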
But we are nowhere near done stretching the limits of our imagination. Up-arrow notation can be iterated: A↑↑B means A↑(A↑(A... <== B times. A↑↑↑B means A↑↑(A↑↑(A... <== B times, and so on. And this allows us to arrive -- but just barely -- at one of the crown jewels of big numbers, the famous Graham's number, which uses a number of up-arrows so big that it itself needs to be described using up-arrow notation.
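Iterated arrows can be sketched the same way, using Knuth's standard definition (again, the post's variant differs slightly); only toy inputs terminate in practice:

```python
def arrows(a, n, b):
    """Knuth's a ↑^n b: one arrow is exponentiation; each extra arrow iterates the previous one."""
    if n == 1:
        return a ** b
    if b == 1:
        return a
    return arrows(a, n - 1, arrows(a, n, b - 1))

assert arrows(2, 1, 10) == 1024    # 2↑10  = 2^10
assert arrows(2, 2, 3) == 16       # 2↑↑3  = 2^(2^2)
assert arrows(2, 3, 3) == 65536    # 2↑↑↑3 = 2↑↑(2↑↑2) = 2↑↑4
assert arrows(3, 2, 2) == 27       # 3↑↑2  = 3^3
# Graham's number starts from 3↑↑↑↑3 and iterates the arrow count 64 times;
# even 3↑↑↑3 (a tower of 7,625,597,484,987 threes) is hopelessly uncomputable.
```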
Graham's number is already mind-bogglingly vast, but it's a piker compared to TREE(3). I'm not even going to try to explain that one. If you're interested there's a lot of material about it on the web. And even that is only just getting started. Everything I've described so far is still computable in principle (though not in practice). If we had a big enough computer and enough time then we could in principle actually write out all the digits of Graham's number or even TREE(3), or even TREE(TREE(3)) -- these patterns go on and on and on. But beyond these lie even bigger numbers which are uncomputable even in principle, though it's easy to show that they must exist. That will have to wait until I get around to discussing the halting problem. For now you will just have to take my word for it that there is a series of numbers called busy beavers which are provably larger than any number I've described so far, or even any number that can be constructed by any combination of techniques I've described so far. TREE(TREE(TREE.... iterated Graham's number of times? Busy beavers are bigger, but every one of them is nonetheless finite, exactly as far away from infinity as zero is.
And it gets even crazier than that. Let's suspend disbelief and imagine that we can actually "get to infinity" somehow. We're still not done, not by a long shot. It turns out there are different kinds of infinity. The smallest one is the number of counting numbers or, equivalently, the number of ways you can arrange a finite set of symbols into finite-length strings. If you allow your strings to grow infinitely long then the number of such strings is strictly larger than the number of finite-length strings. And you can keep playing this game forever: the collection of all sets of infinite strings is strictly larger than the collection of infinite strings themselves, and the collection of all sets of those is strictly larger still, and so on and so on.
But wait, there's more! All this is just what we get when we treat numbers as measures of quantity. Remember the balls-and-buckets puzzle above, and how things got really strange when we allowed ourselves to paint numbers on the balls so we could distinguish one ball from another? It turns out that if we think of adding one not just as an operation on quantity but on position, then we can squeeze different kinds of infinities in between the ones we just constructed above. If we think of adding one as producing "the number after" rather than just the number that is "one more than", then we can introduce a number that is "the number after all the regular counting numbers". Mathematicians call that ω (the lower-case Greek letter omega). Then we can introduce the number after that (ω+1) and the number after that (ω+2) and so on until we get to ω+ω, written ω·2. And of course we can go on from there to ω·2+1, ω·2+2... ω·3, ω·4... ω·ω (that is, ω^2), ω^3, ω^4, and so on until we reach the limit of the tower ω^(ω^(ω^...)) (ω↑↑ω in the notation above), which mathematicians call ε0. And then the whole game begins anew with ε0+1, ε0+ω, ε0+ε0...
These kinds of infinities are called transfinite ordinals, and they have two interesting features. First, the "size" of each of these numbers, that is, the number of numbers between zero and any one of them, is exactly the same as the number of regular counting numbers. If we think about numbers as referring to position, then each ordinal is "bigger" than the one before, but if we think about them as referring to quantity then each one is exactly the same "size". And second, the game of inventing new ordinals does not have a regular pattern to it. It requires creativity. The bigger you get, the harder it becomes to define what "add one" means. It gets so hard that mathematicians who figure out how to do it have the resulting numbers named after them.
The study of big numbers, both finite and infinite, is a deep, deep rabbit hole, one that ultimately leads to Turing machines and the theory of computation, which is the deepest rabbit hole of all. It's fascinating stuff, and well worth studying for its own sake. But is any of this relevant for science or is it just an intellectual curiosity?
Until and unless we develop a "theory of everything" that allows us to predict the result of any experiment, we cannot rule out the possibility that this theory will involve very large numbers (by which I mean numbers that require power towers or beyond to represent), and possibly even infinities. But so far this has not been the case. There are only two situations in our current best theories where infinities arise, and in both of those cases there is every reason to believe that this is an indication of a problem with the theory and not a reflection of anything real.
Just in case you were wondering, those two situations are singularities inside black holes and self-interactions in quantum field theory. In the case of singularities, general relativity predicts their existence while at the same time predicting that they can never be observed, because they always lie inside the event horizon of a black hole. In the case of self-interactions, it turns out that you can systematically cancel the infinities when they pop up. If you do, everything works out, and the predictions made by the theory agree with experiment to every decimal place we can measure. Why this bookkeeping trick works so well is a deep question, but work it does.
But there is another situation where a kind of infinity pops up which is not so easily dismissed.
Suppose you travel exactly one mile in a straight line, then turn exactly 90 degrees and travel another mile, again in a straight line. If you then want to return to your starting point by traveling in a straight line, how far would you have to go?
This innocuous-seeming question is the gateway to a whole series of major mathematical and philosophical problems. You can get the answer by applying Pythagoras's theorem (which, BTW, was almost certainly not discovered by Pythagoras, but that's another story): it's the square root of 2. The problem arises when you actually try to write this quantity out as a number. The answer obviously has to be somewhere between 1 and 2, so it's not a whole number. It also has to be somewhere between 14/10 and 15/10 because (14/10)^2 is 196/100, which is a little less than 2, and (15/10)^2 is 225/100, which is a little more than 2. We can keep narrowing down this range to smaller and smaller intervals, but we can never find a ratio of two integers whose square is exactly 2. The square root of 2 is an irrational number. If we wanted to write it out exactly, we would need an infinite number of digits. We haven't even gotten past the number 2 and simple multiplication, and yet somehow infinity has managed to rear its ugly head.
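The narrowing-down process can be made exact with Python's `Fraction` type: the bracketing interval around the square root of 2 shrinks forever, but no rational midpoint ever squares to exactly 2:

```python
from fractions import Fraction

# Bisect toward sqrt(2) using exact rational arithmetic. The bracketing
# interval shrinks forever, but no rational midpoint ever squares to exactly 2.
lo, hi = Fraction(14, 10), Fraction(15, 10)
for _ in range(50):
    mid = (lo + hi) / 2
    assert mid * mid != 2          # sqrt(2) is irrational: never exactly 2
    if mid * mid < 2:
        lo = mid
    else:
        hi = mid

assert lo * lo < 2 < hi * hi                  # still bracketing, ever more tightly
assert hi - lo == Fraction(1, 10) / 2 ** 50   # the interval has halved 50 times
```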
I opened this post with a joke, so I'll close with another: a farmer hired a physicist to design a machine to shear sheep. After a few weeks the physicist submitted his report. It began, "Assume a spherical sheep."
It's funny because, of course, sheep aren't spherical. But there is a metaphorical spherical sheep hiding in our problem statement. It's in the phrase, "exactly one mile" (and also "exactly 90 degrees"). In order for the mathematical irrationality of the square root of 2 to matter in the physical situation I have described it really is critical to travel exactly one mile. If you deviate from this in the slightest then the distance back to your starting point can become a rational number, representable numerically with no error in a finite number of symbols. For example, if either leg of your journey is exactly one part in 696 longer than a mile then the return trip will be exactly 985/696 miles.
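The 985/696 claim is easy to verify exactly with rational arithmetic (one leg is exactly one part in 696 longer than a mile, the other is exactly one mile):

```python
from fractions import Fraction

# One leg is exactly one part in 696 longer than a mile; the other is one mile.
leg = Fraction(697, 696)
assert leg ** 2 + 1 == Fraction(985, 696) ** 2   # Pythagoras, exactly
# So the return trip is exactly 985/696 miles, about 1.41523: a hair over sqrt(2).
```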
In fact, for any number you care to name, no matter how small, I can give you a smaller number such that adding that much distance to one leg of the trip will make the return distance a rational number. That means that if your odometer has any error at all, no matter how small, the return distance could be rational.
Of course, "could be" is not the same as "is". It's possible that the actual underlying physical (or even metaphysical) reality is truly continuous, and actually does require an infinite number of symbols to describe. But here is the important question: how could you ever possibly know? What experiment could possibly demonstrate this? In order to know whether physical reality is truly continuous you would need to somehow obtain an infinite amount of data! To be able to tell, for example, whether our three-dimensional space is truly continuous, you would need to be able to measure a length to infinite precision. How would you do that? Forget the problems with actually designing such a device and how you would get around (say) thermal fluctuations and the Heisenberg uncertainty principle. I grant you arbitrarily advanced technology and even new physics, and simply ask: what would the output of such a measuring device look like? It can't be a digital display; those can only output a finite number of possible outputs. So it would have to be some kind of analog output, like an old-school volt meter. But that doesn't help either, because to read an analog gauge you still have to look at it, and your eye doesn't have infinite resolution. And even if your eye had infinite resolution, you would still have to contend with the fact that the needle of the gauge is made of atoms which are subject to thermal fluctuations. And if you tried to solve that problem by cooling the gauge down to absolute zero, you still ultimately have to contend with the Heisenberg uncertainty principle. (Yes, I know I granted you new physics, but you still have to be consistent with quantum mechanics.)
The ultimate fact of the matter is that no matter how hard we try, no matter what technology we invent, no matter how many resources we deploy, we will only ever have a finite amount of data, and so we can always account for that data with finite explanations. We can imagine infinities. They might even pop up unbidden in our mathematical models. But when they do, that will almost certainly be an indication that we've done something wrong because we can know for sure that neither infinite quantities nor infinite precision can ever be necessary to explain our observations. In fact, we can calculate pretty easily the amount of data our universe can possibly contain, and it's a tiny number compared to what the human imagination is capable of conceiving.
Infinities are like wizards and unicorns. They're fun to think about, but they aren't real.
Powerset
>we cannot rule out the possibility that this theory will involve very large numbers ..., and possibly even infinities. But so far this has not been the case.
>Infinities are like wizards and unicorns. They're fun to think about, but they aren't real.
Don't be so sure - Hadron physics and transfinite set theory.
> It's in the phrase, "exactly one mile" (and also "exactly 90 degrees"). In order for the mathematical irrationality of the square root of 2 to matter in the physical situation I have described it really is critical to travel exactly one mile. If you deviate from this in the slightest then the distance back to your starting point can become a rational number, representable numerically with no error in a finite number of symbols.
Not sure what you're trying to say here. If the distance were 1.1 miles per leg, then the length of the hypotenuse is √2.42, which is an irrational number.
Furthermore, we can just define the actual distance you travelled as a new unit -- call it a 1 Garret. So whatever distance you travelled, we call it 1 Garret. Then the hypotenuse is √2 Garret.
In addition, I can likely draw a circle with more fidelity to the ideal than a triangle -- the circle has radius 1 Garret and area π square Garrets.
>Of course, "could be" is not the same as "is". It's possible that the actual underlying physical (or even metaphysical) reality is truly continuous, and actually does require an infinite number of symbols to describe. But here is the important question: how could you ever possibly know? What experiment could possibly demonstrate this? In order to know whether physical reality is truly continuous you would need to somehow obtain an infinite amount of data!
Models of physical systems are abstractions of those systems, and the models are therefore simpler than the physical reality. All models are wrong, some models are useful (George Box).
>I grant you arbitrarily advanced technology and even new physics, and simply ask: what would the output of such a measuring device look like?
Such a hypothetical measuring device could simply output √2.
>In fact, we can calculate pretty easily the amount of data our universe can possibly contain, and it's a tiny number compared to what the human imagination is capable of conceiving.
There's your proof that there are non-physical mental properties. Map every mental property onto a physical particle in the universe. Now take the powerset (the set of all subsets) of all the particles in the universe. The members of the powerset do not correspond to any particle in the universe, as you already mapped those to other mental properties.
George Box
>What I'm really arguing against is your claim that the reals are necessary to model reality.
Take it up with Einstein. GR uses real numbers.
Now, we can always build other models that don't need real numbers. Most of the models we build only need floating point numbers. The detail you put in the model depends on what you need from the model.
>But how do you know that an abstraction is necessarily simpler than the thing that it abstracts?
In the case of models, because we choose to make them simpler.
Given that we build models out of mathematics, the resulting models are abstract because mathematics is abstract.
If you need to review what abstract objects are, try here: Abstract Objects
> GR uses real numbers.
Yes, but as you yourself point out, we can always build other models that don't need real numbers.
> > how do you know that an abstraction is necessarily simpler than the thing that it abstracts?
> In the case of models, because we choose to make them simpler.
Then it's not necessary.
> If you need to review what abstract objects are, try here: Abstract Objects
Ah. So, as I suspected, abstract objects are philosophical nonsense. Good to know.
ε
>Yes, but as you yourself point out, we can always build other models that don't need real numbers
As I like to quote, "All models are wrong. Some models are useful." (George Box)
We rank models based on their error (call it ε). Models with smaller ε are considered better. However, a model with smaller ε may be much harder to understand, implement, and compute. So we consider how much error we can tolerate for our application, then choose the model that is suitable for our purpose.
GR is great for computing the orbit of Mercury. Yet, to compute the time it takes a pencil to fall from my desk to the ground, I'll use Newton's equations. Newton's equations are arguably more useful for applications on the surface of earth, but in some applications, the ε is too high, so more complicated models are needed.
Our best physical models assume that nature contains real-valued quantities that vary continuously along a continuum. A few prominent examples are classical mechanics, electromagnetism (Maxwell's equations), GR, quantum field theory, thermodynamics, and fluid dynamics (Navier-Stokes equations).
Now, most scientists and engineers will likely compute these models using floating point numbers instead of real numbers, thereby increasing the ε of the model. This is a practical choice, as 1) modern computers implement floating point in hardware, and 2) most real numbers in any continuous interval are Turing-uncomputable. An object's change in speed, or change in spatial location, may be a transformation of one Turing-uncomputable real-valued quantity into another. A transformation of one Turing-uncomputable value into another Turing-uncomputable value is certainly a Turing-uncomputable operation. Yet since scientists and engineers want to compute an answer, they accept the extra ε and use floating point.
>Ah. So, as I suspected, abstract objects are philosophical nonsense. Good to know.
Now here is something science can explain. Students, when they don't understand concepts, often state that the subject "is stupid" and "I'll never need this" (this is often heard in math classes). They do this to protect their self-esteem and to shift the focus away from their own confusion.
Philosophy, by nature, deals with questions outside the realm of settled, reliable, widely accepted answers. All philosophical questions are unsolved and/or controversial, by definition.
If a philosopher does definitively solve a question, it passes out of the realm of philosophy. All sciences originate in solved philosophies.
Yet science cannot establish the methodology of how to do science; that is done by philosophy. Science cannot establish a metric of goodness for a scientific study; that is done by philosophy. Science cannot identify ANY objective to pursue in one's life, including for a scientific investigation. That is generally done at a much lower level of intuition, in pre-philosophic motivations. Philosophy provides the methodology to evaluate, question, and possibly change one's intuited objectives.
They are different kinds of inquiry, and one leads to the other. There are no current questions that philosophy has a solution to that science lacks, but future sciences might yet be born from work philosophers are doing today.
> As I like to quote, "All models are wrong. Some models are useful." (George Box)
You do indeed like to quote that, and it might even be true, but it misses a very important point: scientific models become less wrong over time.
> We rank models based on their error (call it ε). Models with smaller ε are considered better.
Not necessarily. It depends on your purpose. GR has a smaller error than Newton, but most of the time it doesn't matter and Newton works just fine, as you yourself point out:
> However, a model with smaller ε may be much harder to understand, implement, and compute. So we consider how much error we can tolerate for our application, then choose the model that is suitable for our purpose.
Yes, that is exactly right.
> Our best physical models assume that nature contains real-valued quantities that vary continuously along a continuum.
Our best *current* models make this assumption. It does not follow that the continuum is *actually necessary* or that a better model can't be had by discharging this assumption.
BTW, even models that make the continuum assumption mostly do so as part of their formalism, not as part of their ontology. The only theory I know of that relies on non-quantized space as part of its ontology is the Bohm interpretation of QM, and that is one of the reasons I'm personally skeptical of it.
> > Ah. So, as I suspected, abstract objects are philosophical nonsense. Good to know.
> Now here is something science can explain. Students, when they don't understand concepts, often state that the subject "is stupid" and "I'll never need this" (this is often heard in math classes). They do this to protect their self-esteem and to shift the focus away from their own confusion.
That I'm just too stupid to understand "abstract objects" is a hypothesis that I am willing to seriously entertain. But here's your challenge: I have quite a bit of evidence that I'm not all that stupid, some of which involves people paying me quite a lot of money to do things that are not generally associated with the kind of abject stupidity that would be necessary to render me *uneducatable* about abstract objects. It seems much more likely to me that either your pedagogy is poor, or (far more likely) that abstract objects actually are exactly the philosophical bullshit that they appear to be.
BTW, one way you can persuade me otherwise is to point to a practical result that someone has achieved by taking "abstract objects" seriously. I predict you will be unable to do so.
> Philosophy, by nature, deals with questions outside the realm of where there are settled, reliable, widely accepted answers. All philosophical questions are unsolved and/or controversial, by definition.
The same is true of science. The difference is that science has an objective criterion for filtering out bad ideas, and philosophy doesn't, and so bad ideas have much more longevity in philosophy than they do in science.
This is not to say that philosophy does not produce good ideas on occasion -- it does. It's just a lot harder to separate the wheat from the chaff when you can't do an experiment.
What's a number?
>BTW, one way you can persuade me otherwise is to point to a practical result that someone has achieved by taking "abstract objects" seriously. I predict you will be unable to do so.
On Tuesday, I needed to buy a lemon and some broccoli at the grocery store. I went into the produce department, and without reading any signs or labels, I was able to locate the lemons and choose one, then locate the broccoli and choose a couple of heads.
Yesterday I added a tip onto my lunch tab using numbers.
We use abstract objects all the time.
Try living without abstract objects tomorrow. Just wake up and say, "Today, I'm going to ignore numbers." When paying for your coffee, just hand the barista a wad of cash and hope they take the right amount.
You're begging the question, assuming the thing you are trying to prove. You can't show that numbers are abstract concepts by assuming that numbers are abstract concepts.
Endquote
@Ron:
>You're begging the question, assuming the thing you are trying to prove. You can't show that numbers are abstract concepts by assuming that numbers are abstract concepts.
Let me quote the person you like to quote most often:
The right question to ask is not, "Does X exist." The answer is always "yes". The right question is, "What is the nature of X's existence?" or "To which ontological category does X belong?"
(source)
Hence, I am not begging the question.
>> In addition, I can likely draw a circle with more fidelity to the ideal than a triangle
>I doubt that very much. How are you going to accomplish this miraculous feat? Are you going to use a compass? Paper? Ink? Those are all made of atoms, which, you will find, present major challenges for doing anything past a certain point of precision.
Are you asserting that one cannot build an ideal geometric circle out of matter, energy, or some combination of both? What if I got a lot of photons to orbit around a black hole? Electrons to orbit in a magnetic field? NIST made a sphere.
> Are you asserting that one cannot build an ideal geometric circle out of matter, energy, or some combination of both?
Correct.
> What if I got a lot of photons to orbit around a black hole?
How exactly are you going to accomplish that? (You should probably read this.)
> Electrons to orbit in a magnetic field?
Do I really need to explain to you why that won't work? (Hint: what happens when you accelerate a charged particle?)
> NIST made a sphere.
First, a sphere is not a circle.
Second, that "sphere" deviates from sphericalness by many hundreds of atomic radii. That's an impressive engineering feat, but it's nowhere near the Platonic ideal.
Even if you could make a sphere that was spherical to within the radius of a silicon atom, you'd still be left with the problem of defining exactly where the outer surface of this "sphere" actually was because Heisenberg.
Agreement at last
>> Are you asserting that one cannot build an ideal geometric circle out of matter, energy, or some combination of both?
@Ron:
>Correct.
Well, I agree with you.
That's nice. All is well and good. Hey, look at those flowers.
Uh oh, ah, your position creates a real problem for your naturalistic world view. I can think of an ideal geometric circle. Yet the brain is physical, so it can't be used to build an ideal geometric circle. Hence we must conclude that our thoughts, the mind, are immaterial.
> I can think of an ideal geometric circle.
I'll bet you can think of a lot of other things that don't exist: Wizards. Unicorns. Santa Claus. Just because you can imagine these things does not show that the mind is immaterial any more than the fact that I can put a copy of Lord of the Rings on my computer shows that computers are immaterial.
Mind's Eye
>> I can think of an ideal geometric circle.
@Ron:
>I'll bet you can think of a lot of other things that don't exist: Wizards. Unicorns. Santa Claus. Just because you can imagine these things does not show that the mind is immaterial any more than the fact that I can put a copy of Lord of the Rings on my computer shows that computers are immaterial.
Tsk, tsk. How many times do I have to quote you to you before you'll stop saying things don't exist:
The right question to ask is not, "Does X exist." The answer is always "yes". The right question is, "What is the nature of X's existence?" or "To which ontological category does X belong?"
(source)
Also, your examples are off-point because they're not ideal geometric forms. If you want to talk about Santa Claus, that is a different discussion, and does not address the premises of my argument.
Yet let me help you, as you missed a more fruitful line of argumentation -- to deny that human minds can imagine an ideal geometric circle (people with aphantasia certainly can't).
Mental images are often vague and indistinct. For example, we can readily form a mental image of 5 trees in a row. But to imagine fifty (not forty-nine or fifty-one) trees in a row will be for most people an impossible task. To imagine five thousand (not more or less) trees in a row is an utter impossibility. Descartes gave an example of the impossibility of imagining the difference between a circle, a chiliagon (a polygon having 1,000 sides), a polygon with 1,002 sides, and a myriagon (10,000 sides). To the imagination, they all appear vaguely the same.
However, the concepts behind those images have a clarity and distinctness in contrast to the corresponding images. My idea of five thousand or five million trees is just as clear to my intellect as five or ten; I have no more difficulty in understanding the number 5,000,000 trees than I have in understanding the number 4,999,999 or 5,000,001. The intellect clearly and distinctly understands the differences between a circle, chiliagon, and myriagon.
Concepts have an abstract and universal nature, as contrasted with the concrete and particular character of images or mental images. A concept is equally representative of all objects of the same character. Therefore, if I see a circle drawn on a blackboard, the concept which I form of that geometrical figure will express not merely the individual circle before me, but all circles. The figure I see is of a definite size, and is in a particular place. But my mind, by an act of abstraction, omits these individual characteristics, and forms the concept of a circle that conforms to Euclid’s definition. This concept is applicable to every circle that ever was drawn. When, however, I form the mental image of a circle, my mental image must necessarily represent a figure of particular dimensions. In other words, the concept of the circle is universal: the mental image is singular. Similarly, if I form a concept of "man," my concept is applicable to all men. But a mental image of a man must represent him as possessed of a certain height, with certain features, with hair of a definite color, and so on.
There are some concepts that have no reasonable mental image that can be formed. Concepts like law, economics, knowledge, and certainty -- any arbitrary image can stand in for them (people often picture the word, but that's just a contingent linguistic circumstance).
> How many times do I have to quote you to you before you'll stop saying things don't exist.
Sorry, I thought it would be obvious from the context that "does not exist" here is just shorthand for "does not exist in the ontological category of physical objects".
> your examples are off-point because they're not ideal geometric forms
I don't see why that is relevant. Your argument was:
"I can think of an ideal geometric circle. Yet the brain is physical, so it can't be used to build an ideal geometric circle. Hence we must conclude that our thoughts, the mind, are immaterial."
You were using "ideal geometric circle" as an example of something non-physical that the mind can think of as evidence that the mind must be "immaterial" (whatever that means). But wizards and unicorns are non-physical too, so why are they not equally good examples of non-physical things that the mind can think of?
> deny that human minds can imagine an ideal geometric circle
But I don't deny that. Nor do I deny that human minds can imagine wizards, unicorns, or Santa Claus. It seems pretty obvious to me that human minds can imagine all those things and much more.
> I have no more difficulty in understanding the number 5,000,000 trees than I have in understanding the number 4,999,999 or 5,000,001.
But that's only because those numbers are minuscule relative to what can be imagined.
Which is bigger, TREE(TREE(3)) or G(G(G... (G(24)) ...)) where there are Graham's number of G's?
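For a sense of how such numbers are built: Graham's G iterates Knuth's up-arrow operation, which takes only a few lines to define. A minimal Python sketch (the function name `arrow` is my own illustrative choice; anything beyond toy inputs is hopelessly uncomputable, and TREE grows faster still):

```python
def arrow(a, n, b):
    """Knuth's up-arrow a ^(n) b: n=1 is plain exponentiation, and each
    additional arrow iterates the previous level. Graham's construction
    feeds the result back in as the arrow count, so the values explode
    past anything representable almost immediately."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return arrow(a, n - 1, arrow(a, n, b - 1))

print(arrow(3, 1, 3))  # 3^3 = 27
print(arrow(3, 2, 3))  # 3^^3 = 3**27 = 7625597484987
```

Even the first layer of Graham's construction, 3↑↑↑↑3, is already far beyond what this function (or any computer) could evaluate.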
> There are some concepts that have no reasonable mental image that can be formed.
But that's not a deep philosophical observation, that is merely the observation that not all concepts map straightforwardly onto two- or three-dimensional space.
Mental vs. Physical
The difference between mental images and concepts is one of indeterminacy: In the one case (illustrated by the examples of the trees and the chiliagon) there is a relationship of instantiation between the image and the universal named by the concept, but the instantiation is too imperfect for the image determinately to instantiate the number of trees conceived of (in the one example) or to instantiate being a chiliagon as opposed to being a myriagon or being a circle (in the other example). In the case of the other point (illustrated by examples like the concept law), the connection between the image and the concept is even looser, since a mental image of the word "law" does not resemble law, does not instantiate the universal law, and indeed does not of itself have any determinate significance at all. The corresponding concepts themselves, by contrast, are entirely determinate. When I am thinking about a chiliagon, there is no question that that is what I am thinking about, even if the mental image I entertain at the same time could in principle be taken for a mental image of a circle or a myriagon; and when I am thinking about law, there is no question that that is what I am thinking about, even if the visual or auditory image of the word "law" that I form while doing so could have been conventionally associated with some other concept.
Recall that I told you that we often know abstract objects better than concrete ones. As shown above, the mind can think with absolute clarity about abstract concepts -- they are entirely determinate. This is easy to demonstrate in the case of mathematical and logical examples (formal thinking), but it can be extended to conclude all thinking is determinate in this manner.
In contrast, the physical world is indeterminate. Note that in this context, "determinacy" (and its negation) are not related to scientific or engineering concepts of determinism. It has nothing to do with physical causality (such as a stop light cycles between Green-Yellow-Red in a deterministic fashion). Physical properties are "indeterminate" in the sense that they don't fix one particular meaning rather than another. Or as you would put it, " We can only ever be in possession of a finite amount of data, and that data will always be consistent with an infinite number of potential theories."
A very simple example can illustrate it. Consider the symbol: Δ It has a number of physical features, such as being black, having three straight sides, having a certain size, etc. Now, what exactly is it that Δ is a symbol of? Does it symbolize triangles in general? Black triangles in particular? A slice of pizza? A triangular UFO? A pyramid? A dunce cap? There’s nothing in the physical properties of Δ that entails any of these interpretations, or any other for that matter. The physical properties are indeterminate and conventional in the sense that they don’t fix one particular meaning rather than another. The same is true of any further symbol we might add to this one. For example, suppose the sequence T-R-I-A-N-G-L-E appeared under Δ. There is nothing in the physical properties of this sequence, any more than in Δ, that entails or fixes one particular meaning rather than another. Its physical properties are perfectly compatible with its signifying triangles themselves, or the word "triangle," or some weird guy who calls himself "Triangle," or any number of other things.
Hence we have a situation in which:
All formal thinking is determinate.
No physical process is determinate.
Hence we can conclude that: no formal thinking is a physical process. Thus the mind is immaterial.
Hope that helps.
> The difference between mental images and concepts is one of indeterminacy
No, it isn't. It's the difference between a concept that maps straightforwardly onto three-dimensional space and one that doesn't.
> Consider the symbol: Δ It has a number of physical features, such as being black
Um, no. If I write Δ using blue ink it's still a Δ.
Did you mean: consider a triangular shape written in black ink? Because that may or may not be a symbol. But *if* it is a symbol, then the color in which it is rendered is generally irrelevant. It is, of course, possible to use color as a distinguishing characteristic of a symbol, but it's a Really Bad Idea and I've never seen anyone actually do it.
> Hope that helps.
Only insofar as it illustrates that you don't understand what symbols are.
I guess I'll need to write a post about that.
Determinacy
>You were using "ideal geometric circle" as an example of something non-physical that the mind can think of as evidence that the mind must be "immaterial" (whatever that means). But wizards and unicorns are non-physical too, so why are they not equally good examples of non-physical things that the mind can think of?
Not quite -- I was comparing mental images to physical objects. We cannot physically create perfect circles. Now, if you had a physical wizard or unicorn, could we create mental images with perfect fidelity to those physical objects?
>But that's only because those numbers are miniscule relative to what can be imagined.
>Which is bigger, TREE(TREE(3)) or G(G(G... (G(24)) ...)) where there are Graham's number of G's?
This is consistent with my point. Trying to create a mental image of TREE(TREE(3)) or G(G(G... (G(24)) ...)) is hopeless. However, to someone skilled in this area of mathematics, TREE(TREE(3)) or G(G(G... (G(24)) ...)) are concepts that have a clarity and distinctness to them.
>> There are some concepts that have no reasonable mental image that can be formed.
>But that's not a deep philosophical observation, that is merely the observation that not all concepts map straightforwardly onto two- or three-dimensional space.
For "not all concepts map straightforwardly onto two- or three-dimensional space", are you saying there is no map, or that there is a map, but it is complicated?
>> The difference between mental images and concepts is one of indeterminacy
>No, it isn't. It's the difference between a concept that maps straightforwardly onto three-dimensional space and one that doesn't.
Yes, it is. You might try reading it again. The point is being made between mental imagery vs. concepts, not one type of concept versus a different type of concept.
>> Consider the symbol: Δ It has a number of physical features, such as being black
>Um, no. If I write Δ using blue ink it's still a Δ
My apologies, it is rendered on my screen as black, but I have no way to know how your computer is rendering it. So assume it is black.
Yet you are missing the point badly. The physical properties are arbitrary and could be something else. As I said, "The physical properties are indeterminate and conventional in the sense that they don’t fix one particular meaning rather than another."
>I guess I'll need to write a post about that.
Make sure it's coherent with what you wrote before:
The marks you see on the screen get translated by your brain into an idea, but the marks and the idea are not the same thing. The idea is the thing that ends up in your brain after seeing the marks, which in this case your brain interprets as letters and words.
> if you had a physical wizard or unicorn
A world with physical wizards and unicorns would be so radically different from the world we actually inhabit that I have no idea what it would actually be like.
> For "not all concepts map straightforwardly onto two- or three-dimensional space", are you saying there is no map, or that there is a map, but it is complicated?
It's a continuum, not a dichotomy. Circles and polygons (and wizards, and unicorns) map onto space very straightforwardly. Numbers are a little less straightforward, and the bigger they get the less straightforward the mapping gets. The law doesn't map onto 3-D space well at all. You need a much higher-dimensional space for that, and most humans' intuitions and ability to create mental images fail after just 3 or 4 dimensions.
> The physical properties are arbitrary
I believe the word you are looking for is "conventional" :-)
Yes, that's right. I don't see how you square that with:
> Consider the symbol: Δ It has a number of physical features, such as being black
Again, no. Being rendered in any particular color is *not* one of the conventional features of symbols. Ever hear of dark mode? Syntax highlighting?
Color Enhancement
>A world with physical wizards and unicorns would be so radically different from the world we actually inhabit that I have no idea what it would actually be like.
Hence your use of them was a poor choice of examples when discussing the comparison between mental imagery and physical objects.
>> The physical properties are arbitrary
>I believe the word you are looking for is "conventional" :-)
They are arbitrary and conventional.
>Yes, that's right. I don't see how you square that with:
>> Consider the symbol: Δ It has a number of physical features, such as being black
>Again, no. Being rendered in any particular color is *not* one of the conventional features of symbols. Ever hear of dark mode? Syntax highlighting?
Did you even read where I wrote "My apologies, it is rendered on my screen as black, but I have no way to know how your computer is rendering it."?
Yet you are still missing the point badly. The physical properties could be anything. The physical properties of the symbol do not, and cannot, fix its meaning.
Since you have been rude and insulting while persisting in this silly tangent, let me take a few additional moments out of my day to deliver a beat down on your claim that color is not one of the conventional features of symbols. That claim is wrong. Let's see -- we have traffic lights using red-yellow-green (stop-caution-go), and we have country flags that use color (compare Chad 🇹🇩 to Romania 🇷🇴). Then there's the Red Cross, or the Red Kettle of the Salvation Army. Or the black arm band and the gay pride flag.
Throughout history, humans have used the colors themselves as powerful symbols to convey meaning, emotion, and societal values. Different cultures have attached various symbolic meanings to colors based on their natural surroundings, beliefs, and traditions. For instance, red is often associated with energy, passion, or danger, while blue can symbolize calm, trust, and stability. In religious contexts, colors such as white often represent purity and holiness, whereas black may symbolize mourning or the unknown. Colors have also been used in national flags, emblems, and political movements to represent unity, resistance, or identity. This symbolic use of color is deeply ingrained in art, clothing, and rituals, showing how color communicates beyond words across time and cultures.
No maps
>> For "not all concepts map straightforwardly onto two- or three-dimensional space", are you saying there is no map, or that there is a map, but it is complicated?
>It's a continuum, not a dichotomy. Circles and polygons (and wizards, and unicorns) map onto space very straightforwardly. Numbers are a little less straightforward, and the bigger they get the less straightforward the mapping gets. The law doesn't map onto 3-D space well at all. You need a much higher-dimensional space for that, and most humans' intuitions and ability to create mental images fail after just 3 or 4 dimensions.
Your argument hinges on the idea that abstract objects are on a continuum of how easily they can be mapped onto mental images, with simpler objects like circles and polygons mapping more straightforwardly, while more abstract concepts, like numbers or the law, require higher-dimensional spaces. However, just because an abstract concept can be mapped onto a mental image or visualized doesn't mean that the concept itself inherently is an image or reducible to imagery.
Abstract objects, such as numbers or legal principles, don't necessarily need to have a spatial or dimensional structure. Their nature is fundamentally different from that of objects with physical form. Numbers, for instance, can be mentally represented in many ways (as quantities, symbols, or visual arrangements), but this is a matter of how we think about them, not a reflection of their true nature. Similarly, while the law might require a higher-dimensional or more complex conceptual framework to fully grasp, this doesn't imply that it has an inherent 'image' or spatial structure.
So, while mental imagery might help us understand or conceptualize abstract objects, it doesn't follow that all abstract objects are or must have mental images, or that the complexity of an idea correlates with its dimensional mappability.
Or, as one might say, "There are some concepts that have no reasonable mental image that can be formed."
>> A world with physical wizards and unicorns would be so radically different from the world we actually inhabit that I have no idea what it would actually be like.
> Hence why your use of them was a poor choice of examples when discussing the comparison between mental imagery and physical objects.
Um, no. I would say exactly the same thing about an ideal geometric circle. A world where an ideal geometric circle actually existed as a physical object would also be so radically different from the world we actually inhabit, where all physical things are made of atoms, that I have no idea what that world would actually be like either.
That I can imagine ideal circles and wizards and unicorns and all kinds of other things that do not and cannot exist as physical objects in this universe in no way proves that my mind is not physical.
>> Again, no. Being rendered in any particular color is *not* one of the conventional features of symbols. Ever hear of dark mode? Syntax highlighting?
> Did you even read where I wrote "My apologies, it is rendered on my screen as black, but I have no way to know how your computer is rendering it."?
Yes. I don't see why that's relevant.
> Yet you are still missing the point badly. The physical properties could be anything. The physical properties of the symbol do not, and cannot, fix its meaning.
I don't dispute that.
> we have traffic lights using red-yellow-green (stop-caution-go), and we have country flags that use color (compare Chad 🇹🇩 to Romania 🇷🇴). Then there's a Red Cross, or the Red Kettle of the Salvation Army. Or the black arm band and the gay pride flag.
Yes, some symbols have color as a significant feature. But the example you originally cited, Δ, is not one of them.
> There are some concepts that have no reasonable mental image that can be formed.
That depends entirely on how you define "reasonable mental image".
Determinate
@Ron:
>That I can imagine ideal circles and wizards and unicorns and all kinds of other things that do not and cannot exist as physical objects in this universe in no way proves that my mind is not physical.
The difference is that ideal circles (and other geometric and mathematical objects) have a determinate meaning in the mind, while wizards and unicorns are works of fiction and have indeterminate meaning in the mind. No physical instantiation of any of them is determinate.
Therefore:
1) All formal thinking is determinate.
2) No physical process is determinate.
3) So, no formal thinking is a physical process.
> No physical process is determinate
I'm sorry, but I can't wring any coherent meaning out of those words. All physical processes, with the exception of quantum measurement, are *deterministic*, so I have no idea what "determinate" could possibly mean that would make this statement true. And I will point out again that computers can do formal symbol manipulation. If you think computers are not an embodiment of a physical process, well, we can't even agree to disagree on that. You're just totally detached from reality.
The determinacy and indeterminacy in question have nothing at all to do with causal determinism, quantum mechanics, free will, etc. They have instead to do with the semantic determinacy and indeterminacy in view in some famous twentieth-century philosophical thought experiments like W. V. Quine’s "gavagai" example from Word and Object and Saul Kripke’s "quus" example from Wittgenstein on Rules and Private Language.
Something is "determinate" in the sense in question here if there is an objective fact of the matter about whether it has one rather than another of a possible range of meanings – that is to say, if it has a meaning or semantic content that is exact, precise, or unambiguous. It is "indeterminate" if it does not, that is to say, if there is no objective fact of the matter about which of the alternative possible meanings or contents it possesses.
> Something is "determinate" in the sense in question here if there is an objective fact of the matter about whether it has one rather than another of a possible range of meanings
Ah. In that case, nothing is determinate. You can never be 100% certain that what you mean by "circle" is the same thing that I mean by "circle". Indeed, the kerfuffle over Euclid's fifth postulate, which took 2000 years to resolve, stemmed from the fact that everyone thought that there was an objective fact of the matter regarding the meaning of "parallel lines".
Coherent
>Ah. In that case, nothing is determinate.
The claim is that all formal thinking is determinate, such as adding, squaring, inferring via modus ponens, syllogistic reasoning, and the like.
To deny that our thoughts are ever really determinately of any of the forms just cited is expensive for you. You will have to maintain that we only ever approximate adding, squaring, inferring via modus ponens, etc. Yet it is incoherent to assert this.
First, it is just prima facie wildly implausible to suggest that whenever we have taken ourselves to add, square, draw a modus ponens inference, etc., we have been mistaken and have not really done so at all. You might dig in your heels and insist we have to bite this particular bullet, but this would be plausible only if the considerations in favor of this bizarre position were more obviously correct than is our common sense conviction that we do indeed often add, square, apply modus ponens, etc. And why should we believe that?
Second, it isn’t just common sense that your view conflicts with. The claim that we never really add, apply modus ponens, etc. is hard to square with the existence of the vast body of knowledge that comprises the disciplines of mathematics and logic. Nor is it just that mathematics and logic constitute genuine bodies of knowledge in their own right; they are also presupposed by the natural sciences. If natural science presupposes mathematics and logic, and mathematics and logic presuppose that we do indeed have determinate thought processes, it is hard to see how you can consistently draw the conclusion that our thoughts are indeterminate.
A third and related problem is that if we never really apply modus ponens or any other valid argument form, but at best only approximate them, then none of our arguments is ever really valid. That includes your arguments. Hence the view is self-defeating. Even if it were true, we could never be rationally justified in believing that it is true, because we couldn’t be rationally justified in believing anything.
Fourth, the claim that we never really add, square, apply modus ponens, etc., is self-defeating in an even more direct and fatal way. For coherently to deny that we ever really do these things presupposes that we have a grasp of what it would be to do them. And that means having thoughts of a form as determinate as those the critic says we do not have. In particular, to deny that we ever really add requires that we determinately grasp what it is to add and then go on to deny that we really ever do it; to deny that we ever really apply modus ponens requires that we determinately grasp what it is to reason via modus ponens and then go on to deny that we ever really do that; and so forth. Yet the whole point of denying that we ever really add, apply modus ponens, etc., was to avoid having to admit that we at least sometimes have determinate thought processes. So, to deny that we have them presupposes that we have them. It cannot coherently be done.
>You can never be 100% certain that what you mean by "circle" is the same thing that I mean by "circle".
This is just the traditional "problem of other minds." It's hard to see why this is a special problem for the argument I presented that minds are immaterial. It can be and often is presented as a problem whatever one's view about the metaphysics of mind, whether dualist or materialist.
Yet if you take the position that minds are material and thoughts are indeterminate, then there is no fact of the matter about what a person means, as there is no fact of the matter at all.
> The claim is that all formal thinking is determinate, such as adding, squaring, inferring via modus ponens, syllogistic reasoning, and the like.
This is a category error with respect to your earlier definition:
> Something is "determinate" in the sense in question here if there is an objective fact of the matter about whether it has one rather than another of a possible range of meanings
The whole point of "formal thinking" (that is a bit of an oxymoron, but I'll try to read it as charitably as I can) is that it is purely mechanical, purely syntactic. The process of formal reasoning is independent of the meaning of the symbols being manipulated. That's the *whole point* of formal reasoning.
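To make that concrete, here is a minimal sketch of forward chaining with modus ponens (the tuple encoding and function name are my own illustrative choices, not any real theorem prover): the procedure inspects only the *shape* of each formula, never what its tokens mean.

```python
def forward_chain(premises):
    """Apply modus ponens purely syntactically: whenever both P and
    ('->', P, Q) are in the derived set, add Q. The atoms are
    uninterpreted tokens; the procedure never consults their meaning."""
    derived = set(premises)
    changed = True
    while changed:
        changed = False
        # Iterate over a snapshot so we can safely grow the set.
        for f in list(derived):
            if (isinstance(f, tuple) and f[0] == '->'
                    and f[1] in derived and f[2] not in derived):
                derived.add(f[2])
                changed = True
    return derived

# 'rain' and 'wet' could be any strings at all; the derivation is identical.
print(forward_chain({'rain', ('->', 'rain', 'wet')}))
```

Swap every atom for a nonsense string and exactly the same conclusions get derived, which is the sense in which the process is independent of meaning.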
> To deny that our thoughts are ever really determinately of any of the forms just cited
That didn't parse. And you've also introduced a new term, "forms", which you haven't defined.
> The claim that we never really add
I didn't make that claim, and I never would because I have no idea what "really add" could possibly mean. All I know is that when you manipulate certain symbols according to certain rules the results correspond reliably to things I observe.
> if you take the position that minds are material and thoughts are indeterminate, then there is no fact of the matter about what a person means, as there is no fact of the matter at all.
Yes, well, there being no fact-of-the-matter is a possibility that always needs to be considered. Facts are a hypothesis to explain the observation that people agree on things. They are a very plausible hypothesis, so plausible that it is a real mind-bender to conceive of any alternatives. But, as you will see when I finally get around to talking about QM, not only are there alternatives, one of those alternatives actually turns out to be true i.e. a better model of observation than the fact-of-the-matter hypothesis.
Meaning is important
@Ron:
>The whole point of "formal thinking" (that is a bit of an oxymoron, but I'll try to read it as charitably as I can) is that it is purely mechanical, purely syntactic. The process of formal reasoning is independent of the meaning of the symbols being manipulated. That's the *whole point* of formal reasoning.
First, meaning is derived from use, and even in formal systems, symbols have meaning through the rules governing their use. A formal system is not just mechanical symbol manipulation; its rules reflect a semantic structure that corresponds to meaningful relations. In practice, formal reasoning is deployed to reach conclusions that are not just symbolically correct but also meaningful within the system. If we remove meaning entirely, the system becomes a mere game with symbols, which contradicts its role in rational discourse.
Second, any system of reasoning, even a formal one, requires interpretation. Formal reasoning does not operate in a vacuum; it is always interpreted by human minds that bring meaning and context to the symbols being manipulated. Without such interpretation, the formal system becomes unintelligible. The symbols in a formal system are not self-evident in their meaning; they rely on humans to assign meaning and to understand the conclusions that emerge from syntactic manipulations.
Third, John Searle’s famous "Chinese Room" thought experiment challenges the idea that purely syntactic manipulation constitutes understanding or meaning. In the Chinese Room, a person manipulates symbols according to rules but has no understanding of the meaning behind them. Searle argues that this does not constitute true comprehension, just as manipulating formal symbols syntactically in reasoning does not equate to genuine understanding. Thus, purely mechanical symbol manipulation cannot be the "whole point" of formal reasoning, as it fails to account for the crucial aspect of meaning, which is essential for true reasoning.
Formal systems are not isolated from semantic content; their rules, conclusions, and uses are all intertwined with meaningful human contexts. Thus, the "whole point" of formal reasoning cannot be reduced to syntax alone, as meaning is indispensable to its proper functioning.
>That didn't parse. And you've also introduced a new term, "forms", which you haven't defined.
No special definition for "forms," just the plural of the normal English word "form".
But I can rewrite it for you:
To deny that our thoughts are ever really determinate when adding, squaring, inferring via modus ponens, or syllogistic reasoning is expensive for you. You will have to maintain that we only ever approximate adding, squaring, inferring via modus ponens, etc. Yet it is incoherent to assert this.
> Facts are a hypothesis to explain the observation that people agree on things. They are a very plausible hypothesis, so plausible that it is a real mind-bender to conceive of any alternatives.
How do you square that with Ron's Theory of Truth? Your theory of truth is a correspondence theory of truth, which defines a "fact" as a true state of affairs or a proposition that corresponds to reality. It is something that actually exists or happens, independent of belief or perception. For instance, "The sky is blue" is a fact if, in reality, the sky is blue.
Now if you want to redefine facts as "a hypothesis to explain the observation that people agree on things," you need to modify your theory of truth. Logical positivists, for example, would define facts as verifiable observations about the world. Facts, in this sense, are limited to what can be empirically verified.
> In practice, formal reasoning is deployed to reach conclusions that are not just symbolically correct but also meaningful within the system. If we remove meaning entirely, the system becomes a mere game with symbols, which contradicts its role in rational discourse.
You have the causality backwards. Formal reasoning is just a game with symbols. As I've said many times before, but which you seem not to have taken on board yet, it *turns out* that *some* rules for manipulating symbols produce behavior that corresponds to things we observe in reality. We use those rules and not others because of that correspondence, which we call "meaning". But that doesn't change the fact that the actual process of manipulating the symbols according to the rules, and even *choosing* the rules that produce corresponding behavior, has nothing to do with meaning in and of itself.
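To make this concrete, here is a toy sketch (in Python, purely illustrative; the encoding of the rules is mine, not anything discussed in this thread): a miniature formal system that computes Peano-style addition by string rewriting alone. The procedure never interprets the symbols; it just pattern-matches and splices strings. That the normal form happens to correspond to "2 + 2 = 4" is exactly the after-the-fact correlation described above.

```python
# A toy formal system: numerals are strings like "S(S(0))".
# The two rewrite rules below are the Peano recursion for addition,
# applied as pure string surgery -- no arithmetic, no "meaning":
#   add(x, 0)    -> x
#   add(x, S(y)) -> S(add(x, y))
# (Toy only: the greedy regexes handle the flat cases shown here,
# not arbitrarily nested "add" expressions.)
import re

def step(expr: str) -> str:
    """Apply one rewrite rule to the leftmost matching redex."""
    # Rule 1: add(x, 0) -> x
    m = re.search(r"add\(([S()0]+),0\)", expr)
    if m:
        return expr[:m.start()] + m.group(1) + expr[m.end():]
    # Rule 2: add(x, S(y)) -> S(add(x, y))
    m = re.search(r"add\(([S()0]+),S\(([S()0]+)\)\)", expr)
    if m:
        return (expr[:m.start()] + "S(add(" + m.group(1) + ","
                + m.group(2) + "))" + expr[m.end():])
    return expr  # no rule applies: normal form reached

def normalize(expr: str) -> str:
    """Rewrite until no rule applies."""
    while True:
        nxt = step(expr)
        if nxt == expr:
            return expr
        expr = nxt

# "2 + 2": add(S(S(0)), S(S(0))) rewrites to S(S(S(S(0))))
print(normalize("add(S(S(0)),S(S(0)))"))  # S(S(S(S(0))))
```

Nothing in `step` knows what "S" or "0" stand for; the correspondence to counting is imposed by us, from outside the system.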
> Second, any system of reasoning, even a formal one, requires interpretation.
Only because that is baked into the typical definition of the word "reasoning", which is typically taken to require "reasoning about something". If a formal system doesn't have a model, it's not considered "reasoning", it's considered "meaningless symbol manipulation." But that's kind of like defining "AI" as "any problem that has not yet been solved by AI researchers" and then concluding that AI researchers never solve any problems because all AI problems are unsolved.
> Third, John Searle’s famous "Chinese Room" thought experiment challenges the idea that purely syntactic manipulation constitutes understanding or meaning.
Indeed it does, but it does this by begging the question. It just *assumes*, without any actual basis, that because the symbol manipulation is meaningless to the person in the room manipulating the symbols (because that person doesn't speak Chinese) that it must be meaningless in an absolute sense. The *room* could understand Chinese even if the person inside the room doesn't.
(I don't think the room understands Chinese either, but Searle's argument doesn't prove it. BTW, the Chinese Room is no longer a hypothetical. LLMs are literal Chinese Rooms, at least if you train them on Chinese text.)
> To deny that our thoughts are ever really determinate when adding, squaring, inferring via modus ponens, or syllogistic reasoning is expensive for you. You will have to maintain that we only ever approximate adding, squaring, inferring via modus ponens, etc. Yet it is incoherent to assert this.
No. Just because multiple meanings are possible doesn't mean that those possible meanings are *approximations*. The Peano axioms do not have a unique model. They have infinitely many models. None of them are "approximations". All of them are precise models of the Peano axioms.
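As an aside, the point about multiple exact models can be illustrated concretely (a Python sketch of my own devising, not real model theory -- genuinely nonstandard models of arithmetic cannot be written down as executable code): two different structures that both satisfy the same defining equations for addition, with neither one an approximation of the other.

```python
# Two different structures, each an *exact* model of the same
# recursive equations for addition:
#   x + 0       = x
#   x + succ(y) = succ(x + y)

# Model A: the usual integers.
zero_a = 0
succ_a = lambda n: n + 1

# Model B: tally strings ("" is zero, each "|" is a successor).
zero_b = ""
succ_b = lambda s: s + "|"

def make_add(succ, pred, is_zero):
    """Addition defined *only* by the two equations above."""
    def add(x, y):
        if is_zero(y):
            return x
        return succ(add(x, pred(y)))
    return add

add_a = make_add(succ_a, lambda n: n - 1, lambda n: n == 0)
add_b = make_add(succ_b, lambda s: s[:-1], lambda s: s == "")

# Both models satisfy "2 + 2 = 4" exactly, each in its own terms:
print(add_a(2, 2))        # 4
print(add_b("||", "||"))  # ||||
```

The integers and the tally strings are different objects, but both obey the axioms precisely; neither is a fuzzy version of the other.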
>> Facts are a hypothesis to explain the observation that people agree on things. They are a very plausible hypothesis, so plausible that it is a real mind-bender to conceive of any alternatives.
> How do you square that with Ron's Theory of Truth?
Let me try to be a little more precise: the idea that there is such a thing as a "fact" is a hypothesis, because the existence of objective reality itself is a hypothesis.
> "The sky is blue" is a fact if, in reality, the sky is blue.
Yes, that's right. But notice that this presumes that there is such a thing as "the sky" i.e. that the phrase "the sky" has an actual referent in objective reality. (Languages are theories!) But if you try to pin down what that referent is you will find yourself in trouble in short order. For example, if you ask two people in two different places on earth to point to "the sky" they will point in different directions, and the directions they point in will not intersect. So where exactly is this alleged "sky" thing?
So one possible explanation for the "fact" that everyone *agrees* that the sky is blue is that there is actually such a thing as the sky in objective reality, and that it is actually blue. But that's not the only possible explanation. In fact, it's not even the correct explanation, because there actually isn't such a thing as "the sky". What there actually is is a planet with an atmosphere which contains a lot of nitrogen, which scatters blue light. That makes it *look* (to our eyes) as if there is such a thing as a blue sky even though there actually isn't.
Built In
@Ron:
>You have the causality backwards. Formal reasoning is just a game with symbols. As I've said many times before, but which you seem not to have taken on board yet, it *turns out* that *some* rules for manipulating symbols produce behavior that corresponds to things we observe in reality. We use those rules and not others because of that correspondence, which we call "meaning". . . .
Formal systems are designed to model aspects of reality through abstraction. The symbols gain meaning from their use within a set of rules that reflect real-world relationships. Meaning isn’t just imposed after the fact; it’s built into the system through the rules that govern symbol manipulation. These rules aren’t arbitrary—they’re chosen because they correspond to real-world reasoning and relationships. The power of formal systems lies in this link between syntax (rules) and semantics (meaning). The correspondence with reality isn’t accidental; it's the result of deliberate design to model and reflect reality through structured reasoning.
>Only because that is baked into the typical definition of the word "reasoning", . . .
In your AI analogy, you highlight how definitions can shift depending on practical progress. But when it comes to reasoning, the point isn't about the evolving nature of definitions -- it's about the intrinsic requirement that symbols must be interpreted for reasoning to take place. Even if a formal system doesn't refer to an external model, it still requires an internal set of rules or semantics to function meaningfully. Without that, it's not just that the symbols are meaningless; there's no reasoning happening in any meaningful sense.
>Indeed it does, but it does this by begging the question. It just *assumes*, without any actual basis, that because the symbol manipulation is meaningless to the person in the room manipulating the symbols (because that person doesn't speak Chinese) that it must be meaningless in an absolute sense. The *room* could understand Chinese even if the person inside the room doesn't.
>I don't think the room understands Chinese either, but Searle's argument doesn't prove it.
Searle addresses your objection on page 3 of the Chinese Room paper I linked to:
"... let the individual internalize all of these elements of the system. He memorizes the rules in the ledger and the data banks of Chinese symbols, and he does all the calculations in his head. The individual then incorporates the entire system. There isn’t anything at all to the system that he does not encompass. We can even get rid of the room and suppose he works outdoors. All the same, he understands nothing of the Chinese, and a fortiori neither does the system, because there isn’t anything in the system that isn’t in him. If he doesn’t understand, then there is no way the system could understand because the system is just a part of him."
"Actually I feel somewhat embarrassed to give even this answer to the systems theory because the theory seems to me so implausible to start with. The idea is that while a person doesn’t understand Chinese, somehow the conjunction of that person and bits of paper might understand Chinese. It is not easy for me to imagine how someone who was not in the grip of an ideology would find the idea at all plausible."
> These rules aren’t arbitrary—they’re chosen because they correspond to real-world reasoning and relationships.
Yes, that's true. But it doesn't change the fact that the symbol manipulation is still purely mechanical.
> symbols must be interpreted for reasoning to take place
That depends entirely on how you define "reasoning", just as the question of whether or not AI problems ever get solved depends entirely on how you define "AI".
> requires an internal set of rules or semantics
Here you are confused. The only thing that is *required* of a formal system is the formal rules for symbol manipulation. Those are syntactic rules. Semantics are not required. Semantics are what make formal systems useful and interesting, but it is not what makes them formal.
> let the individual internalize all of these elements of the system
That's pretty unrealistic, but it's neither here nor there because I've already conceded that the room doesn't understand Chinese either. And the reason for this is that neither the room nor the person who has memorized all the rules has the other necessary element for understanding, which is a knowledge of the correlations between the symbols and reality.
What makes the Chinese Room an unrealistic model is that the *input* to the Room is exclusively Chinese text. If you raise a human where the only thing they ever experience is seeing Chinese text, they will not learn to understand Chinese.
Contrails
@Ron:
>No. Just because multiple meanings are possible doesn't mean that those possible meanings are *approximations*. The Peano axioms do not have a unique model. They have infinitely many models. None of them are "approximations". All of them are precise models of the Peano axioms.
You're confusing two distinct notions of indeterminacy: the non-uniqueness of formal models (such as the multiple models of the Peano axioms) and your claim that human thinking is fundamentally indeterminate. The fact that the Peano axioms have multiple models does not entail that human thought is indeterminate. Rather, it shows that certain formal systems can be interpreted in multiple ways, but these interpretations are still precise and determinate within their respective models.
The key issue being discussed isn't about whether formal systems can have multiple models but whether human thought processes, when engaged in formal reasoning (like adding or using modus ponens), can be determinate. The multiplicity of models in mathematics or logic does not imply that the mental act of reasoning itself is indeterminate. The reasoning within any of those models is still precise and follows clear, determinate rules.
Which is a problem for you, if you want to claim that human thinking is fundamentally indeterminate.
>Let me try to be a little more precise: the idea that there is such a thing as a "fact" is a hypothesis, because the existence of objective reality itself is a hypothesis.
>But notice that this presumes that there is such a thing as "the sky" i.e. that the phrase "the sky" has an actual referent in objective reality. (Languages are theories!) But if you try to pin down what that referent is you will find yourself in trouble in short order.
The notion of "facts" doesn't necessarily rely on language perfectly capturing every detail of objective reality. Rather, facts are statements that correspond to aspects of reality in a useful, pragmatic way, even if the terms we use (like "sky") are simplifications.
Yes, "the sky" as we commonly refer to it isn’t a concrete object like a tree or a chair, but that doesn't mean it’s not based on something real. The fact that two people in different locations point in different directions when they look at the sky doesn’t undermine the existence of a shared reality—they are still observing the same phenomenon (light scattering in the atmosphere) from different perspectives.
While scientific explanations, like nitrogen scattering blue light, offer more precision, they don’t negate the existence of a "sky" in ordinary discourse. Scientific descriptions and everyday language both refer to the same underlying reality, but they do so with different levels of detail.
So, the hypothesis that objective reality exists (e.g., that there is a planet with an atmosphere) allows for facts to be true or false based on how well they correspond to that reality, regardless of how we theorize about or name the components of that reality.
>In fact, it's not even the correct explanation, because there actually isn't such a thing as "the sky". What there actually is is a planet with an atmosphere which contains a lot of nitrogen, which scatters blue light. That makes it *look* (to our eyes) as if there is such a thing as a blue sky even though there actually isn't.
Ha, this is the New Atheist trick: reduce a united being to its components in order to remove meaning from it.
> The reasoning within any of those models is still precise and follows clear, determinate rules.
You are still confusing syntax and semantics. When you talk about "clear, determinate rules" you are talking about syntax, not semantics.
Here is another example of your confusion:
> The reasoning within any of those models is still precise
There is no such thing as "reasoning within a model" in the context of formal systems. All "reasoning" within a formal system is syntactic. That is the defining characteristic of a formal system. Models and semantics are just something tacked on after the fact to give the results utility.
> the hypothesis that objective reality exists ... allows for facts to be true or false based on how well they correspond to that reality, regardless of how we theorize about or name the components of that reality.
Almost. It is not the *hypothesis* that allows this, it is the *actual existence* of objective reality -- if indeed it does exist -- that allows this.
> New Atheist trick:
You ain't seen nuthin'.
Coherence
@Ron:
> But it doesn't change the fact that the symbol manipulation is still purely mechanical.
Those mechanical manipulations of symbols according to syntactic rules can be done with fidelity with human minds, yes?
>The only thing that is *required* of a formal system is the formal rules for symbol manipulation. Those are syntactic rules. Semantics are not required.
Syntax and semantics are interdependent. Syntax is designed to allow and facilitate semantic meaning. In both natural and formal languages, the purpose of syntax is to create a structured system that enables the expression of coherent and meaningful ideas.
>There is no such thing as "reasoning within a model" in the context of formal systems. All "reasoning" within a formal system is syntactic. That is the defining characteristic of a formal system. Models and semantics are just something tacked on after the fact to give the results utility.
When we reason within a formal system, even if we are merely manipulating symbols, we usually do so with the intention of representing or solving something that has meaning to us. For example, a mathematician proving a theorem in first-order logic is not just manipulating symbols for no reason but is aiming to discover something true about mathematical objects. Their reasoning is directed toward the truth or structure those symbols represent in the model or real world.
While formal systems often focus on syntactic manipulation of symbols, reasoning within a model refers to interpreting the formal symbols in a specific structure (model) that gives them meaning. The semantics of a formal system are crucial for understanding what the formal system is "about" and how the syntactic derivations relate to real-world concepts. Models are not just an afterthought, but a fundamental aspect of how we relate formal systems to the real world or abstract structures. Formal systems gain utility precisely because they allow for this semantic interpretation.
>What makes the Chinese Room an unrealistic model is that the *input* to the Room is exclusively Chinese text.
The point of Searle's argument is not merely that the person in the room doesn't understand Chinese, but that no part of the system — neither the person, the rules they follow, nor the room as a whole -- truly understands Chinese in the way a native speaker does. Searle's thought experiment is designed to show that even if a system can perfectly manipulate symbols syntactically (like a computer following a program), this does not guarantee that the system actually understands the meaning of those symbols (i.e., has semantic comprehension). The key distinction is between manipulating symbols according to formal rules (syntax) and genuinely grasping the meaning behind those symbols (semantics).
>Almost. It is not the *hypothesis* that allows this, it is the *actual existence* of objective reality -- if indeed it does exist -- that allows this.
If you subscribe to the coherence theory of truth, it's the hypothesis.
> Those mechanical manipulations of symbols according to syntactic rules can be done with fidelity with human minds, yes?
Of course, for some level of "fidelity". (I presume you meant "by" human minds?) But it requires practice. There's a reason people used to make careers out of doing symbol manipulation. Mechanical computers are much better at it than even the most skilled humans.
> Formal systems gain utility precisely because they allow for this semantic interpretation.
Yes. Another way to say this, as I have often repeated, is that some formal systems have utility because their behavior correlates with things we observe. I fail to see your point here.
> The point of Searle's argument is not merely that the person in the room doesn't understand Chinese, but that no part of the system — neither the person, the rules they follow, nor the room as a whole -- truly understands Chinese in the way a native speaker does.
Yes, I understand that. The problem is that Searle has reached the correct conclusion but for the wrong reason. The problem is not the symbol manipulation, the problem is that the symbol-manipulation isn't correlated with anything outside the Room. The Room is disembodied. It is deaf and blind. It can't taste anything, smell anything, touch anything. *That* is the reason the Room doesn't understand anything, not because it's doing symbol manipulation.
> If you subscribe to the coherence theory of truth, it's the hypothesis.
Well, then obviously I don't subscribe to this theory of truth.
Coherence is an observed property of objective reality, one of the reasons to believe that objective reality actually exists. It's not a defining characteristic of truth. Truth could be incoherent, it just turns out not to be (as far as we can tell so far).
Homunculus
@Ron:
>Of course, for some level of "fidelity". (I presume you meant "by" human minds?)
Which brings us back to:
1) All formal thinking is determinate.
2) No physical process is determinate.
3) So, no formal thinking is a physical process.
> The problem is not the symbol manipulation, the problem is that the symbol-manipulation isn't correlated with anything outside the Room. The Room is disembodied. It is deaf and blind. It can't taste anything, smell anything, touch anything. *That* is the reason the Room doesn't understand anything, not because it's doing symbol manipulation.
Searle rebuts this argument on page 4 of the paper I linked:
"Suppose we wrote a different kind of program … Suppose we put a computer inside a robot, and this computer would not just take in formal symbols as input and give out formal symbols as output, but rather would actually operate the robot in such a way that the robot does something very much like perceiving, walking, moving about, hammering nails, eating drinking -- anything you like. The robot would, for example have a television camera attached to it that enabled it to 'see,' it would have arms and legs that enabled it to 'act,' and all of this would be controlled by its computer 'brain.' Such a robot would … have genuine understanding and other mental states."
"The first thing to notice about the robot reply is that it tacitly concedes that cognition is not solely a matter of formal symbol manipulation, since this reply adds a set of causal relation with the outside world. But the answer to the robot reply is that the addition of such "perceptual" and "motor" capacities adds nothing by way of understanding, in particular, or intentionality, in general, to [the] original program. To see this, notice that the same thought experiment applies to the robot case. Suppose that instead of the computer inside the robot, you put me inside the room and, as in the original Chinese case, you give me more Chinese symbols with more instructions in English for matching Chinese symbols to Chinese symbols and feeding back Chinese symbols to the outside. Suppose, unknown to me, some of the Chinese symbols that come to me come from a television camera attached to the robot and other Chinese symbols that I am giving out serve to make the motors inside the robot move the robot’s legs or arms. It is important to emphasize that all I am doing is manipulating formal symbols: I know none of these other facts. I am receiving "information" from the robot’s "perceptual" apparatus, and I am giving out "instructions" to its motor apparatus without knowing either of these facts. I am the robot’s homunculus, but unlike the traditional homunculus, I don’t know what’s going on. I don’t understand anything except the rules for symbol manipulation. Now in this case I want to say that the robot has no intentional states at all; it is simply moving about as a result of its electrical wiring and its program. And furthermore, by instantiating the program I have no intentional states of the relevant type. All I do is follow formal instructions about manipulating formal symbols."
> The first thing to notice about the robot reply is that it tacitly concedes that cognition is not solely a matter of formal symbol manipulation, since this reply adds a set of causal relation with the outside world.
I've never claimed otherwise.
> I want to say that the robot has no intentional states at all.
Then how are you going to persuade me that *you* have "intentional states" (whatever that might actually mean)? On what possible basis can I conclude that *you* have "intentional states" other than your I/O behavior? And if I can conclude that *you* have "intentional states" based on your I/O behavior, why should I deny that judgement to another entity exhibiting the same I/O behavior just because it happens not to be made of meat?
0 for 3
@Ron:
>Then how are you going to persuade me that *you* have "intentional states" (whatever that might actually mean)?
Ha! Now you've hit upon the "other minds reply" to Searle. Searle also addressed this, and his reply to this is very short:
"The problem in this discussion is not about how I know that other people have cognitive states, but rather what it is that I am attributing to them when I attribute cognitive states to them. The thrust of the argument is that it couldn’t be just computational processes and their output because the computational processes and their output can exist without the cognitive state. It is no answer to this argument to feign anesthesia. In 'cognitive sciences' one presupposes the reality and knowability of the mental in the same way that in physical sciences one has to presuppose the reality and knowability of physical objects."
Let's check your scorecard against Searle:
1. The Systems Reply ✘
2. The Robot Reply ✘
3. The Other Minds Reply ✘
> In 'cognitive sciences' one presupposes the reality and knowability of the mental
This is called begging the question. It's one of the reasons no one takes Searle seriously (or at least why no one should).
> in the same way that in physical sciences one has to presuppose the reality and knowability of physical objects."
Not only is it not necessary to presuppose the reality of physical objects, it is explicitly denied by our best current theories. This is yet another reason Searle is unworthy of serious consideration.
Model ≠ Ontology
@Ron:
>This is called begging the question.
No, that is not begging the question. He asserts that in any study of cognition, one assumes that mental states are real and knowable. While this presupposition could be questioned, it's not circular reasoning in itself; rather, it's a methodological assumption used to proceed with research.
>Not only is it not necessary to presuppose the reality of physical objects, it is explicitly denied by our best current theories.
Ha, this is the fallacy of reification.
>It's one of the reasons no one takes Searle seriously (or at least why no one should).
>This is yet another reason Searle is unworthy of serious consideration.
How embarrassing for you, then, that his arguments are proving quite challenging for you to counter effectively.
> He asserts that in any study of cognition, one assumes that mental states are real and knowable.
No, that is not what he asserts. What he asserts is:
"one presupposes the reality and knowability of the mental"
You added the word "states" which changes the meaning in a significant way by making it more precise. Mental states are indeed real and knowable, and by coming to know them one can see that the Chinese Room argument has no merit. But "the mental" is much vaguer, and so one can -- and Searle does -- tacitly define it in whatever way is necessary in order to support one's argument. Searle's argument is essentially nothing more than, "Well, it's just *obvious* that the Room does not understand Chinese, therefore it cannot possibly understand Chinese." A more perfect example of question-begging is hard to imagine.
BTW, this is a common tactic used by charlatans, bobbing and weaving between the vague and the precise to make it look like they are advancing a coherent argument when in fact they are talking nonsense, being precise right up to the point where precision would refute their argument, and then retreating to vagary. (You have been doing the same thing with the word "determinate", applying it variously to the meanings of words, the semantics of formal systems, and God only knows what else, I haven't been keeping an inventory.)
> this is the fallacy of reification.
Clearly you did not read past the title of the paper I linked to.
> his arguments are proving quite challenging for you to counter effectively
No, Searle's arguments are easy to counter. It is *your* arguments, with your serialized presentation and occasional surreptitious re-wording of Searle (to cite but a few of your rhetorical sins), that I find challenging (though the word I would choose is "annoying").
Easy
>No, that is not what he asserts. What he asserts is:
"one presupposes the reality and knowability of the mental"
>You added the word "states" which changes the meaning in a significant way by making it more precise. Mental states are indeed real and knowable, and by coming to know them one can see that the Chinese Room argument has no merit. But "the mental" is much vaguer, and so one can -- and Searle does -- tacitly define it in whatever way is necessary in order to support one's argument.
Your argument seems to rely on fixating on a single word or phrase, while conveniently ignoring the broader context. It's a familiar tactic of yours, one that turns discussions into word games rather than addressing the real issues. Insisting on hyper-specific definitions or feigning indignation over a particular word doesn’t strengthen your case -- it simply sidesteps the conversation. There's also an air of intellectual superiority here, as if your perspective is somehow beyond reproach, which makes it difficult to have a meaningful exchange of ideas.
Let's review Searle's full statement:
"The problem in this discussion is not about how I know that other people have cognitive states, but rather what it is that I am attributing to them when I attribute cognitive states to them. The thrust of the argument is that it couldn’t be just computational processes and their output because the computational processes and their output can exist without the cognitive state. It is no answer to this argument to feign anesthesia. In 'cognitive sciences' one presupposes the reality and knowability of the mental in the same way that in physical sciences one has to presuppose the reality and knowability of physical objects."
Look at that, Searle wrote *cognitive state(s)* three times. It was only in his concluding sentence that he made a more general point: the study of cognitive sciences presumes certain facts in order to advance, and the physical sciences do the same. It's a methodological assumption used to proceed with research.
Hence my statement "He asserts that in any study of cognition, one assumes that mental states are real and knowable" is a reasonable conclusion of what Searle is asserting, when one reads and understands the entirety of his statement.
> (You have been doing the same thing with the word "determinate", applying it variously to the meanings of words, the semantics of formal systems, and God only know what else, I haven't been keeping an inventory.)
I provided a definition and have used it consistently.
"Note that in this context, "determinacy" (and its negation) are not related to scientific or engineering concepts of determinism. It has nothing to do with physical causality (such as a stop light cycles between Green-Yellow-Red in a deterministic fashion). Physical properties are "indeterminate" in the sense that they don't fix one particular meaning rather than another."
> this is the fallacy of reification.
>Clearly you did not read past the title of the paper I linked to.
Oh, I assure you, I ventured far beyond the title. Maybe it’s not that I didn’t read the paper—it’s that I didn’t reach the same earth-shattering conclusion you did. Care to enlighten me on what I must have missed?
>No, Searle's arguments are easy to counter.
Yet you can't do it. Searle is one of the more notable philosophers of the late 20th century, with notable contributions to the philosophy of language, philosophy of mind, and social philosophy. His "Chinese Room" argument is a powerful argument that meaning cannot be derived from syntax and no one has developed a strong response to refute it.
Now, if you want to refute the "Chinese Room," prove how meaning can be derived from syntax. It's that simple (or, as you said, "easy"). Proceed.
> Your argument seems to rely on fixating on a single word or phrase, while conveniently ignoring the broader context.
Single words can make significant changes in meaning. The broader context is that the Chinese Room is an argument ad ignorantiam: "I understand how the Room works, but I don't understand how the human brain works, therefore the human brain must have some kind of magic pixie dust (mental states, intention, whatever) that the Room does not."
> turns discussions into word games
Languages are theories.
> rather than addressing the real issues.
Maybe you've forgotten, but this comment thread is on a post whose title is "The Trouble With Big Numbers" and whose thesis is that the ability of human brains to conceive of the infinite is not a fatal objection to the scientific project. It is you who have taken us on a tangent by bringing up Searle.
> Hence my statement "He asserts that in any study of cognition, one assumes that mental states are real and knowable" is a reasonable conclusion of what Searle is asserting, when one reads and understands the entirety of his statement.
No. "Cognitive state" and "mental state" are not synonyms. A mental state is a physical state of your brain, and I'm pretty sure everyone agrees those exist. A *cognitive* state implies *cognition*, which is how he sneaks a dualistic assumption in through the back door. Whether cognitive states equate with mental states is the very thing that is in dispute:
"My discussion here will be directed at the claims I have defined as those of strong AI, specifically the claim that the appropriately programmed computer literally has cognitive states and that the programs thereby explain human cognition..."
> I provided a definition [of "determinate"] and have used it consistently.
No, this is the definition you gave:
"Something is "determinate" in the sense in question here if there is an objective fact of the matter about whether it has one rather than another of a possible range of meanings."
But *meaning* is something that applies to *words*, not physical processes. Your entire line of argument here is a category error.
> Oh, I assure you, I ventured far beyond the title. Maybe it’s not that I didn’t read the paper—it’s that I didn’t reach the same earth-shattering conclusion you did. Care to enlighten me on what I must have missed?
Well, you wrote, "in physical sciences one has to presuppose the reality and knowability of physical objects." If you don't see how a paper entitled "There are no particles, only fields" is a direct refutation of that claim, I don't even know where to begin to explain it to you.
> Searle is one of the more notable philosophers of the late 20th century, with notable contributions to the philosophy of language, philosophy of mind, and social philosophy.
People with credentials get things badly wrong all the time. Kary Mullis was a Nobel laureate and an HIV denialist. Linus Pauling thought vitamin C could cure anything.
> His "Chinese Room" argument is a powerful argument that meaning cannot be derived from syntax and no one has developed a strong response to refute it.
I guess we'll have to agree to disagree about that.
Reification
>Single words can make significant changes in meaning.
Focusing on a single word is your preferred method of argument. While I agree that words can have significant impact, it’s interesting how often you seem to dismiss entire arguments based on one word you find disagreeable. Words, after all, get their meaning from context. Fixating on one detail while ignoring the broader picture might be convenient, but it doesn't lead to real understanding. I’d suggest engaging with the entire argument next time—you might find it more enlightening.
>Languages are theories.
That’s an intriguing analogy, but it oversimplifies the complexity of language. Theories are frameworks used to explain phenomena, whereas languages are tools for communication, shaped by culture, context, and usage. While both involve structure and rules, they serve different purposes.
>No. "Cognitive state" and "mental state" are not synonyms. A mental state is a physical state of your brain, and I'm pretty sure everyone agrees those exist. A *cognitive* state implies *cognition*, which is how he sneaks a dualistic assumption in through the back door.
You're right that "cognitive state" and "mental state" aren't exact synonyms -- rather, a cognitive state is a subset of mental states. That said, Searle isn’t assuming a dualistic framework here. He’s exploring how cognition relates to consciousness and physical states. The debate is about whether cognitive states can be reduced to physical mental states, but acknowledging the reality of mental states doesn’t automatically imply dualism. Searle’s argument engages with this complexity directly, rather than slipping in an assumption unnoticed.
>But *meaning* is something that applies to *words*, not physical processes.
Very good! You get it right occasionally. I expect you'll now stop assigning meaning to physical processes. For example, computation is not an intrinsic property of physical systems.
>Well, you wrote, "in physical sciences one has to presuppose the reality and knowability of physical objects." If you don't see how a paper entitled "There are no particles, only fields" is a direct refutation of that claim, I don't even know where to begin to explain it to you.
First, why do you think a "field" cannot be an "object"? An "object" is often understood as anything that exists or can be thought of as having properties and standing in relations. It doesn't have to be a physical thing; an object can also be abstract. From this perspective, a field could be considered an object if it has definable properties, behaves in a regular way, and interacts with other entities.
Second, your response is a non sequitur. You had referenced the paper, and I had replied, "Ha, this is the fallacy of reification." Your response is that I hadn't read beyond the title. You didn't engage with the reification fallacy at all.
>> His "Chinese Room" argument is a powerful argument that meaning cannot be derived from syntax and no one has developed a strong response to refute it.
>I guess we'll have to agree to disagree about that.
You didn't respond to this:
>>Now, if you want to refute the "Chinese Room," prove how meaning can be derived from syntax. It's that simple (or, as you said, "easy"). Proceed.
> it’s interesting how often you seem to dismiss entire arguments based on one word you find disagreeable.
It's not that I find the words disagreeable, it is that your arguments are *wrong*, and the reason they are wrong often turns on your use of a single word. Don't try to blame me for that.
(Often your arguments are wrong for multiple reasons, but focusing on one word just turns out to be the low-hanging fruit. If you want to move past that, you'll need to start using words more precisely.)
>> Languages are theories.
> That’s an intriguing analogy, but it oversimplifies the complexity of language. Theories are frameworks used to explain phenomena, whereas languages are tools for communication, shaped by culture, context, and usage. While both involve structure and rules, they serve different purposes.
Languages are not *just* theories. But in the context of advancing a scientific argument, the choice of words is as much a part of the theory as anything else, and is fair game for criticism.
> Searle isn’t assuming a dualistic framework here.
Yes, he is. He's just doing it tacitly.
> The debate is about whether cognitive states can be reduced to physical mental states, but acknowledging the reality of mental states doesn’t automatically imply dualism. Searle’s argument engages with this complexity directly, rather than slipping in an assumption unnoticed.
He slips in myriad assumptions, not least of which is that anything that could possibly go on inside a robot brain while it is speaking Chinese is analogous to a human manipulating *Chinese characters* according to rules. *That* is Searle's Big Mistake. Yes, a computer can understand Chinese. Yes, it can do this by manipulating symbols. No, those symbols will (almost certainly) not be Chinese characters.
You have to remember that Searle was writing in 1980, at the height of fashion for symbolic AI, and long before computer technology was up to the task of building neural networks at scale. Searle's argument with respect to symbolic AI as it was being practiced in 1980 is actually probably correct. But it is as irrelevant today as an argument published in 1902 arguing that ornithopters are impossible.
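The point about the symbols is easy to see in any modern neural network: by the time a Chinese sentence reaches the machinery doing the work, the characters have already been converted into numbers. A minimal sketch (the vocabulary and embedding values below are made up purely for illustration):

```python
# A toy illustration: in a neural network, Chinese characters are
# mapped to integer IDs and then to vectors *before* any "symbol
# manipulation" happens. The system's internal symbols are numbers,
# not Chinese characters. (Vocabulary and embedding values are
# arbitrary, chosen only for this example.)

vocab = {"你": 0, "好": 1, "吗": 2}          # character -> integer ID
embeddings = [                               # ID -> vector (arbitrary values)
    [0.12, -0.45, 0.88],
    [-0.31, 0.07, 0.52],
    [0.64, 0.29, -0.13],
]

sentence = "你好吗"
ids = [vocab[ch] for ch in sentence]         # [0, 1, 2]
vectors = [embeddings[i] for i in ids]       # what the network actually sees

print(ids)       # the characters are gone before any computation begins
print(vectors)
```

Everything downstream of this step operates on the vectors; the Chinese characters themselves never enter into it.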
> computation is not an intrinsic property of physical systems.
Where did I say that it was?
> An "object" is often understood as anything that exists or can be thought of as having properties and standing in relations. It doesn't have to be a physical thing; an object can also be abstract. From this perspective, a field could be considered an object if it has definable properties, behaves in a regular way, and interacts with other entities.
This definition begs the question by its use of the word (any)thing. What qualifies as a "thing"?
It's fine if you want to call a correlation or a field a "thing" informally, but then don't use *your* conflation of two *different* kinds of "things" to criticize *my* theory.
> Now, if you want to refute the "Chinese Room," prove how meaning can be derived from syntax.
Prove? You need to re-read this focusing on Myth #3. Also review the Church-Turing thesis.
Symbolology
@Ron:
> and the reason they are wrong often turns on your use of a single word. Don't try to blame me for that.
What you are blameworthy for is focusing on a single word and ignoring the context and meaning of the entire text. That is the coin of your method, and I doubt you'll reform your ways.
>He slips in myriad assumptions, not least of which is that anything that could possibly go on inside a robot brain while it is speaking Chinese is analogous to a human manipulating *Chinese characters* according to rules. *That* is Searle's Big Mistake. Yes, a computer can understand Chinese. Yes, it can do this by manipulating symbols. No, those symbols will (almost certainly) not be Chinese characters.
Searle's Chinese Room argument isn't merely about whether the symbols manipulated by the computer (or person in the room) are Chinese characters or something else. Rather, the core of the argument is about the nature of understanding. Searle argues that symbol manipulation alone -- regardless of whether the symbols are Chinese characters or binary code -- does not result in genuine understanding or comprehension. The argument is meant to show that syntax (rules for manipulating symbols) is not sufficient for semantics (meaning and understanding). So even if the symbols were something other than Chinese characters, it wouldn't change Searle's point.
The technical details of how the computer is programmed to process information are irrelevant. Whether through symbolic AI or neural networks, the Chinese Room argument asks: "Does the system really understand, or is it just simulating understanding?" In the case of neural networks, they are still processing input and generating output based on mathematical rules, which Searle would argue doesn't amount to genuine understanding, even if the system's behavior appears more human-like. No one mistakes a simulation of weather for actual weather.
> computation is not an intrinsic property of physical systems.
>Where did I say that it was?
It's in the other comment thread, 8/19/2024 at 2:30 AM:
>> Computation is not intrinsic to the physics of a system, but assigned to it by an observer.
>Um, no. That is just ridiculous. You're confusing the theory of computation with quantum measurements or some such category error.
> and the reason they are wrong often turns on your use of a single word. Don't try to blame me for that.
> What you are blameworthy for is focusing on a single word and ignoring the context and meaning of the entire text.
But single words can *determine* the meaning of the text. The word "state" is particularly significant.
> That is the coin of your method, and I doubt you'll reform your ways.
What you call "focusing on a single word" I call "using language with precision." And you're right, I'm not going to "reform my ways" in that regard because the "coin of your method" is sloppy thinking.
> Searle's Chinese Room argument isn't merely about whether the symbols manipulated by the computer (or person in the room) are Chinese characters or something else. Rather, the core of the argument is about the nature of understanding. Searle argues that symbol manipulation alone -- regardless of whether the symbols are Chinese characters or binary code -- does not result in genuine understanding or comprehension.
Yes, I understand that. The problem is that Searle's argument is chock-full of red herrings. Consider: I replace the human in the Room who does not understand Chinese with one who speaks Chinese but can't read or write. That human is in *exactly* the same situation w.r.t. the symbol manipulation as one who does not understand Chinese, and yet this person does understand Chinese. So the fact that the human in the Room doesn't understand Chinese can't be relevant.
The kind of symbol-manipulation that a Chinese Room that actually understands Chinese would have to carry out would be a simulation of the brain of a person who understood Chinese. *That* is the kind of Room that you would have to show doesn't understand Chinese. Good luck with that. The I/O behavior of such a Room would be indistinguishable from a Chinese-speaking human, so the only possible basis for deciding that this Room didn't understand Chinese would be meat bigotry. (That term was invented by Scott Aaronson, just to give credit where it's due.)
> The technical details of how the computer is programmed to process information are irrelevant.
No, the technical details are the whole ballgame.
> computation is not an intrinsic property of physical systems.
>Where did I say that it was?
It's in the other comment thread, 8/19/2024 at 2:30 AM:
>> Computation is not intrinsic to the physics of a system, but assigned to it by an observer.
>Um, no. That is just ridiculous. You're confusing the theory of computation with quantum measurements or some such category error.
Ah. Well, you have to consider the context:
R: ... general computation at its core is actually not that complicated. When you start to put together random things at a certain level of complexity you can hardly avoid building a Turing machine.
So yes, you're right, in order to find computation you do have to look for it. The point I was making there was that if you look for it, you will find it. And the explanation for *that* is not that computation is a figment of the human imagination, but that there really is something there to be found.
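The claim that sufficiently complex rule systems can hardly avoid universal computation has a famous concrete instance: Rule 110, a one-dimensional cellular automaton whose entire update rule fits in a single byte, was proved Turing-complete by Matthew Cook. A minimal sketch of it (the boundary handling and grid size are arbitrary choices for this illustration):

```python
# Rule 110: a one-dimensional cellular automaton whose update rule is
# encoded in the eight bits of the number 110, yet which is known to be
# Turing-complete (Matthew Cook). A tiny illustration of how a trivially
# simple rule system can harbor universal computation.

RULE = 110  # the update table, read off the binary digits of 110

def step(cells):
    """Apply Rule 110 once to a row of 0/1 cells (zero-padded boundaries)."""
    padded = [0] + cells + [0]
    out = []
    for i in range(1, len(padded) - 1):
        # Pack the 3-cell neighborhood into a number 0..7 and use it
        # to index into the bits of RULE.
        neighborhood = (padded[i - 1] << 2) | (padded[i] << 1) | padded[i + 1]
        out.append((RULE >> neighborhood) & 1)
    return out

# Start from a single live cell and watch structure emerge.
row = [0] * 20 + [1] + [0] * 20
for _ in range(15):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

The rule itself is nothing but a byte-sized lookup table, yet the resulting dynamics are rich enough to emulate any Turing machine — which is the sense in which, once you go looking, computation turns up almost anywhere.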
The Thing, and the Things It Left Behind
>This definition begs the question by its use of the word (any)thing. What qualifies as a "thing"?
A "thing" is a broad and generic term used to refer to any entity or item that can be said to exist, either physically or conceptually. Philosophers often use "thing" when discussing existence in a very general sense. A thing can be:
* Physical (e.g., a rock, a tree).
* Abstract (e.g., love, justice).
* An event or process (e.g., a thunderstorm).
In metaphysics, "thing" is often synonymous with "entity" or "being," meaning something that has some form of existence, regardless of its nature.
"Anything" refers to the most inclusive concept. It can include everything that can possibly exist, whether physical, abstract, real, imaginary, or hypothetical.
In logic and philosophical discussions, "anything" is often used when talking about the broadest possible range of entities or possibilities.
"Anything" can also refer to an unknown or unspecified "thing" or entity. For example, "Anything could happen" suggests that no specific event is being named, but all possibilities are included.
"Anything" encompasses both "things" and "objects," as it refers to the entire scope of what could exist, could be conceived of, or could be logically discussed.
>> Now, if you want to refute the "Chinese Room," prove how meaning can be derived from syntax.
>Prove? You need to re-read this focusing on Myth #3. Also review the Church-Turing thesis.
Ah yes, "prove," one of your trigger words.
Why assume my challenge has to do with science? You could use mathematics, in which one can prove statements.
>> This definition begs the question by its use of the word (any)thing. What qualifies as a "thing"?
> A "thing" is a broad and generic term used to refer to any entity or item that can be said to exist, either physically or conceptually.
But that's still circular:
> An "object" is often understood as anything that exists
So I guess I have to ask you to define "exists".
> Ah yes, "prove," one of your trigger words.
Perhaps I have not made this sufficiently clear, but I do not believe that your goals here are honorable; I believe you are engaging in this discussion to persuade rather than to learn. Specifically, I think you're here to advance Christian apologetics. Because of that, I think it's best to try to nip bogus arguments in the bud before they spin even more wildly out of control than they already are.
> Why assume my challenge has to do with science?
Because it's a comment to a blog post which is part of my series on the scientific method. (See above about your motives here not being honorable.)
Lexington to Oak Ridge
@Ron:
>Perhaps I have not made this sufficiently clear, but I do not believe that your goals here are honorable
>(See above about your motives here not being honorable.)
I believe it's important to maintain a respectful and constructive dialog, even when we disagree. I would like to address your comment about my goals not being honorable. I can assure you that my intentions have been, and continue to be, focused on genuine and open discussion.
If my actions or words have led you to believe otherwise, I’d appreciate the opportunity to clarify and correct any miscommunication. That said, your statement was a personal judgment, and I find it unfair. I respectfully ask that you reconsider your words and offer an apology so that we can move forward with mutual respect.
> my intentions have been, and continue to be, focused on genuine and open discussion.
How are you advancing those intentions with the headline: "Lexington to Oak Ridge"? What does that have to do with "genuine and open discussion" about large numbers?
(For that matter, why do you put headlines on blog comments at all? In 20 years I've never seen anyone but you do that, here or anywhere else. It's really annoying. It's the on-line equivalent of walking into a party at someone's house and shouting at the top of your lungs, "Hey, everyone, look at MEEEEEEEEE!!!!!!" And then doing it again. And again. And again and again and again and again. It got old a long, long time ago.)
Anything
@Ron:
>why do you put headlines on blog comments at all?
Those aren't headlines. They can be anything (see above). The only soft requirement is that they be different from ones used recently. Their purpose is only to make my comments more distinguishable from others' comments in my RSS reader. That's it.