This is part of my series on the scientific method, but it's a bit of a tangent, an interlude if you will, so I'm not giving it a number. As you will see, that will turn out to be metaphorically significant. I'm writing this because my muse Publius raised the problem of infinity in comments on earlier installments in this series, and so I thought it would be worth discussing why infinities are problematic for mathematics but not for science.

(BTW, the title of this post is an allusion to something I wrote five years ago, which itself was an allusion to something I wrote fifteen years ago. I guess I'm just good at finding trouble.)

There is an old joke that goes something like this: one cave man says to another, "I'll bet you that I can name a bigger number than you." The second cave man responds, "You're on. What's your number?" The first cave man says triumphantly, "Four!" The second cave man thinks for a while and finally says, "You win."

The joke is not just that the second cave man couldn't count to five, but that it was a silly game to begin with because the second player can (it would seem) always win by simply taking the number that the first player names and adding one. It seems obvious that you should be able to do that no matter what the first player says, because otherwise there would have to exist a counterexample, a number to which it is not possible to add 1, and obviously there is no such counterexample, right?

Well, sort of. There are systems of arithmetic in which four actually is the biggest number. In modulo-5 arithmetic, for example, you can add 1 to 4, but the result wraps around to zero, so no number ever exceeds four.

But this is obviously silly, notwithstanding that modular arithmetic really is a legitimate mathematical thing with lots of practical applications. There is obviously a number greater than four, namely five, the very number we had to deploy to describe the system in which there is no number greater than four. In fact, to describe a system of modular arithmetic whose biggest number is N we have to use a number one bigger than N. So this argument seems self-defeating.

There is another way to construct a system of arithmetic with a biggest number, and that is to simply *stipulate* that there is a biggest number, and that adding one to this number is just not allowed. Again, this might feel like cheating, but if we are using numbers to count actual physical objects, then there is already a *smallest* number: zero. So why could there not be a biggest one?
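For concreteness, here is a minimal Python sketch of both kinds of arithmetic. In the stipulated version I clamp at the biggest number rather than forbidding the addition outright, purely for illustration:

```python
# Two toy arithmetics in which "four is the biggest number".

def add_mod5(a, b):
    # Modulo-5 arithmetic: adding 1 to 4 wraps around to 0.
    return (a + b) % 5

def add_saturating(a, b, biggest=4):
    # Stipulated-biggest-number arithmetic: sums past the biggest
    # number are clamped rather than allowed.
    return min(a + b, biggest)

print(add_mod5(4, 1))        # → 0 (wraps around)
print(add_saturating(4, 1))  # → 4 (stays put)
```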

But this still feels like cheating, because if we can name the number that we want to serve as the biggest number, we can obviously (it would seem) name a number that is one more than that. So unlike zero, which is kind of a "natural" choice for a smallest number, there is no apparent "natural" choice for a biggest number. We can try playing tricks like "one more than the biggest number that we can actually name", but that is simply a fun paradox, not an actual number.

So it would appear that logic leaves us no choice but to accept that there is no biggest number, and so we have to somehow deal with the apparently inescapable fact that there are an infinite number of numbers. But that leads to problems of its own.

Imagine that you have three buckets, each of which is capable of holding an infinite number of balls. Bucket #1 starts out full of balls while the other two are empty. You now proceed to execute the following procedure:

1. Take three balls out of bucket 1 and put them in bucket 2.

2. Take one ball out of bucket 2 and put it in bucket 3.

3. Repeat until bucket 1 is empty.

That third step should make you a little suspicious. I stipulated at the outset that bucket 1 starts out with an infinite number of balls, and so if you try to empty it three balls at a time it will never be empty. But we can fix that by speeding up the process: every time you go through the loop you have to finish it in half the time you took on the previous step. That will let you perform an infinite number of iterations in a finite amount of time. Again, you need to suspend disbelief a little to swallow the idea of doing every step twice as fast as the previous one, but you needed to do that when I asked you to imagine a bucket that contained an infinite number of balls in the first place, so having to deploy your imagination is already part of the game.

The puzzle is: when you finish, how many balls are in bucket #2?

The "obvious" answer is that there are an infinite number of balls in bucket #2 (call it B2, and likewise B1 and B3). For every ball that gets removed from B2 and put in B3 there are two balls left behind in B2. So after every step there must be twice as many balls in B2 as in B3. At the end there are an infinite number of balls in B3, so there must be even more -- twice as many in fact -- left behind in B2.

And this is our first hint of trouble, because there is no such thing as "twice infinity". If you multiply the number of counting numbers by 2 -- or any other finite number -- the result is equal to (the technical term is "can be put in one-to-one correspondence with") the number of counting numbers.
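The one-to-one correspondence is easy to exhibit for a finite glimpse of the counting numbers:

```python
# "Twice infinity" is not bigger: the map n -> 2n pairs every counting
# number with a distinct even number, and every even number gets hit.
pairs = {n: 2 * n for n in range(1, 11)}  # a finite glimpse of the bijection
print(pairs)

# One-to-one: distinct inputs give distinct outputs...
assert len(set(pairs.values())) == len(pairs)
# ...and the map can be inverted, so nothing is left over on either side.
assert all(m // 2 in pairs for m in pairs.values())
```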

But now imagine that as we take the balls out of B1 and put them in B2 we mark them to keep track of the order in which we processed them. The first ball gets numbered 1, the second one gets numbered 2, and so on. Now when we pull them out in step 2, we pull them out *in order*: ball number 1 gets pulled out first, ball #2 gets pulled next, and so on. If we do it this way, then bucket 2 will be EMPTY at the end because every ball will have been pulled out at some point along the way! (In fact, we can modify the procedure to leave any number of balls in bucket 2 that we want. Details are left as an exercise.)
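Both bookkeeping schemes are easy to simulate for finitely many steps. Here is a minimal Python sketch; the labels and the remove-the-lowest-number rule implement the marked-ball variant:

```python
# Simulate N steps of the procedure with labeled balls.
# Each step: move three balls from B1 to B2, then one ball from B2 to B3.
from collections import deque

def run(steps):
    b2 = deque()
    b3 = []
    next_label = 1
    for _ in range(steps):
        for _ in range(3):        # three balls from B1 into B2
            b2.append(next_label)
            next_label += 1
        b3.append(b2.popleft())   # lowest-numbered ball from B2 into B3
    return b2, b3

b2, b3 = run(1000)
print(len(b2), len(b3))  # → 2000 1000: B2 always holds twice as many...
print(min(b2))           # → 1001: ...yet ball k is gone by step k, so in
                         # the limit every ball leaves B2 and it empties.
```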

So clearly things get weird when we start to think about infinity. But actually, when dealing with large numbers, things get weird long before we get anywhere close to infinity.

There is a famously large number called a googol (the name of the Google search engine is a play on this). It is a 1 followed by 100 zeros, i.e.:

10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000

Things are already getting a little unwieldy here. Is that really 100 zeros? Did you count them? Are you sure you didn't miss one or count one twice? To make things a little more manageable this number is generally written using an exponent: 10^100. But notice that we had to pay a price for shortening a googol this way: we lost the ability to add one! In order to write down the result of adding 1 to a googol we need to write out all 101 digits: a one, ninety-nine zeros, and a final one:

10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001

One could argue for writing 10^100+1 instead, but that doesn't work in general. Consider adding 1 to:

3276520609964131756207215092068230686229472208995125701975064355186510619201918716896974491640125539

which is less than a third of a googol.
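If you'd rather not count the zeros by hand, a few lines of Python can check these claims (the long literal is just the number above, split across two lines to fit):

```python
googol = 10 ** 100

# A googol's decimal expansion really is a 1 followed by 100 zeros,
# and adding 1 flips only the final digit:
assert str(googol) == "1" + "0" * 100
assert len(str(googol + 1)) == 101 and str(googol + 1).endswith("1")

# And the 100-digit number above really is less than a third of a googol:
n = int("3276520609964131756207215092068230686229472208995125701975"
        "064355186510619201918716896974491640125539")
assert len(str(n)) == 100
assert 3 * n < googol
print("all claims check out")
```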

But a googol is not even close to the biggest number the human mind can conjure up. Next up is a googolplex, which is a 1 followed by a googol of zeros, i.e. 10^(10^100). Adding one to that without cheating and just writing 10^(10^100)+1 is completely hopeless. There are fewer than a googol elementary particles in our universe (about 10^80 in fact) so it is simply not physically possible to write out all of the digits in a googolplex. Even if we allowed ourselves to re-use material we couldn't do it. Our universe is only about 13 billion years old, which is less than 10^18 seconds. The fastest conceivable physical process operates on a scale called the Planck time, which is the time it takes for light to travel one Planck length, about 10^-43 seconds. A single photon with a cycle time this short would have an energy of about 6.6 gigajoules, a little under two megawatt-hours, the energy equivalent of about one and a half tons of TNT. So even if we could build a computer that ran this fast it could not process the digits of a googolplex in the life span of the universe.
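A quick back-of-envelope sketch of that arithmetic (the constants are the same rough values as above):

```python
# Even a computer doing one operation per Planck time for the entire age
# of the universe falls absurdly short of a googol operations.
age_of_universe_s = 13e9 * 365.25 * 24 * 3600  # ~4.1e17 s, under 10**18
planck_time_s = 5.4e-44                        # roughly 10**-43 s
operations = age_of_universe_s / planck_time_s
print(f"{operations:.1e}")                     # ~7.6e+60 operations, total
assert operations < 1e62 < 1e100               # nowhere near a googol
```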

An obvious pattern emerges here: after 10^100 (a googol) and 10^(10^100) (a googolplex) comes 10^(10^(10^100)), a one followed by a googolplex of zeros. That number doesn't have a name, and there's not really any point in giving it one because this is clearly a Sisyphean task. We can keep carrying on forever creating bigger and bigger "power towers": 10^(10^(10^100)), 10^(10^(10^(10^100))) and so on.

What happens if we start to write power towers with large numbers of terms, like 10^10^10^10... repeated, say, 1000 times? To keep those from getting out of hand we have to invent yet another new notation. Designing such a notation gets deep into the mathematical weeds which I want to steer clear of, so I'm just going to adopt (a minor variant of) something invented by Donald Knuth called up-arrow notation: A↑B means A^(A^(... <== B times. So 10↑5 means 10^(10^(10^(10^10))). 10↑10 is already too unwieldy for me to want to type out. But even at 5 iterations it is already challenging to communicate just how vast this number is. I can explicitly expand it out one level (10^(10^(10^10000000000))) but not two -- that would require about ten gigabytes of storage. Expanding it out to three levels would require more resources than exist in this universe. Four is right out.
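Here is a minimal Python sketch of this tower-flavored ↑. Only tiny arguments are feasible, which is rather the point:

```python
def tower(a, b):
    # a↑b in this post's notation: a power tower of b copies of a,
    # evaluated from the top down (right-associatively).
    result = a
    for _ in range(b - 1):
        result = a ** result
    return result

print(tower(2, 3))  # → 16: 2^(2^2)
print(tower(2, 4))  # → 65536: 2^(2^(2^2)) = 2^16
print(tower(3, 3))  # → 7625597484987: 3^(3^3) = 3^27
# tower(10, 5) already has far too many digits to ever print.
```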

But we are nowhere near done stretching the limits of our imagination. Up-arrow notation can be iterated: A↑↑B means A↑(A↑(A... <== B times. A↑↑↑B means A↑↑(A↑↑(A... <== B times, and so on. And this allows us to arrive -- but just barely -- at one of the crown jewels of big numbers, the famous Graham's number, which uses a number of up-arrows so big that it itself needs to be described using up-arrow notation.
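Iterated arrows can be sketched the same recursive way; anything beyond toy arguments will simply never return:

```python
def tower(a, b):
    # a↑b: a power tower of b copies of a (this post's single arrow).
    result = a
    for _ in range(b - 1):
        result = a ** result
    return result

def arrow(a, n, b):
    # a followed by n up-arrows, then b: one arrow is a tower,
    # and each additional arrow iterates the level below it.
    if n == 1:
        return tower(a, b)
    if b == 1:
        return a
    return arrow(a, n - 1, arrow(a, n, b - 1))

print(arrow(2, 2, 2))  # → 4: 2↑↑2 = 2↑2
print(arrow(2, 2, 3))  # → 65536: 2↑↑3 = 2↑(2↑2) = 2↑4
# Much past this the recursion never terminates -- which is the point.
```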

Graham's number is already mind-bogglingly vast, but it's a piker compared to TREE(3). I'm not even going to try to explain that one. If you're interested there's a lot of material about it on the web. And even that is only just getting started. Everything I've described so far is still computable in principle (though not in practice). *If* we had a big enough computer and enough time then we could in principle actually write out all the digits of Graham's number or even TREE(3), or even TREE(TREE(3)) -- these patterns go on and on and on. But beyond these lie even bigger numbers which are uncomputable even in principle, though it's easy to show that they must exist. That will have to wait until I get around to discussing the halting problem. For now you will just have to take my word for it that there is a series of numbers called busy beavers which are provably larger than any number I've described so far, or even any number that can be constructed by any combination of techniques I've described so far. TREE(TREE(TREE(...))) iterated a Graham's number of times? Busy beavers are bigger, but every one of them is nonetheless finite, exactly as far away from infinity as zero is.

And it gets even crazier than that. Let's suspend disbelief and imagine that we can actually "get to infinity" somehow. We're still not done, not by a long shot. It turns out there are *different kinds* of infinity. The smallest one is the number of counting numbers or, equivalently, the number of ways you can arrange a finite set of symbols into finite-length strings. If you allow your strings to grow infinitely long then the number of such strings is strictly larger than the number of finite-length strings. And you can keep playing this game forever: the number of ways of picking out subsets of any collection (its powerset) is strictly larger than the number of things in the collection itself, and so on and so on.

But wait, there's more! All this is just what we get when we treat numbers as measures of quantity. Remember the balls-and-buckets puzzle above, and how things got really strange when we allowed ourselves to paint numbers on the balls so we could distinguish one ball from another? It turns out that if we think about adding one as not just an indicator of *quantity* but of *position* then we can squeeze different kinds of infinities *in between* the ones we just constructed above. If we think of adding one as producing "the number after" rather than just the number that is "one more than" then we can introduce a number that is "the number after all the regular counting numbers". Mathematicians call that ω (the lower-case Greek letter omega). Then we can introduce the number after that (ω+1) and the number after that (ω+2) and so on until we get to ω+ω, written ω·2 (ordinal arithmetic is not commutative, so the ω comes first). And of course we can go on from there to ω·2+1, ω·2+2... ω·3, ω·4... ω·ω=ω^2, ω^3, ω^4... ω^ω, ω^(ω^ω) and so on until we get to a power tower of ω's of height ω -- ω↑ω in our earlier notation -- which mathematicians call ε_{0}. And then the whole game begins anew with ε_{0}+1, ε_{0}+ω, ε_{0}+ε_{0}...

These kinds of infinities are called *transfinite ordinals*, and they have two interesting features. First, the "size" of each of these numbers, that is, the number of numbers between zero and any one of them, is exactly the same as the number of regular counting numbers. If we think about numbers as referring to *position*, then each ordinal is "bigger" than the one before, but if we think about them as referring to *quantity* then each one is exactly the same "size". And second, the game of inventing new ordinals does not have a regular pattern to it. It requires creativity. Adding one is always easy; the hard part is naming the *limits*, the places where a whole infinite sequence of ordinals tops out. It gets so hard that mathematicians who figure out how to do it have the resulting numbers named after them.

The study of big numbers, both finite and infinite, is a deep, deep rabbit hole, one that ultimately leads to Turing machines and the theory of computation, which is the deepest rabbit hole of all. It's fascinating stuff, and well worth studying for its own sake. But is any of this relevant for *science* or is it just an intellectual curiosity?

Until and unless we develop a "theory of everything" that allows us to predict the result of any experiment, we cannot rule out the possibility that this theory will involve very large numbers (by which I mean numbers that require power towers or beyond to represent), and possibly even infinities. But so far this has not been the case. There are only two situations in our current best theories where infinities arise, and in both of those cases there is every reason to believe that this is an indication of a problem with the theory and not a reflection of anything real.

Just in case you were wondering, those two situations are singularities inside black holes and self-interactions in quantum field theory. In the case of singularities, general relativity predicts their existence while at the same time predicting that they can never be observed because they always lie inside the event horizon of a black hole. In the case of self-interactions, it turns out that you can, in a disciplined way, just throw out the infinities when they pop up -- the procedure is called renormalization -- and the predictions made by the theory turn out to be astonishingly accurate. No one really knows why this works, but it does.

But there is another situation where a kind of infinity pops up which is not so easily dismissed.

Suppose you travel exactly one mile in a straight line, then turn exactly 90 degrees and travel another mile, again in a straight line. If you then want to return to your starting point by traveling in a straight line, how far would you have to go?

This innocuous-seeming question is the gateway to a whole series of major mathematical and philosophical problems. You can get the answer by applying Pythagoras's theorem (which, BTW, was almost certainly not discovered by Pythagoras, but that's another story): it's the square root of 2. The problem arises when you actually try to write this quantity out as a number. The answer obviously has to be somewhere between 1 and 2, so it's not a whole number. It also has to be somewhere between 14/10 and 15/10 because (14/10)^2 is 196/100, which is a little less than 2, and (15/10)^2 is 225/100, which is a little more than 2. We can keep narrowing down this range to smaller and smaller intervals, but we can never find a ratio of two integers whose square is exactly 2. The square root of 2 is an irrational number. If we wanted to write it out exactly, we would need an infinite number of digits. We haven't even gotten past the number 2 and simple multiplication, and yet somehow infinity has managed to rear its ugly head.
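The narrowing-down process is easy to mechanize. Here is a minimal Python sketch using exact rational arithmetic, so no floating-point error muddies the point:

```python
from fractions import Fraction

# Narrow rational bounds around the square root of 2 by bisection.
# The interval shrinks forever, but no endpoint ever squares to exactly 2.
lo, hi = Fraction(1), Fraction(2)
for _ in range(20):
    mid = (lo + hi) / 2
    if mid * mid < 2:
        lo = mid
    else:
        hi = mid

print(lo, hi)  # exact rational bounds, denominators dividing 2**20
assert lo * lo < 2 < hi * hi
assert lo * lo != 2 and hi * hi != 2  # no ratio of integers squares to 2
```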

I opened this post with a joke, so I'll close with another: a farmer hired a physicist to design a machine to shear sheep. After a few weeks the physicist submitted his report. It began, "Assume a spherical sheep."

It's funny because, of course, sheep aren't spherical. But there is a metaphorical spherical sheep hiding in our problem statement. It's in the phrase, "exactly one mile" (and also "exactly 90 degrees"). In order for the mathematical irrationality of the square root of 2 to matter in *the physical situation I have described* it really is critical to travel *exactly* one mile. If you deviate from this in the slightest then the distance back to your starting point can become a rational number, representable numerically with no error in a finite number of symbols. For example, if either leg of your journey is *exactly* one part in 696 longer than a mile then the return trip will be *exactly* 985/696 miles.

In fact, for any number you care to name, no matter how small, I can give you a smaller number such that adding that much distance to one leg of the trip will make the return distance be a rational number. That means that if your odometer has *any error at all*, no matter how small, the return distance could be rational.
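Here is a minimal Python sketch of how such triangles can be generated. The particular recurrence is a standard Pell-style step that I am supplying for illustration, not something derived above:

```python
# Generate right triangles whose legs differ by exactly 1, so that a trip
# of a miles and then a+1 miles returns along a whole number c of miles.
# (696, 697, 985) is the example from the post; each successive triple's
# legs are proportionally closer to "exactly one mile apart" than the last.
a, c = 3, 5            # the (3, 4, 5) triangle starts the family
prev_a, prev_c = 0, 1
triples = []
for _ in range(5):
    triples.append((a, a + 1, c))
    assert a * a + (a + 1) ** 2 == c * c       # really a right triangle
    a, prev_a = 6 * a - prev_a + 2, a          # Pell-style recurrence step
    c, prev_c = 6 * c - prev_c, c

print(triples)
# → [(3, 4, 5), (20, 21, 29), (119, 120, 169), (696, 697, 985), (4059, 4060, 5741)]
```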

Of course, "could be" is not the same as "is". It's possible that the actual underlying physical (or even metaphysical) reality is truly continuous, and actually does require an infinite number of symbols to describe. But here is the important question: *how could you ever possibly know*? What experiment could possibly demonstrate this? In order to *know* whether physical reality is truly continuous you would need to somehow obtain an infinite amount of data! To be able to tell, for example, whether our three-dimensional space is truly continuous, you would need to be able to measure a length to infinite precision. How would you do that? Forget the problems with actually *designing* such a device and how you would get around (say) thermal fluctuations and the Heisenberg uncertainty principle. I grant you arbitrarily advanced technology and even new physics, and simply ask: *what would the output of such a measuring device look like*? It can't be a digital display; a display can only show a finite number of distinct readings. So it would have to be some kind of analog output, like an old-school voltmeter. But that doesn't help either, because to read an analog gauge you still have to *look* at it, and your eye doesn't have infinite resolution. And even if your eye had infinite resolution, you would still have to contend with the fact that the needle of the gauge is made of atoms which are subject to thermal fluctuations. And if you tried to solve that problem by cooling the gauge down to absolute zero, you would still ultimately have to contend with the Heisenberg uncertainty principle. (Yes, I know I granted you new physics, but you still have to be *consistent* with quantum mechanics.)

The ultimate fact of the matter is that no matter how hard we try, no matter what technology we invent, no matter how many resources we deploy, we will only ever have a finite amount of data, and so we can always account for that data with finite explanations. We can *imagine* infinities. They might even pop up unbidden in our mathematical models. But when they do, that will almost certainly be an indication that we've done something wrong because we can know for sure that neither infinite quantities nor infinite precision can ever be *necessary* to explain our observations. In fact, we can calculate pretty easily the amount of data our universe can possibly contain, and it's a tiny number compared to what the human imagination is capable of conceiving.

Infinities are like wizards and unicorns. They're fun to think about, but they aren't real.

> we cannot rule out the possibility that this theory will involve very large numbers ..., and possibly even infinities. But so far this has not been the case.

> Infinities are like wizards and unicorns. They're fun to think about, but they aren't real.

Don't be so sure - Hadron physics and transfinite set theory.

> It's in the phrase, "exactly one mile" (and also "exactly 90 degrees"). In order for the mathematical irrationality of the square root of 2 to matter in the physical situation I have described it really is critical to travel exactly one mile. If you deviate from this in the slightest then the distance back to your starting point can become a rational number, representable numerically with no error in a finite number of symbols.

Not sure what you're trying to say here. If the distance was 1.1 miles per leg, then the length of the hypotenuse is √2.42 miles, which is an irrational number.

Furthermore, we can just define the actual distance you travelled as a new unit -- call it 1 Garret. So whatever distance you travelled, we call it 1 Garret. Then the hypotenuse is √2 Garrets.

In addition, I can likely draw a circle with more fidelity to the ideal than a triangle -- the circle has radius 1 Garret, with area π square Garrets.

> Of course, "could be" is not the same as "is". It's possible that the actual underlying physical (or even metaphysical) reality is truly continuous, and actually does require an infinite number of symbols to describe. But here is the important question: how could you ever possibly know? What experiment could possibly demonstrate this? In order to know whether physical reality is truly continuous you would need to somehow obtain an infinite amount of data!

Models of physical systems are abstractions of those systems, and the models are therefore simpler than the physical reality. All models are wrong, some models are useful (George Box).

> I grant you arbitrarily advanced technology and even new physics, and simply ask: what would the output of such a measuring device look like?

Such a hypothetical measuring device could simply output √2.

> In fact, we can calculate pretty easily the amount of data our universe can possibly contain, and it's a tiny number compared to what the human imagination is capable of conceiving.

There's your proof that there are non-physical mental properties. Map every mental property onto a physical particle in the universe. Now take the powerset (the set of all subsets) of all the particles in the universe. The members of the powerset do not correspond to any particle in the universe, as you already mapped those to other mental properties.

> > Infinities are like wizards and unicorns. They're fun to think about, but they aren't real.

> Don't be so sure

If the best you can do by way of challenging this position is a forty-year-old paper with (AFAICT) less than ten citations over that time, that does little to shake my confidence.

> Not sure what you're trying to say here. If the distance was 1.1 miles per leg, then the length of the hypotenuse is √2.42 miles, which is an irrational number.

I didn't want to get into the weeds of irrational vs non-algebraic vs non-analytic vs uncomputable. Take "irrational" as a sloppy shorthand for "uncomputable" if you like. What I'm really arguing against is your claim that the reals are necessary to model reality.

> Furthermore, we can just define the actual distance you travelled as a new unit

I don't see how that helps. You still have to be able to know that the two legs are exactly the *same*. (You also have to somehow measure a 90 degree angle to infinite precision.)

> In addition, I can likely draw a circle with more fidelity to the ideal than a triangle

I doubt that very much. How are you going to accomplish this miraculous feat? Are you going to use a compass? Paper? Ink? Those are all made of atoms, which, you will find, present major challenges for doing anything past a certain point of precision.

> Models of physical systems are abstractions of those systems,

You are really fond of the word "abstract" and its derivatives, but you have to be careful how you use them. I've denied the existence of "abstract objects" as you have defined them and you have not (yet -- see below) provided any evidence to the contrary. Here you seem to be using "abstraction" simply to beg the question: you just *proclaim* that models are abstractions (maybe true, depending on what you mean) and then conclude that:

> and the models are therefore simpler than the physical reality.

But how do you know that an abstraction is necessarily simpler than the thing that it abstracts? My guess is that this is part of your *definition* of "abstraction" (but I have to guess because you haven't actually defined it) in which case this is classic question-begging.

> Such a hypothetical measuring device could simply output √2.

Fair enough, but how would it handle a measurement that is uncomputable?

> There's your proof that there are non-physical mental properties. Map every mental property onto a physical particle in the universe. Now take the powerset (the set of all subsets) of all the particles in the universe. The members of the powerset do not correspond to any particle in the universe, as you already mapped those to other mental properties.

Particles are not what you need to count. Ideas are not made of particles. Ideas are made of *information*, which is made of *states*, not systems/particles. That's the subject of my next installment. Stay tuned.

> What I'm really arguing against is your claim that the reals are necessary to model reality.

Take it up with Einstein. GR uses real numbers.

Now, we can always build *other models* that don't need real numbers. Most of the models we build only need floating point numbers. The detail you put in the model depends on what you need from the model.

> But how do you know that an abstraction is necessarily simpler than the thing that it abstracts?

In the case of models, because we choose to make them simpler.

Given that we build models out of mathematics, the resulting models are abstract because mathematics is abstract.

If you need to review what abstract objects are, try here: Abstract Objects

> GR uses real numbers.

Yes, but as you yourself point out, we can always build other models that don't need real numbers.

> > how do you know that an abstraction is necessarily simpler than the thing that it abstracts?

> In the case of models, because we choose to make them simpler.

Then it's not necessary.

> If you need to review what abstract objects are, try here: Abstract Objects

Ah. So, as I suspected, abstract objects are philosophical nonsense. Good to know.

> Yes, but as you yourself point out, we can always build other models that don't need real numbers.

As I like to quote, "All models are wrong. Some models are useful." (George Box)

We rank models based on their error (call it ε). Models with smaller ε are considered better. However, a model with smaller ε may be much harder to understand, implement, and compute. So we consider how much error we can tolerate for our application, then choose the model that is suitable for our purpose.

GR is great for computing the orbit of Mercury. Yet, to compute the time it takes a pencil to fall from my desk to the ground, I'll use Newton's equations. Newton's equations are arguably more useful for applications on the surface of the Earth, but in some applications the ε is too high, so more complicated models are needed.

Our best physical models assume that nature contains real-valued quantities that vary continuously along a continuum. A few prominent examples are classical mechanics, electromagnetism (Maxwell's equations), GR, quantum field theory, thermodynamics, and fluid dynamics (Navier-Stokes equations).

Now most scientists and engineers will likely compute these models using floating point numbers instead of real numbers, thereby increasing the ε of the model. This is a practical choice, as 1) modern computers implement floating point in hardware, and 2) most real numbers in any continuous interval are Turing-uncomputable. An object's change in speed, or change in spatial location, may be a transformation of one Turing-uncomputable real-valued quantity into another. A transformation of one Turing-uncomputable value into another Turing-uncomputable value is certainly a Turing-uncomputable operation. Yet since scientists and engineers want to compute an answer, they accept the extra ε and use floating point.
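The gap between floats and reals is easy to demonstrate; a minimal Python sketch:

```python
# Floating point is a finite stand-in for the reals: every float is a
# rational with a power-of-two denominator, so most reals (and even most
# rationals, like 1/10) can only be approximated.
x = 0.1 + 0.2
print(x)         # → 0.30000000000000004 -- the ε the commenter accepts
print(x == 0.3)  # → False

# The approximation error is bounded and known in advance:
import sys
print(sys.float_info.epsilon)  # ~2.22e-16 relative error per rounding
```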

> Ah. So, as I suspected, abstract objects are philosophical nonsense. Good to know.

Now here is something science can explain. Students, when they don't understand concepts, often state that the subject "is stupid" and "I'll never need this" (this is often heard in math classes). They do this to protect their self-esteem and to shift the focus away from their own confusion.

Philosophy, by nature, deals with questions outside the realm where there are settled, reliable, widely accepted answers. All philosophical questions are unsolved and/or controversial, by definition.

If a philosopher does definitively solve a question, it passes out of the realm of philosophy.

All sciences originate in solved philosophies. Yet science cannot establish the methodology of how to do science; that is done by philosophy. Science cannot establish a metric of goodness for a scientific study; that is done by philosophy. Science cannot identify ANY objective to pursue in one's life, including for a scientific investigation. That is generally done at a much lower level of intuition, in pre-philosophic motivations. Philosophy provides the methodology to evaluate, question and possibly change one's intuited objectives.

They are different kinds of inquiry, and one leads to the other. There are no current questions that philosophy has a solution to that science lacks, but future sciences might yet be born from work philosophers are doing today.

> As I like to quote, "All models are wrong. Some models are useful." (George Box)

You do indeed like to quote that, and it might even be true, but it misses a very important point: scientific models become less wrong over time.

> We rank models based on their error (call it ε). Models with smaller ε are considered better.

Not necessarily. It depends on your purpose. GR has a smaller error than Newton, but most of the time it doesn't matter and Newton works just fine, as you yourself point out:

> However, a model with smaller ε may be much harder to understand, implement, and compute. So we consider how much error we can tolerate for our application, then choose the model that is suitable for our purpose.

Yes, that is exactly right.

> Our best physical models assume that nature contains real-valued quantities that vary continuously along a continuum.

Our best *current* models make this assumption. It does not follow that the continuum is *actually necessary* or that a better model can't be had by discharging this assumption.

BTW, even models that make the continuum assumption mostly do so as part of their formalism, not as part of their ontology. The only theory I know of that relies on non-quantized space as part of its ontology is the Bohm interpretation of QM, and that is one of the reasons I'm personally skeptical of it.

> > Ah. So, as I suspected, abstract objects are philosophical nonsense. Good to know.

> Now here is something science can explain. Students, when they don't understand concepts, often state that the subject "is stupid" and "I'll never need this" (this is often heard in math classes). They do this to protect their self-esteem and to shift the focus away from their own confusion.

That I'm just too stupid to understand "abstract objects" is a hypothesis that I am willing to seriously entertain. But here's your challenge: I have quite a bit of evidence that I'm not all that stupid, some of which involves people paying me quite a lot of money to do things that are not generally associated with the kind of abject stupidity that would be necessary to render me *uneducatable* about abstract objects. It seems much more likely to me that either your pedagogy is poor, or (far more likely) that abstract objects actually are exactly the philosophical bullshit that they appear to be.

BTW, one way you can persuade me otherwise is to point to a practical result that someone has achieved by taking "abstract objects" seriously. I predict you will be unable to do so.

> Philosophy, by nature, deals with questions outside the realm of where there are settled, reliable, widely accepted answers. All philosophical questions are unsolved and/or controversial, by definition.

The same is true of science. The difference is that science has an objective criterion for filtering out bad ideas, and philosophy doesn't, and so bad ideas have much more longevity in philosophy than they do in science.

This is not to say that philosophy does not produce good ideas on occasion -- it does. It's just a lot harder to separate the wheat from the chaff when you can't do an experiment.

What's a number?

> BTW, one way you can persuade me otherwise is to point to a practical result that someone has achieved by taking "abstract objects" seriously. I predict you will be unable to do so.

On Tuesday, I needed to buy a lemon and some broccoli at the grocery store. I went into the produce department, and without reading any signs or labels, I was able to locate the lemons and choose one, then locate the broccoli and choose a couple of heads.

Yesterday I added a tip onto my lunch tab using numbers.

We use abstract objects all the time.

Try living without abstract objects tomorrow. Just wake up and say, "Today, I'm going to ignore numbers." When paying for your coffee, just give the barista a wad of cash and hope they don't charge you an unspecified amount of money.

You're begging the question, assuming the thing you are trying to prove. You can't show that numbers are abstract concepts by assuming that numbers are abstract concepts.


Endquote

@Ron:

> You're begging the question, assuming the thing you are trying to prove. You can't show that numbers are abstract concepts by assuming that numbers are abstract concepts.

Let me quote the person you like to quote most often:

> The right question to ask is not, "Does X exist." The answer is always "yes". The right question is, "What is the nature of X's existence?" or "To which ontological category does X belong?" (source)

Hence, I am not begging the question.

>> In addition, I can likely draw a circle with more fidelity to the ideal than a triangle

> I doubt that very much. How are you going to accomplish this miraculous feat? Are you going to use a compass? Paper? Ink? Those are all made of atoms, which, you will find, present major challenges for doing anything past a certain point of precision.

Are you asserting that one cannot build an ideal geometric circle out of matter, energy, or some combination of both? What if I got a lot of photons to orbit around a black hole? Electrons to orbit in a magnetic field? NIST made a sphere.

> Are you asserting that one cannot build an ideal geometric circle out of matter, energy, or some combination of both?

Correct.

> What if I got a lot of photons to orbit around a black hole?

How exactly are you going to accomplish that? (You should probably read this.)

> Electrons to orbit in a magnetic field?

Do I really need to explain to you why that won't work? (Hint: what happens when you accelerate a charged particle?)
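If you want to see roughly why, here's a back-of-the-envelope sketch using the classical Larmor formula. The field strength and electron speed are values I've assumed for illustration; the calculation is classical and non-relativistic:

```python
import math

# Back-of-the-envelope sketch: an electron circling in a magnetic field is
# constantly accelerating, so it radiates (classical Larmor formula) and the
# orbit decays. Field and speed below are assumed illustrative values.
e    = 1.602e-19   # electron charge, C
m    = 9.109e-31   # electron mass, kg
c    = 2.998e8     # speed of light, m/s
eps0 = 8.854e-12   # vacuum permittivity, F/m

B = 1.0            # magnetic field, T (assumed)
v = 1.0e6          # electron speed, m/s (assumed, safely non-relativistic)

omega = e * B / m                               # cyclotron angular frequency
a = v * omega                                   # centripetal acceleration
P = e**2 * a**2 / (6 * math.pi * eps0 * c**3)   # Larmor radiated power, W
tau = (0.5 * m * v**2) / P                      # time to radiate away the KE

print(f"orbit decay timescale ~ {tau:.1f} s")
```

The decay timescale comes out to a few seconds, not eternity. Classically, the "orbit" is a death spiral, not a circle.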

> NIST made a sphere.

First, a sphere is not a circle.

Second, that "sphere" deviates from perfect sphericity by many hundreds of atomic radii. That's an impressive engineering feat, but it's nowhere near the Platonic ideal.

Even if you could make a sphere that was spherical to within the radius of a silicon atom, you'd still be left with the problem of defining exactly where the outer surface of this "sphere" actually was because Heisenberg.

Agreement at last

>> Are you asserting that one cannot build an ideal geometric circle out of matter, energy, or some combination of both?

@Ron:

> Correct.

Well, I agree with you.

That's nice. All is well and good. Hey, look at those flowers.

Uh oh, though: your position creates a real problem for your naturalistic world view. I can think of an ideal geometric circle. Yet the brain is physical, so it cannot be used to build an ideal geometric circle. Hence we must conclude that our thoughts, the mind, are immaterial.

> I can think of an ideal geometric circle.

I'll bet you can think of a lot of other things that don't exist: Wizards. Unicorns. Santa Claus. Just because you can imagine these things does not show that the mind is immaterial any more than the fact that I can put a copy of Lord of the Rings on my computer shows that computers are immaterial.