Wednesday, April 24, 2024

The Scientific Method, part 4: Eating elephants and The Big News Principle

This is the fourth in a series about the scientific method and how it can be applied to everyday life.  In this installment I'm going to suggest a way to approach all the science-y stuff without getting overwhelmed.

There is an old joke that goes, "How do you eat an elephant?  One bite at a time."  That answer might be good for a laugh, but it wouldn't actually work, either for a real elephant (if you were foolish enough to attempt to eat a whole elephant by yourself) or for the metaphorical science elephant.  Modern science has been a thing for over 300 years now, with many millions of people involved in its pursuit as a profession, and many millions more in supporting roles or just doing it as a hobby.  Nowadays, thousands of scientific papers are published world-wide every single day.  It is not possible for anyone, not even professional scientists, to keep up with it all.

Fortunately, you don't have to consume even a tiny fraction of the available scientific knowledge to get a lot of mental nutrition out of it.  But there are a few basics that everyone ought to be familiar with.  For the most part this is the stuff that you learned in high school science class if you were paying attention.  I'm going to do a lightning-quick review here, a little science-elephant amuse bouche.  What I am about to tell you may all be old hat to you, but later when I get to the more interesting philosophical stuff I'll be referring back to some of this so I want to make sure everyone is on the same page.

It may be tempting to skip this, especially if you grew up hating science class.  I sympathize.  Science education can be notoriously bad.  It may also be tempting to just leave the elephant lying in the field and let the hyenas and vultures take care of it.  The problem with that approach is that the hyenas and vultures may come for you next.  In this world it really pays to be armed with at least a little basic knowledge.

I'm going to make a bold claim here: what I am about to tell you, the current-best-explanations provided by science, are enough to account for all observed data for phenomena that happen here on earth.  There are some extant Problems -- observations that can't be explained with current science -- but to find them you have to go far outside our solar system.  In many cases you have to go outside of our galaxy.  How can I be so confident about this after telling you that there is so much scientific knowledge that one person cannot possibly know it all?

The source of my confidence is something I call the Big News principle.  To explain it, I need to clarify what I mean by "all the observed data."  By this I do not mean all of the data collected in science labs, I mean everything that you personally observe.  If you are like most people in today's world, part of what you observe is that science is a thing.  There are people called "scientists".  There is a government agency called NASA and another one called the National Science Foundation.  There are science classes taught in high schools and universities.  There are science journals and books and magazines.

The best explanation for all this is the obvious one: there really are scientists and they really are doing experiments and collecting data and trying to come up with good explanations for that data.  This is not to say that scientists always get it right; obviously scientists are fallible humans who sometimes make mistakes.  But the whole point of science is to find those mistakes and correct them so that over time our best explanations keep getting better and better and explain more and more observations and make better and better predictions.  To see that this works you need look no further (if you are reading this before the apocalypse) than all the technology that surrounds you.  You are probably reading this on some kind of computer.  How did that get made?  You probably have a cell phone with a GPS.  How does that work?

It's not hard to find answers to questions like "how does a computer work" and "how does GPS work" and even "how does a search engine work."  Like everything else, these explanations are themselves data that require an explanation, and the best explanation is again the obvious one: that these explanations are actually the result of a lot of people putting in a lot of effort and collecting a lot of data and reporting the results in good faith.  This is not to say that there aren't exceptions.  Mistakes happen.  Deliberate scientific misconduct happens.  A conspiracy is always a possibility.  But if scientific misconduct were widespread, if falsified data were rampant, why does your GPS work?  If there is a conspiracy, why has no one come forward to blow the whistle?

This is the Big News Principle: if any explanation other than the obvious one were true, then sooner or later someone would present some evidence for this and it would be Big News.  Everyone would know.  The absence of Big News is therefore evidence that no one has found any credible evidence against the obvious explanation, i.e. that there are in fact no major Problems with the current best theories.

The name "Big News Principle" is my invention (as far as I know) but the idea is not new.  The usual way of expressing it is with the slogan "extraordinary claims require extraordinary evidence."  I think this slogan is misleading because it gets the causality backwards.  It is not so much that extraordinary claims require extraordinary evidence, it's that if an extraordinary claim were true, that would necessarily produce extraordinary evidence, and so the absence of extraordinary evidence, the absence of Big News, is evidence that the extraordinary claim, i.e. the claim that goes against current best scientific theories, is false.

The other important thing to know is that not all scientific theories are the same with respect to producing Big News if those theories turn out to be wrong.  Some theories are very tentative, and evidence that they are wrong barely makes the news at all.  Other theories are so well established -- they have been tested so much and have so much supporting evidence behind them -- that showing that they are wrong would be some of the Biggest News that the world has ever seen.  The canonical examples of such theories are the first and second laws of thermodynamics, which basically say that it's impossible to build a perpetual motion machine.  These are so well established that, within the scientific community, anyone who professes to give serious consideration to the possibility that they might be wrong will be immediately dismissed as a crackpot.  And yet, all anyone would have to do to prove the naysayers wrong is exhibit a working perpetual motion machine, which would, of course, be Big News.  It's not impossible, but to say that the odds are against you would be quite the understatement.  By way of very stark contrast, our understanding of human psychology and sociology is still very tentative and incomplete.  Finding false predictions made by some of those theories at the present time would not be surprising at all.

So our current scientific theories range from extremely well-established ones for which finding contrary evidence would be Big News, to more tentative ones for which contrary evidence would barely merit notice.  But there is more to it than just that.  The space of current theories has some extra and very important structure to it.  The less-well-established theories all deal with very complex systems, mainly living things, and particularly human brains, which are the most complicated things in the universe (as far as we know).  The more well-established theories all deal with simpler things, mainly non-living systems like planets and stars and computers and internal combustion engines.

This structure is itself an observation that requires explanation.  There are at least two possibilities:

1.  The limits on our ability to make accurate predictions for complex phenomena are simply a reflection of the fact that they are complex.  If we had unlimited resources -- arbitrarily powerful computers, arbitrarily accurate sensors -- we could, based on current knowledge, make arbitrarily accurate predictions for arbitrarily complicated systems.  The limits on our ability are purely a reflection of the limits of our ability to apply our current theories, not a limit of the theories themselves.

2.  The limits on our ability to make accurate predictions for complex phenomena exist because there is something fundamentally different about complex phenomena as opposed to simple phenomena.  There is something fundamentally different about living systems that allows them to somehow transcend the laws that govern non-living ones.  There is something fundamentally different about human minds and consciousness that allows them to transcend the laws that govern other entities.

Which of these is more likely to be correct?  We don't know for sure, and we will not know for sure until we have a complete theory of the brain and consciousness, which we currently don't.  But there are some clues nonetheless.

To wit: there are complex non-living systems for which we cannot make very good predictions.  The canonical example of this is weather.  We can predict the movements of planets with exquisite accuracy many, many years in advance.  We can't predict the weather very accurately beyond a few days, and sometimes not even that.

It was once believed that the weather was capricious for the same reason that people can be: because the weather was controlled by the gods, who were very much like people but with super-powers.  Nowadays we know this isn't true.  The reason the weather is unpredictable is not because it is controlled by the gods, but because of a phenomenon called chaos, which is pretty well understood.  I'll have a lot more to say about chaos theory later in this series, but for now I'll just tell you that we know why we can't predict the weather.  It's not because there are gods operating behind the scenes, it's that there are certain kinds of systems about which it is inherently impossible to make accurate long-range predictions, even with unlimited resources, because even the tiniest error in measuring their current state grows rapidly over time.  Nature itself places limits on our ability to predict things.  It is unfortunate, but that's just the Way It Is.
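
If you want to see the core of the phenomenon without any meteorology, here is a tiny sketch (my own toy example, not a weather model) using the logistic map, a textbook example of a chaotic system.  Two starting points that differ by about one part in a million end up in completely different places after a few dozen steps, which is exactly why small errors in the initial measurements doom long-range forecasts:

```python
# A tiny illustration of chaos (not a weather model): the logistic map.
# Two trajectories that start almost identically diverge completely after
# a few dozen steps, so tiny measurement errors ruin long-range prediction.

def logistic_map(x, r=4.0):
    return r * x * (1.0 - x)

x_a, x_b = 0.200000, 0.200001   # starting points differing by a millionth
for step in range(1, 51):
    x_a, x_b = logistic_map(x_a), logistic_map(x_b)
    if step % 10 == 0:
        print(f"step {step:2d}: {x_a:.6f} vs {x_b:.6f}  "
              f"(difference {abs(x_a - x_b):.6f})")
```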

So our inability to make accurate predictions about living systems and human consciousness is not necessarily an indication that these phenomena are somehow fundamentally different from non-living systems.  It might simply be due to their complexity.  We don't have proof of that, of course, but so far no one has found any evidence to the contrary: no one has found anything that happens in a living system or in a human brain that can't be explained by our current best theories of non-living systems.  How can I know that?  Because if anyone found any such evidence it would be Big News, and there hasn't been any such Big News, at least not that I've found, and I've looked pretty diligently.

Because, as far as we can tell, our current-best theories of simple non-living systems can, at least in principle, explain everything that happens in more complex systems, we can arrange our current-best theories in a sort of hierarchy, with theories of non-living systems at the bottom, and theories of living systems built on top of those.  It goes like this: at the bottom of the hierarchy are two theories of fundamental physics: general relativity (GR) and something called the Standard Model, which is built on top of something called Quantum Field Theory (QFT), which is a generalization of Quantum Mechanics (QM) that includes (parts of) relativity.  The details don't really matter.  What matters is that, as far as we can tell, the Standard Model accurately predicts the behavior of all matter, at least in our solar system.  (There is evidence of something called "dark matter" out there in the universe which we don't yet fully understand, but no evidence that it has any effect on any experiment we can conduct here on earth.)

The Standard Model describes, among other things, how atoms are formed.  Atoms, you may have learned in high school, are what all matter is made of, at least here on earth.  To quote Richard Feynman, atoms are "little particles that move around in perpetual motion, attracting each other when they are a little distance apart, but repelling upon being squeezed into one another."  Atoms come in exactly 92 varieties that occur in nature, and a handful of others that can be made in nuclear reactors.

(Exercise for the reader: how can it be that atoms "move around in perpetual motion" when I told you earlier that it is impossible to build a perpetual motion machine?)

The details of how atoms repel and attract each other are the subject of an entire field of study called chemistry.  Then there is a branch of chemistry called organic chemistry, and a sub-branch of organic chemistry called biochemistry, which concerns itself exclusively with the chemical reactions that take place inside living systems.

Proceeding from there, biochemistry is a branch of biology, which is the study of living systems in general.  The foundation of biology is the observation that the defining characteristic of living systems is that they make copies of themselves, but that these copies are not always identical to the original.  Because of this variation, some copies will be better at making copies than others, and so you will end up with more of the former and less of the latter.  It turns out that there is no one best strategy for making copies.  Different strategies work better in different environments, and so you end up with a huge variety of different self-replicating systems, each specialized for a different environment.  This is Darwin's theory of evolution, and it is the foundation of modern biology.
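
To make the logic of that last paragraph concrete, here is a toy simulation of my own (a sketch, not real biology): a population of replicators, each with a heritable propensity to copy itself, copying imperfectly, with the population size capped to stand in for finite resources.  Nothing in the code aims for anything, and yet the average copying ability creeps upward, which is all that natural selection really amounts to:

```python
import random

# A toy population of replicators (a sketch, not a model of real biology).
# Each replicator has a heritable "copying ability": the probability that it
# produces one copy of itself this generation.  Copies are imperfect (small
# random mutations), and the population is capped to stand in for finite
# resources.  No goal is programmed in, yet copying ability drifts upward.

random.seed(1)
population = [0.5] * 100                       # everyone starts out identical

for generation in range(1, 201):
    offspring = []
    for ability in population:
        if random.random() < ability:          # better copiers copy more often
            mutated = ability + random.gauss(0, 0.02)   # imperfect copy
            offspring.append(min(1.0, max(0.0, mutated)))
    pool = population + offspring
    population = random.sample(pool, 100)      # finite resources: only 100 survive
    if generation % 50 == 0:
        avg = sum(population) / len(population)
        print(f"generation {generation}: average copying ability {avg:.2f}")
```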

Here I need to point out one extant Problem in modern science, something that has not yet been adequately explained.  There is no doubt that once this process of replication and variation gets started, it is adequate to account for all life on earth.  But that leaves a very important unanswered question: how did this process start?  The honest answer at the moment is that we don't yet know.  It's possible that we will never know.  But people are working on it, and making (what seems to me like) pretty good progress towards an answer.  One thing is certain, though: if it turns out that the answer involves something other than chemistry, something beyond the ways in which atoms are already known to interact with each other, that will be Big News.

Beyond biology we have psychology and sociology, which study the behavior of a particular biological system: human brains.  Studying them is very challenging for a whole host of reasons beyond the fact that they are the most complex things known to exist in our universe.  But even here progress is being made at a pretty significant pace.  Just over the last 100 years or so our understanding of how brains work has grown dramatically.  Again, there is no evidence that there is anything going on inside a human brain that cannot be accounted for by the known ways in which atoms interact with each other.

Note that when I say "the known ways in which atoms interact with each other" I am including the predictions of quantum field theory.  It is an open question whether quantum theory is needed to explain what brains do, or if they can be fully understood in purely classical terms.  Personally, I am on Team Classical, but Roger Penrose, who is no intellectual slouch, is the quarterback of Team Quantum and I would not bet my life savings against him.  I will say, however, that if Penrose turns out to be right, it will be (and you can probably anticipate this by now) Big News.  It is also important to note that no non-crackpot believes that there is any evidence of anything going on inside human brains that is contrary to the predictions of the Standard Model.

Speaking of the Standard Model, there is another branch of science called nuclear physics that concerns itself with what happens in atomic nuclei.  For our purposes here we can mostly ignore this, except to note that it's a thing.  There is one and only one fact about nuclear physics that will ever matter to you unless you make a career out of it: some atoms are radioactive.  Some are more radioactive than others.  If you have a collection of radioactive atoms then after a certain period of time the level of radioactivity will drop by half, and this time is determined entirely by the kind of atoms you are dealing with.  This time is called the "half-life" and there is no known way to change it.  In general, the shorter the half-life, the more radioactive that particular flavor of atom is.  Half-lives of different kinds of atoms range from tiny fractions of a second to billions of years.  One of the most common radioactive atoms, Uranium-238, has a half-life of just under four and a half billion years, which just happens by sheer coincidence to be almost exactly the same as the age of the earth.
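
The arithmetic of half-lives is about as simple as arithmetic gets: after every half-life, half of what you had is gone.  Here is a quick sketch using the standard textbook value for Uranium-238 (the formula is general; the particular numbers are just ones I'm plugging in for illustration):

```python
# Half-life arithmetic in one line: after t years, the fraction of the
# original atoms remaining is 0.5 ** (t / half_life).  The half-life below
# is the standard textbook value for Uranium-238.

U238_HALF_LIFE = 4.468e9       # years
AGE_OF_EARTH = 4.54e9          # years, approximately

def fraction_remaining(years, half_life):
    return 0.5 ** (years / half_life)

print(f"Fraction of the earth's original U-238 still around: "
      f"{fraction_remaining(AGE_OF_EARTH, U238_HALF_LIFE):.2f}")
# Prints roughly 0.49: about half of it is still here, which is part of what
# makes uranium such a convenient clock for dating the earth.
```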

There is another foundational theory that doesn't quite fit neatly into this hierarchy, and that is classical mechanics.  This is a broad term that covers all of the theories that were considered the current-best-explanations before about 1900.  It includes things like Newton's laws (sometimes referred to as Newtonian Mechanics), thermodynamics, and electromagnetism.

The reason classical mechanics doesn't fit neatly into the hierarchy is because it is known to be wrong: some of the predictions it makes are at odds with observation.  So why don't we just get rid of it?

Three reasons: first, classical mechanics makes correct predictions under a broad range of circumstances that commonly pertain here on earth.  Second, the math is a lot easier.  And third and most important, we know the exact circumstances under which classical mechanics works: it works when you have a large number of atoms, they are moving slowly (relative to the speed of light), and they are not too cold.  If things get too fast or too small or too cold, you start to see the effects of relativity and quantum mechanics.  But as long as you are dealing with most situations in everyday life you can safely ignore those and use the simpler approximations.

This, by the way, is the reason for including Step 2 in the Scientific Method.  As long as you are explicit about the simplifying assumptions you are making, and you are sure that those simplifying assumptions actually hold, then you can confidently use a simplified theory and still get accurate predictions out of it.  This happens all the time.  You will often hear people speak of "first-order approximations" or "second-order approximations".  These are technical terms having to do with some mathematical details that I'm not going to get into here.  The point is: it is very common practice to produce predictions that are "good enough" for some purpose and call it a day.
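
If you want a small taste of what those terms are getting at, here is an illustration of my own (not part of the argument above): approximating sin(x) by x is the classic first-order, small-angle approximation, and keeping the next term of the series does noticeably better as the angle grows:

```python
import math

# "Order of approximation" in miniature: sin(x) ~ x is the first-order
# (small-angle) approximation; keeping the next term of the series,
# sin(x) ~ x - x**3/6, does noticeably better at larger angles.

for degrees in (1, 10, 30, 60):
    x = math.radians(degrees)
    print(f"{degrees:2d} deg:  exact {math.sin(x):.5f}   "
          f"x {x:.5f}   x - x^3/6 {x - x**3 / 6:.5f}")
```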

Classical mechanics -- Newton's laws, electromagnetism, and thermodynamics -- turns out to be "good enough" for about 99% of practical purposes here on earth.  The remaining 1% includes things like explaining exactly how semiconductors and superconductors work, why GPS satellites need relativistic corrections to their clocks, and what goes on inside a nuclear reactor.  Unless you are planning to make a career out of these things, you can safely ignore quantum mechanics and relativity.
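
In case you're curious just how small -- and yet how consequential -- those relativistic corrections are, here is a rough back-of-the-envelope estimate for GPS clocks.  The orbital numbers are approximate values I'm supplying for illustration; this is a sketch, not a precision calculation:

```python
import math

# Rough back-of-the-envelope estimate of why GPS satellite clocks need
# relativistic corrections.  Approximate constants; a sketch, not a
# precision calculation.

G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24            # mass of the earth, kg
c = 2.998e8             # speed of light, m/s
R_EARTH = 6.371e6       # radius of the earth, m
R_SAT = 2.657e7         # GPS orbital radius (~20,200 km altitude), m
SECONDS_PER_DAY = 86400

# Special relativity: the satellite moves fast, so its clock runs slow.
v = math.sqrt(G * M / R_SAT)
sr_slowdown = (v**2 / (2 * c**2)) * SECONDS_PER_DAY

# General relativity: the satellite sits higher in earth's gravity well,
# so its clock runs fast relative to clocks on the ground.
gr_speedup = (G * M / c**2) * (1 / R_EARTH - 1 / R_SAT) * SECONDS_PER_DAY

net = gr_speedup - sr_slowdown
print(f"SR slowdown : {sr_slowdown * 1e6:5.1f} microseconds/day")
print(f"GR speedup  : {gr_speedup * 1e6:5.1f} microseconds/day")
print(f"Net drift   : {net * 1e6:5.1f} microseconds/day "
      f"(~{c * net / 1000:.0f} km/day of ranging error if uncorrected)")
```

Thirty-eight microseconds a day sounds like nothing, but light travels about eleven kilometers in that time, so an uncorrected GPS receiver would drift off the map within hours.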

And here is more good news: classical mechanics is actually pretty easy to understand, at least conceptually.  It's the stuff that is commonly taught in high school science classes, except that there it is usually taught as a fait accompli, without any mention of the centuries of painstaking effort that went into figuring it all out, nor the ongoing work to fill in the remaining gaps in our knowledge.

The reason this matters is that it leaves people with the false impression that science is gospel handed down from on high.  You hear slogans like "trust the science."  You should not "trust the science."  You should apply the scientific method to everything, including the question of what (and who) is and is not trustworthy.  And the most important question you can ask of anyone making any claim is: is this consistent with what I already know about the world?  Or, if this were true, would it be Big News?  And if so, have you seen any other evidence for it elsewhere?

It is important to note that the converse is not true.  If someone makes a claim that would be Big News if it were true but it doesn't seem to have made a splash, the best explanation for that is usually that the claim is simply not true.  But just because a claim does end up being Big News doesn't necessarily mean that it's true!  Cold fusion was Big News when it was first announced, but it ended up being (almost certainly) false nonetheless.  Big News should not be interpreted as "true" but as something more like "possibly worthy of further investigation."

Sunday, April 21, 2024

Three Myths About the Scientific Method

This is the third in a series on the scientific method.  This installment is a little bit of a tangent, but I wanted to publish it now because I've gotten tired of having to correct people about these things all the time.  I figured if I just wrote this out once and for all I could simply point people here rather than having to repeat myself.

There are a lot of myths and misconceptions about science out there in the world, but these three keep coming up again and again.  These myths are pernicious because they sound plausible.  Even some scientists believe them, or at least choose their words carelessly enough to reinforce them, which is just as bad.  Even I am guilty of this sometimes.  It is an easy trap to fall into, especially when talking about "scientific facts".  So here for the record are three myths about the scientific method, and the corresponding truth (!) about each of them.

Myth #1:  The scientific method relies on induction

Induction is a form of reasoning that assumes that phenomena follow a pattern.  The classic example is looking at a bunch of crows, observing that every one of them is black, and concluding that therefore all crows are black because you've never seen a non-black crow.

It is easy to see that induction doesn't work reliably: it is simply false that all crows are black.  Non-black crows are rare, but they do exist.  So do non-white swans.  Philosophers make a big deal about this, with a lot of ink being spilled discussing the "problem of induction".  It's all a waste of time because science doesn't rely on induction.  Any criticism that anyone levels at science that includes the word "induction" is a red herring.

It's easy to fall into this trap.  The claims that all crows are black, or that all swans are white, are wrong, but they're not that wrong.  The vast majority of crows are black, so "all crows are black" is a not-entirely-unreasonable approximation to the truth in this case, so it's tempting to think that induction is the first step in a process that gets tweaked later to arrive at the truth.

The problem is that most inductive conclusions are catastrophically wrong.  Take for example the observation that, as I write this in April of 2024, Joe Biden is President of the United States.  He was also President yesterday, and the day before that, and the day before that, and so on for over 1000 days now.  The inductive conclusion is that Joe Biden will be President tomorrow, and the day after that, and the day after that... forever.  Which is obviously wrong, barring some radical breakthrough in human longevity and the repeal of the 22nd amendment to the U.S. Constitution.  Neither of these is very likely, so we can be very confident that Joe Biden will no longer be President after January 20, 2029, and possibly sooner than that depending on his health and the outcome of the 2024 election.

How do we know these things?  Because we have a theory of what causes someone to become and remain President which predicts that Presidential terms are finite, and that theory turns out to make reliable predictions.  Induction has absolutely nothing to do with it.

Induction has absolutely nothing to do with any scientific theory.  At best it might be a source of ideas for hypotheses to advance, but the actual test of a hypothesis is how well it explains the known data and how reliable its predictions turn out to be.  That's all.

Myth #2:  The scientific method assumes naturalism/materialism/atheism

This is a myth promulgated mainly by religious apologists who want to imply that the scientific bias against supernaturalism is some kind of prejudice, an unfair bias built into the scientific method by assumption, and that this can blind those who follow the scientific method to deeper truths.

This is false.  The scientific method contains no assumptions whatsoever.  The scientific method is simply that: a method.  It has no more prejudicial assumptions than a recipe for a soufflĂ©.

Even the gold-standard criterion for a scientific theory, namely, its ability to make reliable predictions, is not an assumption.  It is an observation -- specifically, an observation about the scientific method itself: it just turns out that if you construct parsimonious explanations that account for all the observed data, those explanations turn out to have more predictive power than anything else humans have ever tried.  That is an observation that, it turns out (!), can also be explained, but that is a very long story, so it will have to wait.

The reason science is naturalistic and atheistic is not because these are prejudices built into the method by fiat, it is because it turns out that the best explanations -- the most parsimonious ones that account for all the known data and have the most predictive power -- are naturalistic.  The supernatural is simply not needed to explain any known phenomena.

Note that this is not at all obvious a priori.  There are a lot of phenomena -- notably the existence of life and human intellect and consciousness -- that don't seem like they would readily yield to naturalistic explanations when you first start to think about them.  But it turns out that they do.  Again, this is a long story whose details will have to wait.  For now I'll just point out that people used to believe that the weather was a phenomenon that could not possibly have a naturalistic explanation.

The reason science is naturalistic is not that it takes naturalism as an assumption, but rather that there is no evidence of anything beyond the natural.  All it would take for science to accept the existence of deities or demons or other supernatural entities is evidence -- some observable phenomenon that could not be parsimoniously explained without them.

Myth #3:  "Science can't prove X" or "scientists got X wrong" is an indication that science is deficient

I often see people say, "Science can't prove X" with the implication that this points out some deficiency in science that only some other thing (usually religion) can fill.  This is a myth for two reasons.  First, science never proves anything; instead it produces explanations of observations.  And second, this failure to prove things is not a bug, it's a feature, because it is not actually possible to prove anything about the real world.  The only things that can actually be proven are mathematical theorems.

Now, you will occasionally hear people speak of "scientific facts" or "the laws of nature" or even "scientific proof".  These people either don't understand how the scientific method actually works, or, more likely, they are just using these phrases as a kind of shorthand for something like "a theory which has been sufficiently well established that the odds of finding experimental evidence to the contrary (within the domain in which the theory is applicable) are practically indistinguishable from zero."  As you can see, being precise about this gets a little wordy.

The scientific method gives us no guidance on how to find good theories, only on how to recognize bad ones: reject any theory that is at odds with observation.  This method has limits.  We are finite beings with finite life spans and so we can only ever gather a finite amount of data.  For any finite amount of data there are an infinite number of theories all consistent with that data, and so we can't reject any of them on the grounds of being inconsistent with observation.  To whittle things down from there we have to rely on heuristics to select the "best explanation" from among the infinite number of possibilities that are consistent with the data.

Again, it just turns out that when we do this, the result of the process generally has a lot of predictive power.  Some of our theories are so good that they have never made a false prediction.  Others do make false predictions, but to find observations that don't fit their predictions you have to go outside of our solar system.  For theories like that we will sometimes say that those theories are "true" or "established scientific facts" or something like that.  But that's just shorthand for, "The best explanation we currently have, one which makes very reliable predictions."  It is always possible that some observation will be made that will falsify a theory no matter how well established it is.

Finding observations that falsify well-established theories does happen on occasion, but it is very, very rare.  The better established a theory is, the rarer it is to find observations that contradict it.  For less-well-established theories, finding contradictory data happens regularly.  This is also often cited, especially by religious apologists, as a deficiency but it's not.  It's how science makes progress.  In fact, the best-established theory in the history of science is the Standard Model of particle physics.  We know that the Standard Model is deficient, but not because it makes predictions that are at odds with experiment -- quite the opposite in fact.  The Standard Model has never (as of this writing) made a false prediction since it was finalized in the 1970s.  The reason we know it's deficient is not because it makes false predictions (it doesn't, or at least hasn't yet) but rather because it doesn't include gravity.  We know gravity is a thing, but no one has been able to figure out how to work it into the Standard Model.  And one of the reasons we haven't been able to do it is because we have no experimental data to give us any hints as to where the Standard Model might be wrong.  This is actually considered a major problem in physics.

That's it, my top three myths about science debunked.  Henceforth anyone who raises any of these in my presence gets a dope slap (or at least a reference to this blog post).

Monday, April 01, 2024

Feynman, bullies, and invisible pink unicorns

This is the second installment in what I hope will turn out to be a long series about the scientific method.  In this segment I want to give three examples of how the scientific method, which I described in the first installment, can be applied to situations that are not usually considered "science-y".  By doing this I hope to show you how the scientific method can be used, without any special training and without any math, to nonetheless solve real problems.

Example 1

In my inaugural blog post twenty years ago I wrote:

The central tenet of science in which I choose to place my faith is that experiment is the ultimate arbiter of truth. Any idea that is not consistent with experimental evidence must be wrong.

This was an adaptation of Richard Feynman's definition of science, given in the opening paragraphs of the first chapter of his Lectures on Physics.  Note that Feynman did not write the Lectures.  The Feynman Lectures were not written as a book; they are transcripts of lectures that Feynman gave while teaching an introductory physics course at Caltech in the early 1960s.  These lectures were recorded, and it is worth listening to a few of them to get a feel for what the original source material sounds like.

It is worth reading (or listening to) Feynman's introduction in its entirety.  It is only nine paragraphs, or nine minutes.

If you read the transcript you will see this:

The principle of science, the definition, almost, is the following: The test of all knowledge is experiment. Experiment is the sole judge of scientific “truth.”

Note that the word "truth" is in quotes.  Why?  One possibility is that these are "scare quotes", an indication that the word "truth" is being used "in an ironic, referential, or otherwise non-standard sense."  This matters because it materially changes the meaning of what Feynman is saying here.  Without the scare quotes, the passage implies that there exists a transcendent metaphysical Truth with a capital T and that science uncovers this Truth.  If that is what Feynman intended, then this would contradict what I said in the first installment, that science converges towards *something*, but that something may or may not be metaphysical Truth.

You might be tempted to argue that there is no way that I -- or anyone else for that matter -- could possibly know what Feynman actually meant, but that is not true.  We can.  How?  By going back to the original source material: there is a recording of Feynman actually speaking those words.  If you listen to it, you will find that the transcript is actually not a word-for-word transcription of what Feynman said.  Here is what he actually said, word-for-word:

Experiment is the sole judge of truth, with quotation marks...

and he goes on from there to say some other things that are not included in the transcript.  I'm not going to attempt to transcribe them because there are a lot of clues regarding his intent in his cadence and tone of voice which I cannot render as text.  But one thing should be clear: the use of scare quotes in the transcript is justified because Feynman specifically said so.

Does this prove that this is what Feynman meant?  No.  Nothing in science is ever proven.  It's possible that Feynman, because he was speaking off-the-cuff, said something he didn't intend.  It's possible that he was under the influence of alien mind-control technology.  It's possible that Richard Feynman never actually existed, that he was a myth, and all of the evidence of his existence is actually the product of a vast conspiracy.  But if you think that any of these possibilities are likely enough to pursue, well, good luck to you because I predict you're going to be wasting a lot of time.

Discussion

I'm going to break down the previous example in some painstaking detail to show how it is an instance of the process I described before.

1.  Identify a Problem.  Recall that a Problem is a discrepancy between your background knowledge and something you observe.  In this case, the discrepancy was the use of scare quotes in the printed version of the Feynman lectures, and the background knowledge that this is a transcript of something Feynman said rather than something that he wrote.

2.  Make a list of simplifying assumptions.  In this case there weren't any worth mentioning.

3.  Try to come up with a plausible hypothesis.  In this case there were two: one was that this was somehow a faithful rendering of what Feynman intended, and the other was that this was an editorial embellishment inserted by whoever produced the transcript.

4.  Subject your hypotheses to criticism.  I skipped that step because this is just a trivial example and not worth asking other people to spend any time on.

5.  Adjust your hypotheses according to the results of step 4.  Not applicable here.

6.  Do an experiment to try to falsify one or more of your hypotheses.  In this case, we had the original audio recording, and so we could go back to the source to hear what Feynman actually said.  And it turned out in this case that this new data actually falsified *both* of our initial hypotheses.  The transcript is *neither* a verbatim rendering of what Feynman said, *nor* is it an editorial embellishment by the transcriber.  Instead, it is a faithful rendering of Feynman's stated intentions, indeed arguably more faithful than a verbatim transcript would have been because (and note that here I am once again engaging in a tiny little example of applying the scientific method in a very abbreviated way) he had to work around a limitation of the medium he was using, namely, speech, which has no way of explicitly rendering punctuation.

7.  Use your theory to make more predictions.  I skipped that step here too.

Example 2

The second example comes from a real incident from when I was in elementary school.  My family emigrated from Germany to Lexington, Kentucky, in the late 60s.  My parents were secular Jews.  I spoke virtually no English.  As you might imagine in a situation like that, I was not exactly the most popular kid in school.  I got bullied.  A lot.  It went on for five years until we moved to Oak Ridge, Tennessee, at which point I was looking forward to making a fresh start.  I was no longer obviously a foreigner.  I spoke fluent English.  I was familiar with the culture (or so I thought).  I would not have my reputation as a punching bag following me around.  So I was rather dismayed when, within a few months in my new home, I was once again being bullied.

Here was a Problem.  I had a theory: I was being bullied in Lexington because I was a foreigner, and the culture wasn't welcoming to foreigners, especially not German Jews, who were just half a notch above blacks in the social pecking order.  But in Oak Ridge it was not obvious I was a foreigner.  I spoke unaccented English, I was white, I never went to synagogue or did anything else to identify myself as a Jew.  So why was I still being picked on?

To make a very, very long story short, I began to consider the possibility that my original hypothesis was fundamentally wrong, and that the reason I was being picked on had nothing to do with what I was but rather with something I was doing, and that I was engaging in the same provocative behavior (whatever that might be) in Oak Ridge as I had in Lexington.  In retrospect this was, of course, the right answer, but it took me a very long time to figure it out.  It's hard enough to think straight when you are being bullied all the time, and it's even harder when you are in the emotional throes of adolescence and puberty.  But I eventually did manage to figure out that the reason I was being bullied was quite simply that I was behaving like a jerk.  When I stopped acting like a jerk, the bullying stopped.  Not right away, of course.  Like I said, it took a very, very long time, and I'm leaving out a lot of painful details.  But I eventually did manage to figure it out and become one of the cool kids (or at least one of the cool nerds).

The point of this story is that I solved a real-world social problem using the scientific method without even realizing that I was doing it.  This happened in junior high school.  I didn't have the foggiest clue about the scientific method, hadn't even encountered it in science classes, and even if I had, the idea that it would be applicable to something besides chemistry experiments would have been laughable.  It is only in retrospect that I realized that this is what I had done.  And by coming to that realization, I have since been able to do the same thing deliberately in my day-to-day life to great effect.  I think anyone can do this, especially with a little coaching, which is one of my motivations for putting the effort into writing all this stuff.

Example 3

My third example comes from philosophy, and I'm putting it in here because it's kind of fun, but also because it actually turns out to be a generally useful guide for spotting certain kinds of invalid arguments.  The Problem we are going to address is: how did the universe come into existence?  (This qualifies as a Problem because the universe obviously does exist, and so it must have somehow come into existence, but we don't know how.)

The standard scientific answer is that we don't know.  Something happened about 13.8 billion years ago that caused the Big Bang (which is more appropriately called the Everywhere Stretch, but that's another story for another time) but we have no idea what that something is.  Religious apologists are quick to seize on this gap in scientific knowledge as an argument for God, but that is not what I want to talk about here.  (I promise I'll come back to it in a future installment.)  Instead, I want to explore a different hypothesis, one which is obviously ridiculous, and talk about how we can reject this argument in a more principled way than to point to its obvious ridiculousness.

The hypothesis goes by the name of Last Thursday-ism.  The hypothesis states that the universe was created last Thursday in the exact state it was then in.  Before that, nothing existed.  The reason you might think otherwise is that you were created with all your memories intact to give you the illusion that something existed before last Thursday when in fact it did not.

Like I said, obviously -- indeed, intentionally -- ridiculous.  But just because something is obviously ridiculous doesn't necessarily mean it's wrong.  Quantum mechanics seems obviously ridiculous too when you first encounter it, and it actually turns out to be right.  So being obviously ridiculous is not a sound reason for rejecting a hypothesis.

Can you think of a more principled argument for rejecting last-Thursday-ism?  Seriously, stop and try before you read on.  Remember that last-Thursday-ism is, by design, consistent with all currently observed data.

You might be tempted to say that last-Thursday-ism can be rejected on the grounds that it is unfalsifiable, but all it takes to fix that is a minor tweak: last-Thursday-ism predicts that if you build just the right kind of apparatus it will produce as output the date of the creation of the universe, and so the output of this apparatus will, of course, be last Thursday (assuming you get it built before next Thursday).  The cost of this apparatus is $100M (which is a bargain if you compare it to what the particle physicists are asking for nowadays).

Here's a hint: consider an alternative hypothesis which I will call the last-Tuesday hypothesis.  The last-Tuesday hypothesis states (as you might guess) that the universe was created last Tuesday.  Before that, nothing existed.  The reason you think it did is that you were created with all your memories intact to give you the illusion that something existed before last Tuesday when in fact it did not.

You could, of course, substitute any date.  Last Monday.  November 11, 1955.  Whatever.  Last-Thursday-ism is not one hypothesis, it is one of a vast family of hypotheses, one for each instance in time in the past.  And at most one of that vast family can possibly be right.  All the others must be wrong.  So unless there is some way to tell a priori which one is right, the odds of any particular one of them, including last-Thursday, being the right one are vanishingly small.  And that is why we are justified in rejecting the last-X hypothesis for any particular value of X.

Note that this is true even if the prediction made by the tweaked version of last-Thursday-ism turns out to be true!  It might very well be that if we build the apparatus described above it will output "last Thursday".  But this will almost certainly not be because last-Thursday-ism is true (because it almost certainly isn't), but for some other reason, like that the apparatus just happens to be a design for a printer that prints out "last Thursday", and this has absolutely nothing to do with when the universe was created.

Invisible Pink Unicorns

That last example may have seemed like a silly detour, but you will be amazed at how often hypotheses that are essentially equivalent to last-Thursday-ism get advanced.  I call these "invisible pink unicorn" hypotheses, or IPUs, because the canonical example is that there is an invisible pink unicorn in the room with you right now.  The only reason you can't see it is that -- duh! -- it's invisible.  This hypothesis can be rejected on the same grounds as last-Thursday-ism.  Why pink?  Why not green?  Or brown?  Or mauve?  Why a unicorn?  Why not an elephant?  Or a gryphon?  Or a centaur?  Unless you have some evidence to make one of these variations more likely than the others, they can all be rejected on the grounds that even if one of them were correct, the odds that we will choose it from among all the alternatives are indistinguishable from zero.

IPUs are everywhere, especially among religious apologists.  The cosmological argument, the fine-tuning argument, the ontological argument, etc. etc. etc. -- pretty much any argument of the form, "We cannot imagine how our present state of existence could possibly have arisen by natural processes (that is the Problem) therefore God must exist."  But "the universe was created by God" is just one of a vast family of indistinguishable hypotheses:  We cannot imagine how our present state of existence could possibly have arisen by natural processes, therefore Brahma must exist.  We cannot imagine how our present state of existence could possibly have arisen by natural processes, therefore Mkuru must exist.  And, as long as I'm at it: we cannot imagine how our present state of existence could possibly have arisen by natural processes, therefore an invisible pink unicorn with magical powers to create universes must exist.

Note that this in no way proves that God -- or Brahma or Mkuru or the Invisible Pink Unicorn -- does not exist.  It is only meant to show why certain kinds of arguments that are often invoked in favor of their existence are not valid, at least not from a scientific point of view.

This sin is by no means unique to religious apologists.  Even professional scientists will advance IPU hypotheses.  This happens more often than one would like.  String theory is the most notable example.  It is an almost textbook example of an IPU.  String theory is not a single theory, it is literally a whole family of theories, all of which are indistinguishable based on currently available data.  Some string theorists will argue (indeed have argued) that string theory can be tested by building yet another particle accelerator for the low, low price of a few billion dollars, and maybe it can.  I don't pretend to understand string theory.  But the overt similarity with last-Thursday-ism should make anyone cast a very jaundiced eye on the claims being made despite the fact that the people making them aren't crackpots.  Having scientific credentials doesn't necessarily mean that you actually understand or practice the scientific method.

[NOTE] The "read more" link below doesn't lead anywhere.  It's there because at some point I accidentally inserted a jump break at the end of this article and now I can't figure out how to get rid of it.  AFAICT it's a bug in the Blogger editor.  If anyone knows how to get rid of this damn thing please let me know.