We are now finally ready to tackle three of the thorniest topics the human intellect has ever grappled with: the concepts of information, knowledge, and belief. The relevance of these concepts to the scientific search for God should be obvious, but I want to be explicit about it because, as ever in this series, we're going to apply the scientific method. That always begins with the identification of a Problem. The Problem that is going to motivate our inquiry into information is the observation that our DNA appears to contain information, so we have to explain where that information came from. Information generally seems to have its origins in some kind of intelligent agent, and so perhaps an intelligent agent is necessary to produce information. If so, there must be some kind of intelligent agent behind our DNA, and that agent might be God.
Likewise knowledge and belief also seem to have something to do with God. You will often hear people say, "I believe in God", or "I believe in science", or "I know mommy is in heaven" or "I know there is a chair in the storage room." We are going to try to construct a theory that explains these and many more observations about how people use the words "knowledge", "belief", and "information" in much the same way that we constructed a theory to explain our observations about how people use the word "chair."
Let's start with information because that is the least controversial, and there is actually an established scientific theory that explains it. There can be little doubt that the word "information" refers to something real. We live in the information age. Books and computers store information. Human activities create new information in the form of books and tweets and blog posts and research papers. Information can be destroyed if your hard drive breaks and you don't have a backup. Information can be copied and transmitted from place to place. But what exactly is this stuff that is being created and destroyed and moved around? What is it made of? Is human intelligence required in order to produce it, or can it be created by some purely mechanistic process?
As a first cut we might guess that information is made of atoms, because everything is made of atoms. But this fails to explain some of our observations. In particular, it fails to explain how information moves from one place to another. To move a material object from one place to another you have to move the atoms that comprise that object. And of course it is possible to move information this way too. When a material object containing information (like a book or a thumb drive) moves from one place to another, the information it contains moves with it. But it is possible to move the information contained in a material object without moving the object that contains it. This is happening right now as you read this article. Information is moving from a web server into your browser, and onto your computer screen, and into your eyes, and into your brain. Before that, the information moved out of my brain and into my laptop, and from there (eventually) to the server. But there are no atoms moving between these various locations, only light and electrical signals.
There is another important difference between how information can move from place to place and how material objects move. A given material object can only be in one location at a time. If you move a chair from A to B then at the end of that process the chair is no longer at A. But you can move information from A to B and at the end of that process the information can be in both locations at the same time. Not only that, but in some situations it is possible to make copies that are so good that you can't tell which one is the copy and which is the original. (This is actually possible with material objects too. Modern manufacturing processes can produce material objects that are for all practical purposes indistinguishable from one another. When we get to quantum mechanics we will encounter objects that are indistinguishable not just for practical purposes but totally indistinguishable by any possible experiment. Atoms are actually examples of such objects. This will turn out to have truly profound implications.)
There is a final observation we can make that provides the vital clue about what information actually is: the same material object can contain different information at different times. Again, your computer screen is the perfect example of this. Right now, the screen contains some information. Scroll, or go to a different web site or application, and your screen will contain different information even though it still contains all the same atoms as before.
You might want to pause and ponder before you go on to the next paragraph. See if you can come up with a theory of information that explains all of these observations. Here's a hint (massive spoiler alert): there is a reason that I'm talking about information immediately after introducing the concepts of systems and states.
The answer is that information is not a system, not a Thing. Information is a state. But it is a very special kind of state. Not all states contain information. If you turn your computer monitor off, then it will no longer contain any information (or at least a lot less than it does right now). But how can we quantify this? How do we distinguish information-containing states from states that don't contain information, or states that contain more information from states that contain less?
These questions were answered in the 1940s by Claude Shannon, who is one of the more famous scientists in history (so if you didn't figure out the answer to what information is, don't feel too bad, it really is a hard problem). To help you better understand the answer I want to start by pointing out another feature that distinguishes information from other kinds of states: information is invariably about something. The news is information about current events. History is information about the past. Your eyes give you information about your surroundings. And so on and so on. Information is always a relation between a thing that contains information and another thing that is the object of that information.
In fact, this turns out to be the distinguishing feature of information, and it is what allows us to formally quantify the amount of information contained in a state. The more that the state of one system constrains the state of another, the more information is contained there.
This is all best illustrated with a simple example: consider a light switch. It can be in one of two positions, on or off. In the case of an old-school mechanical switch, these states are distinguished by the actual physical location of atoms. A mechanical switch has two pieces of metal inside called contacts. If there is space between the contacts, the switch is off. If not, if the contacts are touching each other, the switch is on. The light that the switch controls can likewise be in one of two states, which we also call on and off, even though these are radically different in character from the state of the switch. "On" means there is light being emitted, and "off" means there isn't. The light contains information about the state of the switch. If the light is in the on state, then so is the switch. If the light is off, then so is the switch.
The quantity of information is the extent to which knowing the state of one system allows you to narrow down the possible states of another. In the case of our light switch, both the switch and the light can be in one of two possible states. If we know the state of either the switch or the light then we can narrow down the state of the other from 2 possibilities down to 1. We typically express this quantity as a logarithm, specifically the base-2 logarithm, and the result is the familiar unit called a bit. The base-2 logarithm of 2 is 1, so the switch and the light each contain 1 bit of information. A base pair in a DNA molecule can be any one of four possibilities, so every base pair contains 2 bits of information.
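This bookkeeping is easy to sketch in code. Here is a minimal illustration (the function name is mine, and the examples are just the ones from the text) of computing the information content of a system with some number of equally likely, distinguishable states:

```python
import math

# Information content of a system with n equally likely,
# distinguishable states: the base-2 logarithm of n, in bits.
def bits(n_states: int) -> float:
    return math.log2(n_states)

print(bits(2))   # a light switch (on/off): 1.0 bit
print(bits(4))   # a DNA base pair (4 possibilities): 2.0 bits
print(bits(26))  # one letter of the alphabet: about 4.7 bits
```

Note that the result need not be a whole number of bits; a single letter drawn from a 26-letter alphabet carries a fractional number of bits.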
Notice that when we say that a switch or a light has two states we have ignored a lot of details. The actual physical state of a switch or a light includes a lot more than just whether it is on or off. For starters, there is the actual physical location of the light or the switch. A switch can be mounted on a wall, or it can be part of a lamp, or connected directly to some wires and not be mounted on anything at all. But these kinds of details don't matter for the aspect of a light switch's behavior that we actually care about. Sometimes the actual location of a Thing contains information that we care about -- think of a lighthouse or a "Do Not Disturb" sign. But even here, these things contain information by virtue of their location being correlated with some other state. In the case of a lighthouse, its location is correlated with nearby navigation hazards. In the case of a do-not-disturb sign, its location (outside or inside the door) is correlated with someone's desire not to be disturbed.
The reason DNA can be said to contain information is that the sequence of base pairs in a DNA molecule correlates with the amino acid sequences -- and hence the shapes, and hence the functional properties -- of proteins. We are nowhere near ready to actually get into biology. I just wanted to mention that to show that this definition of information applies (or at least that it's plausible).
Note that creating information does not require intelligence. Any physical process that causes the states of two systems to become correlated creates information. There is an old joke about using a rock tied to a string as a weather station. If the rock is wet, it's raining. If the rock is moving, it's windy. If the rock is warm, it's sunny. This sounds funny, but it is actually true. The rock really does contain information about the weather. If you doubt this, consider that you can play the role of the rock. If you stand outside, then your physical state will be correlated with the weather. If it's raining, you will get wet. If it's windy, your hair will get blown around. If it is sunny, you will get warm. And then your sensory nerves will transmit that information to your brain, where that information gets turned into knowledge about the weather.
The reason people think that information requires intelligence is that they conflate information and knowledge. They are, of course, related, but they are not identical. A rock or a light switch or a book or a computer monitor can contain information, but it seems a stretch to say that a light switch "knows" whether or not a light is on. When you read a book or a blog post, information is transferred from the text into your brain, but that may or may not produce knowledge. If you look at text written in a foreign language, the information in that text is still transferred to your brain -- you see exactly the same letter shapes as someone who does understand the language. You have exactly as much information about the state of the system. What you lack is a way to attach meaning to that information, to relate that information to anything other than the state of the book.
It's not just text. A few years ago I was traveling in Africa and found a snake in our room. That snake turned out to be a black mamba, one of the deadliest snakes in the world. But I didn't know that until I got a guide to look at it. I had all the information about the snake -- how big it was, what color it was -- but I didn't know it was dangerous until someone told me.
Knowledge is more than information, more than just a simple correlation between physical states. Books contain information, but they don't know things. Likewise, a DNA molecule contains information (about how to make a protein) but it would be weird to say (except perhaps as a metaphor) that the DNA molecule knows how to make a protein.
In fact, it's not hard to show that knowledge requires consciousness. Consider a situation where you have momentarily forgotten where you left your car keys. In that moment it is fair to say that you do not know where your car keys are. But the information about where they are must still be lurking inside your brain somewhere, otherwise you wouldn't be able to recover the memory and find your keys.
Information is objective. Knowledge is subjective.
What about belief? Philosophers argue about this a lot. The commonly accepted definition of knowledge among philosophers is "justified true belief". In other words, knowledge is a kind of belief, one which cannot be false. You can believe false things, but you can't know false things.
There are lots of problems with this definition. The most well-known one is that the requirement to be "justified" doesn't work. That requirement is there to prevent lucky guesses from qualifying as knowledge. To be considered knowledge, a belief has to not only be right, there has to be a good reason why it's right. For example, if someone says, "I know my team will win the game tomorrow" that doesn't really count as knowledge even if their team actually does win, unless they can explain how they know this. (Maybe the fix was in!)
But it turns out that justification is not enough. Not all justifications are "valid" for transforming belief into knowledge. For example, imagine you are walking in the desert searching for water. You see what looks like water in the distance. It is actually a mirage, but you don't know that. It looks like water to you, but it isn't. However, by pure coincidence, there is also a well in the same location where you see the mirage. So your belief that there is water in the distance is actually true, but the reason you believe it is false. So does this count as knowledge? (This is called the Gettier problem, after Edmund Gettier, the philosopher who first pointed it out.)
But I want to point out a much more serious problem with the usual definition: it is circular! In order to determine whether a belief is knowledge you have to determine whether or not it is true. How do you do that? Unless you know that the belief is true, you can't know whether or not it is knowledge. The whole point of distinguishing between knowledge and belief is that beliefs can be false, and we want to discharge that uncertainty. But on the usual definition a belief might happen to be knowledge without our being able to tell, and we aren't content with the mere possibility of knowing. We yearn for certainty. We want not only to know, we want to know that we know! And now we are in an infinite regress. As I've already pointed out many times, we can never be certain that reality is real, that we are not living in a simulation, and so we can never be certain that the Objective Reality Hypothesis is true, and so we can never be certain that anything we believe about reality is true. The best we can do is to say that the Objective Reality Hypothesis is the best explanation for our subjective experiences: that there are chairs, and that everyone agrees that there are chairs.
Even pure mathematical truths fall to this problem. Many people believe that there are Platonic truths that can be known completely independent of any observation. Math and logic are usually cited as examples of this. But this too is false. Even as obvious a "truth" as 1+1=2 is actually not at all obvious. In the early 20th century, Alfred North Whitehead and Bertrand Russell published a monumental three-volume work called "Principia Mathematica" which was intended to put all of mathematics on a solid Platonic foundation. It famously takes 300 pages to prove that 1+1=2.
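To make Whitehead and Russell's point concrete: in a modern proof assistant the same theorem is no longer a 300-page odyssey, but it is still a theorem, derived from definitions rather than simply assumed. A sketch in Lean 4 (assuming the standard library's natural numbers):

```lean
-- In Lean 4, "1 + 1 = 2" is a theorem, not an axiom. Both sides
-- unfold, by the definition of addition on the natural numbers,
-- to Nat.succ (Nat.succ Nat.zero), so the proof is just
-- definitional equality (rfl).
theorem one_plus_one_eq_two : 1 + 1 = 2 := rfl
```

Even this one-liner rests on a tower of definitions (numerals, addition, equality) that had to be constructed first; the proof is short only because that work is hidden in the library.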
This is probably the only reason anyone remembers Principia Mathematica. No one actually reads it any more because, shortly after it was published, Kurt Gödel showed that what it was trying to do was actually impossible and the entire project was doomed from the start.
Another good example is Euclid's parallel postulate. For 2000 years, Euclid's Elements was the canonical example of mathematical reasoning. It purported to develop all of geometry from five axioms, the last of which said that given a line and a point not on that line you can draw exactly one line through the point that is parallel to the given line. This axiom was awkwardly longer than the other four, and mathematicians tried in vain for 2000 years to prove it was true using only the other four axioms. The reason they failed is because, despite the fact that for 2000 years no one questioned the truth of this axiom, it is not actually true.
There is in fact nothing in the entire history of alleged Platonic truths that has stood the test of time. Again and again, things that were intuitively obvious were shown to be false. Not even the so-called "three laws of thought and logic" stand up to scrutiny. The "law" of the excluded middle is falsified by the Liar Paradox. The "law" of non-contradiction is simple question-begging: the problem with contradictions is that they lead (with a few other innocuous-seeming assumptions) to all propositions being true, but this is only a problem if you assume that there exist propositions that are false, and that requires you to assume that objective reality exists. The "law" of identity falls to the Ship of Theseus problem.
One of the beautiful things about the scientific method is that it allows us to easily cut through all of these philosophical Gordian knots simply by asking: is knowledge real? Do we actually need it in order to explain any observations? The answer is, no, we don't. Remember the definition of the scientific method: Find the best explanation that accounts for all the observed data, and act as if that explanation is correct until you encounter contradictory data or a better explanation. Note that it says nothing about truth. The closest it comes is acting as if an explanation is correct until it is falsified or a better explanation is found. The fact that science converges towards something is an empirical observation, not something built into the method. We can give a label to "the thing that science appears to converge towards" and call it the truth, but this truth is different from metaphysical Truth. Metaphysical Truth cannot change. Scientific truth can, with new data and better explanations.
Knowledge on this view is simply the conscious awareness of the current best explanation. It is a subjective sensation, not an objective fact. (Indeed, the very existence of objective facts is a hypothesis to explain the subjective sensation that we know things!) Specifically, knowledge is the subjective sensation of certainty. Once we are sufficiently confident in a belief, we call it "knowledge" even though there isn't a sharp distinction between the two, just as there is no sharp distinction between a hypothesis and a theory. Once a hypothesis withstands a certain level of scrutiny, once it has passed a certain number of tests, once we are sufficiently confident in it, we call it a theory or a "fact" even though there is no bright line.
An interesting consequence of this view is that knowledge depends on context. When "knowing" is just shorthand for "believing with very high certainty" then it is possible for people to "know" (i.e. believe with very high certainty) mutually contradictory things simply because they have different subjective experiences. It is possible for someone to know (i.e. believe with very high certainty) that (say) the earth is 6000 years old because everyone they have contact with says so, while at the same time someone else knows (i.e. believes with very high certainty) that the earth is 4 billion years old for the exact same reason.
There is only one thing that you can even potentially know with absolute certainty and that is your own subjective experiences. Everything else you think you know is actually nothing more than things you believe with very high certainty as a result of those experiences.

