Thursday, November 23, 2023

Why I Don't Believe in Jesus

My old friend Publius posted a comment (which he has apparently since deleted) on my earlier essay about why I don't believe in God, saying "the God which Jesus revealed to us is nothing like [the God of the Old Testament]."  Setting aside the fact that Jesus disagreed, I thought it would be worthwhile expanding specifically on why I don't believe in Jesus.

Jesus is certainly nowhere near as odious as the God of the Old Testament, which I will refer to here as YHWH.  A central pillar of mainstream Christian theology is that Jesus and YHWH are the same, modulo some weirdness having to do with the Trinity which I am not going to get into.  Among Christian denominations that I am familiar with (and that is a very long list) Jehovah's Witnesses are the only ones who deny that Jesus and YHWH are the same.  Frankly, I find their arguments compelling, but that is neither here nor there.  The mere fact that God would have left this crucial point open to argument is one of the reasons I don't believe that Jesus was divine.

But I'm getting way ahead of myself.  Let me start at the beginning.

I grew up in the American South (Kentucky, Tennessee, and Virginia), a child of secular Jews.  With the exception of one three-day period at the end of a YMCA summer camp when I was 12 (that's another story) I've been an atheist all of my life.  But I've also been steeped in Southern Baptism from the age of 5 until I moved to California at 24.  I have always wanted to try to understand why and how people maintain beliefs that to me are so obviously wrong.  Towards that end I've been studying Christianity and the Bible for over 40 years.  For four years I actually ran a Bible study, first at a local church, and then on-line when covid hit.  I am by no means a Biblical scholar, I do this strictly as a hobby.  But I think I know the Bible and Christianity better than the average bear.

I mention this because a lot of Christians are convinced that the only possible reason anyone could be a non-Christian is either ignorance or willful rejection of what they know in their heart of hearts to be true.  I'm writing this essay in part to bear witness to the fact that these people are wrong.  I am not ignorant, and I do not harbor a secret belief in God.  I have come to the conclusion that there are no deities -- indeed there is nothing at all supernatural in this world -- in good faith after long and diligent study.  I might be wrong.  If I am, then I really would like someone to persuade me, because I don't want to be wrong.  I want to know the truth.  But at this point I'm pretty sure I've heard every argument there is and none of them are convincing.

Let's start with the fact that Christianity is not a unified set of beliefs.  It's a real challenge to come up with even a single claim that all people who self-identify as Christian would agree on.  Even the idea that Jesus is God is denied by Jehovah's Witnesses.  These disagreements go all the way back to the dawn of Christianity.  Even Jesus and Paul had different theologies.  But once again I am getting ahead of myself.

In order to try to avoid getting lost in the theological weeds, I am going to critique a specific hypothesis, one which no Christian denomination espouses in its entirety, but which almost all would agree with at least to some extent, even the Witnesses.  That hypothesis is:  Jesus was a physical being who walked the earth in point of actual fact, like Mohammed or Julius Caesar, and unlike, say, Harry Potter or Albus Dumbledore.  The details are debatable, but there was something extraordinary about him.  He was somehow in communion with the supernatural.  He performed miracles, which is to say, things happened when he was around that could not be accounted for by the laws of physics.  He was executed, crucified, by the Roman authorities, but he rose from the dead, and his resurrection matters because it somehow redeemed our sins and gives us a shot at salvation in the afterlife.  Or something like that.  As you will see, the exact details don't really matter.  What matters is that Jesus was somehow special.

The central evidence advanced to support this hypothesis is the Bible, which was written by humans, but is somehow distinguished from other human writings by again being somehow in communion with the supernatural.  The Bible is "the Word of God" or "inspired by God" or has some property that sets it apart from, say, Beowulf or The Iliad.  The Bible may have mythological or metaphorical elements, but it is somehow in contact with actual metaphysical truth in ways that other works of human literature are not.

The authority of the Bible is generally accepted on faith, but there is actually an argument for it which goes something like this: the Bible was written over a period of many hundreds of years by dozens of different authors.  It nonetheless contains a unified message.  In particular, it contains the story of how we were created by God, how we fell from grace by disobeying Him, how we are now as a result separated from Him by sin, and how Jesus came to redeem those sins and reunite us with our creator.  The reason we can be confident that this is true is, among other things, the Bible contains prophecies which have since been fulfilled by verifiable events, which their authors could not possibly have known except through divine revelation.  And many of those prophecies were fulfilled by the life of Jesus as recounted in the Gospels, and we can have confidence in their accuracy because the Gospels were written by four independent eye witnesses: Matthew, Mark, Luke, and John.  In addition we have written testimony from Paul of Tarsus who met the risen Jesus on the road to Damascus.

Taken at face value this argument seems rather compelling.  Jesus is better attested than many historical figures whose actual existence is taken for granted, like Socrates.  Like Jesus, Socrates left no writings of his own.  His life is attested entirely through the writings of witnesses like his student Plato.  And yet no one doubts that Socrates was real, so how could any rational person possibly deny Jesus?

The big difference between Socrates and Jesus, of course, is that Socrates didn't claim to be God.  He didn't perform miracles.  He didn't say that it was necessary to believe in him in order to avoid eternal torment in the afterlife.  So the claims made about Jesus are rather more extraordinary, and the stakes are considerably higher.  If we get the question of Socrates's existence wrong, it doesn't really matter; we're not going to suffer any serious consequences.  In the end it really doesn't much matter whether Socrates was real or mythological, just as it doesn't really much matter whether William Shakespeare was a real person or not.  What matters are the ideas, not the man.  But in Jesus's case, it is very much the man that matters.  One of Jesus's core ideas is that belief in Jesus is the key to salvation.  So in Jesus's case it's really important that we get it right.

So with that in mind, let's take a closer look at the Bible.

The Bible is not a single book.  It is an anthology.  The exact number of works collected therein depends on how you count.  The Catholic Bible has 73.  The original King James Bible had 80, but modern versions pare that down to 66.  Whatever the number, the Bible can be divided cleanly into Old and New Testaments.  The former is written mostly in Hebrew with a little bit of Aramaic thrown in, while the latter is written entirely in Greek.  The former was written entirely before the birth of Jesus, while the latter was written entirely after his death.  Jews, Christians and Muslims all accept the Old Testament as gospel (with an asterisk in the case of Muslims) but only Christians and Muslims accept the New Testament.

It is generally agreed even among the religious that the Bible was written by humans.  Believers will of course say that these humans were inspired by God, but no one claims that the Bible was literally written by God Himself.  (By way of contrast, the authorship of the Quran is attributed literally to Allah, with the Prophet Mohammed PBUH being a mere stenographer taking word-for-word dictation directly from the archangel Gabriel.)  The authorship of the Torah, the first five books of the Old Testament, is attributed by tradition to Moses, but it is almost certain that he did not write it, at least not all of it.  For one thing, the Torah contains an account of Moses's death and burial in some unknown place, which seems an unlikely thing for Moses to have written himself.  For another, the Torah contains accounts of things that happened long before Moses was born.

We can actually see pretty easily that the Torah is almost certainly the work of multiple authors.  We need look no further than the first two chapters of Genesis, and in particular, the abrupt transition in narrative style and content that occurs between the third and fourth verses of chapter 2.  Ge2:3 wraps up one creation narrative, while Ge2:4 starts a new, radically different one written in a completely different style.  God is no longer referred to simply as "God" (Elohim) but rather as "the LORD God" (YHWH Elohim).  Apologists claim that the second story is just an amplification of the first, filling in some details that the first one omitted, but the two creation narratives are not logically compatible with each other.  In the first one, animals are created before humans, and male and female humans are created together.  In the second, Adam (he doesn't even have a name in the first story) is created first, then the animals are created in an unsuccessful attempt to find suitable company for Adam, and finally Eve is created as the LORD God's final act of creation.

At best, it seems to me that God should have hired a better copy editor.

In any case, the point is that there is considerable doubt about who wrote the various parts of the Bible, and that makes it harder to assess the truth of the claim that the Bible is the Word of God.  What does that claim even mean?  It clearly cannot mean that God literally wrote the Bible.  At best, it means that God somehow guided the process of the Bible's creation over many centuries to make it credible.  But the details of that process have been lost in the mists of time.  We have no idea who wrote most of the Bible.  We have no idea who curated the works that comprise it.  We have no way to assess the credibility and qualifications of the people who did this work because for the most part we have no idea who they were.

This is a serious problem because even if God were real and even if he guided the production of the Bible, how can we be confident that some mistakes didn't sneak in somewhere along the line?  Consider, for example, Leviticus 20:13 and Numbers 15:32-35.  These passages say (or at least strongly imply) that homosexuality and working on the Sabbath should both be capital crimes.  Is that really the Will of God, or is it perhaps something that some unknown author living in a very different time and culture sincerely believed to be the Will of God, even though the author was actually mistaken?  How can we possibly know without the ability to trace these ideas back to their roots?

The New Testament has many of the same problems.  It is more recent and so we know a lot more about its authorship than we do about the Old Testament's, but there are still only 13 (out of 27) books whose author is named in the work.  All of those authorship claims are the same: the apostle Paul.  There is some dispute over whether some of Paul's works are forgeries, but that is neither here nor there.  What matters is that all of the rest of the New Testament is anonymous.  The Gospels in particular, despite being attributed by tradition to Matthew, Mark, Luke and John, are actually anonymous works.  And it's actually pretty clear that whoever wrote them, it was not the traditionally attributed authors.  The Gospels of Matthew and John, for example, refer to their respective putative authors in the third person, which would be a little bit weird if Matthew and John were the authors.

Christians commonly argue that the Gospels are reliable because they are four independent eyewitness accounts of the events they recount, but this is not true.  They are neither independent nor eyewitness accounts.  Matthew and Luke clearly copied from Mark, and we have no idea whether or not they are eyewitness accounts because we have no idea who wrote them.  In fact, Luke specifically denies being an eyewitness, saying instead that he is writing "a declaration of those things which are most surely believed among us" and not, say, "those things which I beheld while sojourning in Judea."  In this regard I am happy to take the author of Luke at his word and accept that the Gospel of Luke is an accurate record of those things which were "most surely believed" among his peers.  That says absolutely nothing about whether or not those things are actually true.

The gospels are also not internally consistent.  Matthew and Luke, for example, present radically different genealogies tracing Jesus's descent from David.  I've heard apologists explain this by saying that they were skipping generations, but Matthew denies this, specifically citing the number of generations between three key events in his timeline so you can easily see that there cannot be any unaccounted-for gaps.  There are similar irreconcilable inconsistencies in the various accounts of the discovery of the empty tomb.

Apart from logical inconsistencies, there are also a lot of events described there that just seem mighty hinky to me.  For example, Matthew (27:50-53) says:

"Jesus, when he had cried again with a loud voice, yielded up the ghost.  And, behold, the veil of the temple was rent in twain from the top to the bottom; and the earth did quake, and the rocks rent; And the graves were opened; and many bodies of the saints which slept arose, And came out of the graves after his resurrection, and went into the holy city, and appeared unto many."

There are three things that strike me as odd about this passage.  First, it is recorded nowhere except Matthew.  You would think that if zombies really walked the streets of Jerusalem and "appeared unto many" that someone besides the author of Matthew would have taken the trouble to write it down.  Second, Matthew writes that the bodies of the saints came out of the graves after "his (presumably Jesus's) resurrection", but at this point in the narrative Jesus had not yet been resurrected.  That's not going to happen for another three (or two depending on how you count) days.

But the third peculiarity dwarfs the other two.  Jesus's resurrection is supposed to be the deal-closer, the one miracle that proves definitively that he was in fact (the son of) God.  But if we take Matthew at his word, Jesus's resurrection was not a singular event at all!  Jerusalem was already lousy with formerly dead bodies walking around!  So what exactly is it that makes Jesus's resurrection special?  The whole thing just makes no sense to me on both historical and theological grounds.

Now, none of this proves anything.  One of the things I've learned over the years is that apologists have answers for everything.  But the overriding question for me has always been: why are apologetics even necessary?  If there is a coherent truth behind the story of Jesus, why did God not see to it that it got written down in a way that made it self-evident?

Of course, apologists have answers for everything, and so they have an answer for that too, and the answer (at least the one given by my Southern Baptist peers in my youth) is that God specifically does not want there to be definitive proof of His existence.  He wants you to have faith, to accept Him specifically without proof, even in the face of compelling evidence to the contrary.  Jesus makes this quite explicit in John 20:29:

Jesus saith unto him, Thomas, because thou hast seen me, thou hast believed: blessed are they that have not seen, and yet have believed.

According to Jesus, credulity is a virtue (which, BTW, is at odds with what YHWH said in Deuteronomy 18:21-22).  This idea is deeply ingrained in our society.  Being a "person of faith" is generally considered a good thing.

But there is a fundamental problem with faith: if you're going to have faith, you still have to somehow decide what to have faith in.  If you're going to have faith in a deity you still need to decide which deity.  If you're going to have faith in Jesus you have to decide which of the many different versions of Jesus you're going to follow.  And you will ultimately have to decide how to translate your faith into action, into policy, at least for yourself, if not for others.  You need to decide, for example, whether it is a sin to be a homosexual or have an abortion or work on the sabbath or eat shellfish.

Faith is not a virtue.  It is an invitation to chaos.

For me, the arguments above are sufficient to at least cast reasonable doubt on Jesus's divinity.  But the clincher is what happens when you arrange the books of the New Testament in the order in which they were written.  The traditional ordering of the NT is not chronological.  Paul's writings are the earliest, and they were written 20-30 years after Jesus's death.  (Not a single word was written about Jesus while he was alive.)  Then comes the gospel of Mark, then Matthew, Luke and Acts, and finally the gospel of John.  (I'm going to set aside Revelation and the non-Pauline epistles here -- things are complicated enough already.)  I'm not going to get into the weeds of how scholars figured this out, but it's pretty obvious that Mark must have been written before Matthew and Luke because the latter contain passages copied from Mark, sometimes word-for-word.  But the historical order is not at all controversial.  Everyone agrees on this.

When you read the NT in chronological order, a very clear pattern emerges.  The earliest writings, Paul's, contain no mention at all of any details about Jesus's life.  There is nothing about Jesus being born in Bethlehem or living in Nazareth, no mention of Jesus performing miracles or even having a ministry.  In fact, Paul never once quotes anything Jesus said while he was alive!  Just about the only historical detail given by Paul is Jesus's trial before Pilate, and even that is in a book whose authorship is disputed and is probably a forgery.

It is not until Mark, written several decades after Jesus died, that you get the first narrative of Jesus's life, but even here many familiar details are missing.  There is no account of the nativity, no Bethlehem, no Annunciation.  The character of Jesus is very different from what we will see later in John.  Jesus is very human, full of existential angst and self-doubt.  He never claims to be God, and he is very clearly not the same as God the Father (14:36, 15:34).  Even his followers never say that he is God, only that he is the son of God.

In Matthew and Luke you get the first mention of Bethlehem and the first genealogies that purport to show that Jesus was descended from David.  This is significant because these are supposed to show the fulfillment of Old Testament prophecies.  But here again we have a problem trying to reconcile these claims with what is known about history.  Luke says that Joseph and Mary traveled from their home in Nazareth to Joseph's ancestral city of Bethlehem in order to be taxed, and he says that this happened "when Cyrenius was governor of Syria".  Cyrenius (or Quirinius in Latin) is well documented, and his census is a real historical event.

The problem is that Matthew says that Jesus was born "in the days of Herod the king".  And this is not just an offhand reference, Herod plays a significant role in the narrative.  Having heard of Jesus's birth and the prophecy that he would become king of the Jews, Herod orders the killing of all newborns, which forces Mary and Joseph to flee to Egypt in order to save the baby Jesus.

The problem is that Herod died in 4 BCE, a decade before the census of Quirinius in 6 CE.  It is simply not possible for both stories to be true.

My point here is not that there is a contradiction in the Gospels; Biblical contradictions are a dime a dozen and apologists have answers to all of them.  The point is that these stories appear late, almost 50 years after Jesus died.  Before that, there is no mention of any of these details in any Christian writings.

This trend of getting more and more embellishments to the story as time goes by continues in the last gospel to be written, the one traditionally attributed to John.  Here we have a Jesus who is radically different in character from what we find in the synoptics.  All of the self-doubt and existential angst is gone.  John's Jesus is self-assured and claims unambiguously to be God ("I and my Father are one.")  There is no mention of "take this cup away from me" or "not my will, but yours be done" or "My God, my God, why have you forsaken me?"  There is also a whole collection of new miracles which appear nowhere else, including the raising of Lazarus which, again, if that had actually happened you'd think someone would have taken note and written it down sooner (to say nothing of the fact that it makes Jesus's resurrection a lot less noteworthy).

The point is that when you put the New Testament in chronological order you can clearly see a myth developing right before your eyes.  Matthew and Luke put Jesus in Bethlehem not because they had any evidence that he was actually born there (because he almost certainly wasn't born there) but rather because they believed Jesus was the messiah and so he had to have been born there because (they believed) that's where the OT said the messiah would be born.  (There are other places in Matthew where he fills in details like this in order to fulfill what he thinks the OT prophesies but gets it wrong, sometimes to truly comical effect.)

Again, I have to stress that none of this is a slam-dunk.  Apologists have been aware of these problems quite literally for two thousand years and, as I've taken pains to point out, they have answers for everything.  Obviously I don't find their answers compelling; if I did I'd be a Christian.  But they do have them.

My claim is not that my arguments here are correct, only that they are defensible.  But that's enough to make my point, which is simply that I have not arrived at my conclusions capriciously.  I have reached them in good faith after some fairly diligent study and careful consideration of the counter-arguments.  I have not, as some Christians accuse atheists of, "rejected God because I want to sin" or some such nonsense.  I've simply looked at the evidence and the arguments and found them not compelling.  Far more likely, it seems to me, is that the Bible is (mostly) mythology.

Saturday, November 11, 2023

Why I Don't Believe in God

People occasionally ask me why I don't believe in God.  There are a lot of reasons, but I've never bothered to write them down before because most of my reasons are pretty basic and uninteresting: no evidence for God, lots of evidence against the Bible being divinely inspired, yada yada yada.  But there is one argument I've started to articulate lately that I've not seen come up very often, and which no one I've presented it to has been able to give an adequate response to.  (Well, no one has been able to give an adequate response to any of my reasons because if they could I would change my mind!  But this is an argument for which no one has been able to produce any response at all beyond something like, "Well, you can't possibly understand this unless you give yourself over to God."  As you will see, that is a big ask.)

The argument has to do with the story of the Exodus.  Everyone thinks they know this story, just as everyone thinks they know what the Ten Commandments are, but the movie got both wrong.   The popular conception goes something like this: Pharaoh enslaves the Israelites.  God, after mulling it over for countless generations, finally decides to intervene and recruits Moses to be His messenger to demand that Pharaoh "let my people go".  Pharaoh refuses, and so God lets loose a series of plagues on the people of Egypt, culminating in the Passover and the killing of the firstborn, which finally persuades a recalcitrant Pharaoh to accede to God's demand.

But that is not actually the way the story goes.  Pharaoh does not actually decide to refuse of his own free will.  Instead, God hardens Pharaoh's heart and forces him to refuse!  And it actually gets much, much worse than that, but just to make sure that there can be no doubt on this particular score, here is the most unambiguous verse:

Exo9:12 And the LORD hardened the heart of Pharaoh, and he hearkened not unto them; as the LORD had spoken unto Moses.

There are actually two things here that should make you very queasy.  The first, as I have already mentioned, is that it's not Pharaoh making the decision, it's God pulling Pharaoh's strings.  But the second thing is almost worse, which is that it seems as if this was not something that God decided to do in the moment, but actually part of a plan!  And indeed, it was part of a plan:

Exo4:21 And the LORD said unto Moses, When thou goest to return into Egypt, see that thou do all those wonders before Pharaoh, which I have put in thine hand: but I will harden his heart, that he shall not let the people go.  [Emphasis added]

And God reiterates this in chapter 7:

Exo7:3 And I will harden Pharaoh's heart, and multiply my signs and my wonders in the land of Egypt.  [Emphasis added]

In other words, God is going to force Pharaoh to refuse!  And why?  So that God will have an opportunity to show off how bad-ass He can be!

That would be bad enough if God just took it out on Pharaoh, but He doesn't.  All of the Egyptian people suffer despite the fact that most of them probably don't even have a clue what is going on, let alone a say in the decision-making.  Egypt is not a democracy.  The proceedings inside Pharaoh's palace are not being streamed live on CNN.  But the plagues come regardless.

And they culminate, of course, in the Killing of the Firstborn, which was also, it turns out, always part of God's Plan:

Exo4:22-23 And thou shalt say unto Pharaoh, Thus saith the LORD, Israel is my son, even my firstborn: And I say unto thee, Let my son go, that he may serve me: and if thou refuse to let him go, behold, I will slay thy son, even thy firstborn.

Of course, everyone focuses on the firstborn of Pharaoh, because it's a lot easier to justify the killing of an innocent child if that child happens to be the son of a hated ruler.  But what about all the others?

Exo11:5 And all the firstborn in the land of Egypt shall die, from the first born of Pharaoh that sitteth upon his throne, even unto the firstborn of the maidservant that is behind the mill; and all the firstborn of beasts.

I can't even begin to imagine the emotional pain that God inflicted on the mothers and fathers of Egypt that day, none of whom had any moral culpability in the enslavement of the Israelites.  Certainly the maidservant that was behind the mill didn't have a say in the matter, but she lost her child nonetheless.

(My sister died three years ago, and it nearly destroyed my mother.  And my sister wasn't even the firstborn.)

These are not the actions of a kind, loving God.  These are the actions of a barbarous psychopathic madman.  A core tenet of Christianity is supposed to be that killing innocents is not justifiable under any circumstances, and yet this is exactly what God did.  And He did it not in service of a higher goal, not to persuade Pharaoh to let the people go (because, as I noted earlier, even Pharaoh didn't actually have a choice) but just to give Himself an opportunity to show off.  It is hard for me to imagine a more evil act.  (And yet God actually manages to top Himself with eternal punishment for non-believers, but that's another story.)

This would be bad enough by itself, but then later, at God's command, the Israelites go on a genocidal spree through Canaan that makes the Killing of the Firstborn look humane by comparison:

Deu2:34 And we took all his cities at that time, and utterly destroyed the men, and the women, and the little ones, of every city, we left none to remain:

Deu3:6 And we utterly destroyed them, as we did unto Sihon king of Heshbon, utterly destroying the men, women, and children, of every city.

Deu20:16-17 But of the cities of these people, which the LORD thy God doth give thee for an inheritance, thou shalt save alive nothing that breatheth: But thou shalt utterly destroy them; namely, the Hittites, and the Amorites, the Canaanites, and the Perizzites, the Hivites, and the Jebusites; as the LORD thy God hath commanded thee:

Josh6:21 And they utterly destroyed all that was in the city, both man and woman, young and old, and ox, and sheep, and ass, with the edge of the sword.

And that's just a small sample.

Apologists will tell you that all this slaughter was justified because the Canaanites (and the Hittites and the Amorites and the Perizzites and the Hivites and the Jebusites) were utterly corrupt and evil and deserved to be destroyed down to the last man, woman, and child.  And what is the evidence that they were so irredeemably corrupt?  They were sacrificing their children to Molech.

Now, I will concede that sacrificing children to Molech is definitely not cool, but there are still two problems here.  First, God demanded a human sacrifice from Abraham, so it is far from clear that God considers human sacrifice to be an unalloyed evil.  At best one could come away with the impression that sacrificing children might be acceptable under some circumstances, like if God demands it (and fails to change His mind at the last minute).  But there is a second, more serious problem: even if we grant (and I am happy to concede this) that sacrificing children is always Really Really Bad, could God not have come up with any better solution to the problem than genocide?  Like, oh I don't know, talking to the Canaanites and telling them that what they are doing is not cool?  Because I'm pretty sure that the Canaanites were not sacrificing their children because they enjoyed it; they did it because they had a sincere belief that Molech was real and that sacrificing a few children was necessary in order to avoid an even more fearsome fate from befalling them.

And it must have been only a few children.  The Canaanites could not possibly have been sacrificing all of their children, or they would have gone extinct within one generation.  But God's answer to the problem of the Canaanites killing some of their children is to kill all of the children.  And their parents.  Some of whom were undoubtedly pregnant women.  Sorry, Christians, but you can't have it both ways.  Either killing the unborn is acceptable under some circumstances or it's not.

There are two arguments of last resort that I've had people muster against this.  The first is the potter's-clay response.  The idea is that if a potter makes a pot then he has the moral right to do anything he wants to that pot, including destroy it.  In this analogy, of course, God is the potter and we are the pots.  The problem with this argument is so obvious that it almost seems condescending to point it out: pots aren't sentient beings.  Humans are.  So even if we were created by God, that does not give Him the moral license to dispose of us however he sees fit.  I believe that sentience entitles one to certain inalienable rights, including the right not to be treated as someone else's property (cf. Lev25:45-46).

The second response is the one I mentioned at the outset: that I can't possibly hope to understand this until and unless I "give myself over to God" or "submit to God's will" or some such thing.  I honestly have no idea how I would do that, or even what those words could possibly mean.  But even if I did know, I would be very leery of acting on this advice.  If God exists, and if He really is as described in the Bible, then He is a monster.  He has no moral compass.  Some Christians will actually concede that I'm right about this: God doesn't have a moral compass, God is the moral compass.  OK, fine.  But of what use is a compass that points every which way depending on how the wind is blowing?  Sometimes killing is bad, sometimes it's good, and sometimes it is even obligatory.  How can you tell?  What use is a moral compass that doesn't point in one direction?

My moral compass tells me that I should treat all sentient creatures with some measure of respect and kindness.  That has served me pretty well so far, and so, for now, that's what I'm sticking with.

Tuesday, May 02, 2023

How to explain cardinals vs ordinals to a six-year-old

This discussion on Hacker News about whether infinity is odd or even got me thinking about the right way to teach kids about infinity, and the difference between cardinals and ordinals.  Here's what I came up with.

It is important to realize that numbers can stand for two different kinds of ideas.  Numbers can talk about "how many" but they can also talk about "what position".  For example, we can talk about how many apples are in a bag of apples, and this lets us compare two bags of apples to decide whether one bag has more apples than the other, or whether the two bags have the same number of apples.  We can also talk about what happens when we add an apple to a bag, or take away an apple from a bag.  And this lets us define what we mean by zero: it is the number of apples in a bag from which it is not possible to remove an apple.

Now consider two bags of apples.  How can we tell if the bags have the same number of apples?   The obvious way is to count them, but suppose we don't know how to count.  Is there another way?  Yes, there is.  (See if you can figure it out.)

The way to do it is to start taking apples out of the bags two at a time, one from each bag, and stop when one of the bags is empty.  If the other bag is also empty, the two bags had the same number of apples to begin with.  That is what it means to have "the same number": for every apple in one bag, there was a corresponding apple in the other bag.
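This matching procedure is easy to express in code.  Here's a minimal Python sketch (the function name and the list-of-apples representation are mine, invented for illustration):

```python
def same_count(bag_a, bag_b):
    """Decide whether two 'bags' (lists) hold the same number of apples,
    without ever counting either one: repeatedly remove one apple from
    each bag, and stop when at least one bag is empty.  The bags started
    out equal exactly when both are now empty."""
    a, b = list(bag_a), list(bag_b)   # work on copies
    while a and b:
        a.pop()
        b.pop()
    return not a and not b

print(same_count(["apple"] * 3, ["apple"] * 3))  # True
print(same_count(["apple"] * 3, ["apple"] * 5))  # False
```

Note that the function never computes a count; it only ever does pairwise removal, which is exactly the bijection idea.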

Now, what happens if we start adding apples to a bag and never stop?  It is tempting to say that we would eventually end up with a bag of infinity apples, but this is not true because if we never stop then we never end up with any particular number of apples.  We just have a bag that keeps getting fuller and fuller forever.  Is there a way to define infinity that doesn't require us to wait forever?

Yes, there is.  Remember, we have a way to tell if two bags of apples have the same number of apples (and we can do this without knowing how to count!)  So imagine if we took all possible bags of apples and grouped them together according to how many apples they had.  We take all of the one-apple bags and put them together (maybe we put them in a box instead of a bag) and all of the two-apple bags and put them together (in a second box) and so on and we do this for all possible bags of apples all at the same time.  The number of boxes we would end up with is (one kind of) infinity.

(Aside: it might seem like doing this for "all possible bags of apples at the same time" is cheating.  Why is that any better than talking about where the process of adding apples forever ends up?  It's because "forever" and "ending up" are contradictory.  Doing something to all possible bags at the same time might be physically impossible, but it is not logically contradictory.  The problem with trying to construct infinity by adding apples is that adding apples is inherently sequential.  We can't add the nth apple until after we have added the n-1'th apple.  By postulating "all possible bags of apples" we have taken the infinite bit and "parallelized" it so that the process of constructing the infinite set doesn't have an infinite chain of sequential dependencies, and so we can do it in a finite amount of "time".)

Now, instead of putting apples into bags, let's think instead about putting apples in a row.  This might seem at first like a distinction without a difference, but it's not.  When apples are in a bag, they are all jumbled together and you can't really tell one apple from another (assuming they are all the same kind of apple and the same size).  But if you put them in a row they now have an order associated with them.  So we can talk about the first apple, and the apple after the first apple (which we call the second apple) and the apple after that (third apple) and so on.

We can also go the other way and talk about the apple before (say) the third apple, which is the second apple, and the apple before the second apple, which is the first apple.  This is analogous to how we could talk about one apple more or one apple less.  But there is a huge difference between before and after versus more and less.  When we take apples out of bags, when we get to an empty bag, we have to stop.  There are no more apples to take away.  But with apples-in-rows, if we want the apple before the first apple we don't have to stop.  We can simply add an apple to that end of the row.

There is one little detail that we have to mention, and that is that to make this work we have to somehow mark the first apple so we don't lose track of it.  We could use a sharpie to write a big "1" on it, or use a granny smith as the first apple and make all the others be red-delicious or something like that.  But as long as we have a row with two ends, we can add apples to either end, and so we can go on before-and-aftering for as long as we like.  When we're adding-and-removing we are limited to removing only as many apples as we've added, after which we have to stop.

We have names for after-the-first apples: second, third, and so on.  Can we invent names for before-the-first apples?  Of course we can.  Unfortunately, the names that have been given to before-the-first apples break the pattern.  These should have been called before numbers, but in fact they are called negative numbers, or, less commonly, minus numbers.  This is really misleading because there is no such thing as negative-one apples, but there is such a thing as the-apple-that-is-two-before-the-first.  (Sometimes it seems that mathematicians conspire to make things as confusing as they possibly can in order to maintain their job security ;-)

Note that what is important here is not so much the actual physical arrangement of apples, but rather that apples-in-a-row have a natural ordering to them which apples-in-bag don't have.  That ordering allows us to assign numbers not just to the total quantity of apples, but to each individual apple to identify where it is in that ordering.  And that very naturally leads us to a whole different kind of number (negative numbers) when we start to think in terms of before-and-after rather than less-and-more.

Note also that we can have an infinite number of after-apples, and that does not stop us from adding before-apples to the row.  In other words, when numbers are taken to stand for the order of things rather than the quantity of things, we get entirely new kinds of numbers as a result, and (and this is the really important bit) we get those additional numbers despite the fact that we started out with an infinite number of numbers!  There are an infinite number of positive numbers, but then there are an infinite number of negative numbers on top of that!

Are there even more kinds of numbers?  Yes!  Imagine an infinite row of apples that goes on forever in both directions.  We can add a new apple to that row by calling it, "The apple after all the after-apples that have (regular) numbers on them."  That's a bit wordy so it's usually abbreviated ω, which is the lower-case Greek letter omega.  (Exercise: what would you call the apple-before-all-the-before-apples-that-have-regular-numbers-on-them?)  Then we can add the apple-after-the-ω'th apple (abbreviated ω+1), the apple after that (ω+2) and so on.  Eventually you get to ω+ω, written ω·2, then ω·2+1, ω·2+2... ω·3, ω·3+1 and so on in a mind-boggling sequence that eventually gets you to ε₀, then the Feferman–Schütte ordinal and the small and large Veblen ordinals, and far beyond all of those, ω₁, the first uncountable ordinal.

But that's probably enough for one lesson.  Tomorrow we'll go back to bags of apples and talk about diagonalization.

Sunday, April 23, 2023

All together now: the second amendment must be repealed

It has been two years since I first called for the repeal of the second amendment.  (Someone has to be the first.)  It seems like a complete no-brainer to me that we need to at least say the obvious truth that the second amendment is a relic of the past and has no place in a modern technological society, if for no other reason than to start moving the Overton window for future generations.  It has worked spectacularly well for abortion prohibition, so why would it not work equally well for guns?

At long last someone else has stepped up to the plate.  Kirk Swearingen over at Salon has published a piece aptly entitled "The Second Amendment is a ludicrous historical antique: Time for it to go."  So kudos to Kirk.

Unfortunately, despite the assertive title, he gets a little bit namby-pamby about it.

We're not supposed to even whisper such things because the NRA and right-wing extremists have sensible Americans — including many gun owners — so bullied and cowed that we feel we are only allowed to hope for sensible gun-safety legislation around the edges of their highly profitable assault on American lives.

 It's true.  But this has to end now.  The second amendment is quite literally an existential threat to American lives.  More Americans are killed by domestic firearms every month than died on 9/11.

Say it with me.  Say it loud.  Say it often.  Repeal the second amendment.  Repeal the second amendment.  Repeal the second amendment.  Repeal repeal repeal.  Repeat repeat repeat.  The lives of our children literally depend on it.

Friday, April 14, 2023

Bitcoin's value proposition: screwup postmortem

A Blogger user going by the handle Satoshi [1] pointed out that I made a major mistake in my analysis of rental attacks on Bitcoin.  The numbers I was using for the hash rate were off by six orders of magnitude.  But that turns out not to matter because, by sheer luck, I made a second mistake that almost exactly offset the effect of my first mistake.  I've since re-done the math, had it reviewed again by a community of bitcoin enthusiasts, and the upshot is that rental attacks are even less expensive than I originally concluded.

So how did I manage to do such a spectacular double-screwup?  Well, I got my hash rate numbers from a chart I found online.  It looked like this:

Notice that the scale on the left is labelled "TH/s".  But then also notice that the numbers all have an "M" after them.  I missed those M's.

Happily for me, my analysis also relied on a number that I got from a mining rig rental site that turned out to be wrong in much the same way, but that number appeared in the denominator of the math and so the two errors more or less cancelled each other out.

For the record, here is the corrected math.

The site I chose as a source for the performance numbers on current mining rigs was not picked for any particular reason (I have never mined bitcoin so I don't really know much about the state of the art), but it looked professional, so I assumed that its products are probably legit and competitive.  The key number from that site is that the base efficiency of their hardware is 30 J/TH.

The price of electricity ranges from about $0.10/kWh in China to about $0.18/kWh in the US.

The current difficulty is 48T.

The formula for converting difficulty to hashes/block is:

D * 2**256 / (0xffff * 2**208)

Setting D to 48T (i.e. 48×10^12) yields:

>>> D = 48e12
>>> D * 2**256 / (0xffff * 2**208)   # hashes per block: ~2.06e23
>>> _ / 600                          # hashes per second: ~3.44e20

i.e. a hash rate of roughly 340 million TH/s, which is as expected.

The energy cost of maintaining this hash rate is:

340 M-TH/s * 30 J/TH = 10,200 MJ/s ≈ 10 GW

Or, converting to dollars assuming low-cost electricity:

10 GW * $0.10/kWh = $1M/hr

The current block reward is 6.25 BTC/block with a market value of $180k/block ~= $1M/hr.

So this calculation passes basic sanity checks.

For completeness, let's calculate the capital costs.  Browsing mining hardware on Amazon yields a range of about $2k-$5k per 100 TH/s.  It is a little surprising to see such a big spread (2.5x) for what should be a commodity.  But be that as it may, the bottom line is that hardware acquisition costs are about $20-50/TH/s.  So 340M TH/s would cost 7-17 billion dollars.  This seems like a plausible number because the block reward generates 365*24*$1M/hour ≈ $9B/year, which yields a reasonable return on an O($10B) investment if you can get the operating costs low enough.
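The arithmetic above is simple enough to collect into a few lines of Python.  All the figures are the post's own estimates, not live data:

```python
# Back-of-the-envelope Bitcoin mining economics (figures from April 2023).
hash_rate_ths   = 340e6   # network hash rate, TH/s
efficiency_j_th = 30      # rig efficiency, joules per terahash

power_w = hash_rate_ths * efficiency_j_th           # watts
print(f"power: {power_w / 1e9:.1f} GW")             # ~10.2 GW

kwh_price = 0.10                                    # $/kWh, cheap electricity
cost_per_hour = (power_w / 1000) * kwh_price        # kW times $/kWh
print(f"energy cost: ${cost_per_hour / 1e6:.1f}M/hr")   # ~$1.0M/hr

# Capital cost at $20-50 per TH/s of hardware:
for dollars_per_ths in (20, 50):
    capex_billions = hash_rate_ths * dollars_per_ths / 1e9
    print(f"capex @ ${dollars_per_ths}/TH/s: ${capex_billions:.0f}B")
```

The output reproduces the post's sanity checks: roughly 10 GW of power, about $1M/hr in electricity, and $7-17B in hardware.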

But the crucial number is the $1M/hr current run/reward rate.  For a hashing rig owner, the capital expenditure is a sunk cost, and so if they can make more money by renting than it costs to run the rig and more than they can expect to make by mining themselves, the rational choice (in the sense of economic rational actor theory) is to rent.

Note that this number is an order of magnitude less than my initial calculation, making the attack all that much more feasible.  I suspect that this is due to the fact that one of the inputs to my original calculation was a questionable data point from MiningRigRentals, and that if you crunched the numbers on their rental rates they would turn out to be (once you got your units straight) 10x what rational choice theory says it should be.  In fact, their help page includes this disclaimer:

"Bitcoin mining is ... not profitable for everyone. Therefore we strongly encourage anyone interested in mining to do his/her own research and make the calculations before investing any money to the operation.  Here at MiningRigRentals most people are speculating on the price of their mined coins..."

It seems to me that there is something very hinky about all this.  If mining is not profitable, that means you can buy coins for less than it costs to mine them, so why not just do that if you want to speculate on future price?  And that applies not just to renting, but to regular mining as well.  The operating costs of mining appear to be just about break-even even with cheap electricity and ignoring capital costs.  So why would any rational actor choose to mine?  Mining is either immediately profitable or it is not.  If it is not, then a rational actor would either rent their hardware to a greater fool, or, if market rates didn't cover the operating costs, pull the plug and use the savings on their utility bill to buy coins instead.  Any long-term deviation from this equilibrium cannot be the result of rational actors, so either rental attacks are plausible, or bitcoin's long-term security depends on systemic deviation from selfish rationality.


[1] When I looked up Satoshi yesterday, their profile indicated that they had been on Blogger since 2012, but the profile had only four views.  (Today it is up to 23.)  That is an extraordinarily long run of stealth.  It is extremely unlikely, but not entirely implausible, that this person might actually be Satoshi Nakamoto.

Wednesday, April 12, 2023

A systematic critique of Bitcoin's value proposition

1. Introduction

This essay was originally entitled "Bitcoin's design contains the seeds of its own destruction".  The thesis was going to be that Bitcoin's security depends entirely on consuming vast quantities of energy, and so any value it might offer is outweighed by its inherent costs.  But when I did the math, that turned out not to be true.  Bitcoin does use a lot of energy, but not nearly as much as I initially thought.  Unfortunately, this is not necessarily good news.  Bitcoin's security is directly proportional to the cost of mining, so the less energy it uses, the less secure it is.  It turns out that there is a plausible attack against bitcoin that could be carried out for just a few million dollars, a sum which is easily within reach not just for state actors and corporations, but also for many high-net-worth individuals.

This essay is divided into four sections.  In the first I'm going to review what Bitcoin's value proposition was intended to be.  In the second, I review how bitcoin works.  If you are already familiar with Bitcoin you will find nothing new here.  In the third section I analyze its security model, specifically the cost of mounting a 51% attack on the assumption that hash power is available for rent and doesn't need to be purchased by the attacker.  In the fourth section I discuss the plausibility of carrying out such an attack in the real world, and various counter-arguments that have been presented to me in private discussions.  The bottom line is that when push comes to shove, bitcoin's security ultimately rests on the same foundation as fiat currencies: social cooperation.  The idea that Bitcoin is something fundamentally new, i.e. a currency whose integrity rests on mathematical algorithms and the laws of physics and economics, is thus called into question.

2. Bitcoin's ostensible value proposition

Bitcoin was the first so-called "cryptocurrency", a particular kind of digital currency that relies on cryptographic algorithms rather than a trusted third party to maintain its integrity.  The original Bitcoin paper by Satoshi Nakamoto (a pseudonym whose real identity remains a closely guarded secret) set forth the following rationale for its creation:

"Commerce on the Internet has come to rely almost exclusively on financial institutions serving as trusted third parties to process electronic payments. While the system works well enough for most transactions, it still suffers from the inherent weaknesses of the trust based model. Completely non-reversible transactions are not really possible, since financial institutions cannot avoid mediating disputes. The cost of mediation increases transaction costs, limiting the minimum practical transaction size and cutting off the possibility for small casual transactions, and there is a broader cost in the loss of ability to make non-reversible payments for non- reversible services. With the possibility of reversal, the need for trust spreads. Merchants must be wary of their customers, hassling them for more information than they would otherwise need. A certain percentage of fraud is accepted as unavoidable. These costs and payment uncertainties can be avoided in person by using physical currency, but no mechanism exists to make payments over a communications channel without a trusted party."

In other words, the usual methods of mediating electronic commerce using a trusted third party (TTP) are deficient because 1) transactions can be reversed, 2) the cost of the TTP is too high, 3) TTPs cannot eliminate fraud, and, as a result, 4) small transactions are not economical.

There is an additional feature of Bitcoin which is described in section 6 of Satoshi's paper.  That section is only three paragraphs long, but its importance vastly outstrips its length.  I quote it here in its entirety:

"By convention, the first transaction in a block is a special transaction that starts a new coin owned by the creator of the block. This adds an incentive for nodes to support the network, and provides a way to initially distribute coins into circulation, since there is no central authority to issue them. The steady addition of a constant of amount of new coins is analogous to gold miners expending resources to add gold to circulation. In our case, it is CPU time and electricity that is expended.

"The incentive can also be funded with transaction fees. If the output value of a transaction is less than its input value, the difference is a transaction fee that is added to the incentive value of the block containing the transaction. Once a predetermined number of coins have entered circulation, the incentive can transition entirely to transaction fees and be completely inflation free.

"The incentive may help encourage nodes to stay honest. If a greedy attacker is able to assemble more CPU power than all the honest nodes, he would have to choose between using it to defraud people by stealing back his payments, or using it to generate new coins. He ought to find it more profitable to play by the rules, such rules that favour him with more new coins than everyone else combined, than to undermine the system and the validity of his own wealth."

(Side note: the British spelling of "favour" might be a clue to Satoshi's identity :-)

So Bitcoin ostensibly offers the following value proposition: 1) a non-inflatable currency with 2) irreversible transactions, leading to 3) reduced fraud and 4) lower transaction costs (because you no longer need to pay a TTP) and, as a corollary 5) making practical small transactions which are too costly under the TTP model.

I believe that all of these claims can be called into question, but I'm going to save most of my critique for the end and focus first on Bitcoin's security, because that dominates all other considerations.  If Bitcoin is not secure, if it is vulnerable to an attack that undermines the integrity of the block chain, then nothing else matters.  Even if all of the other claims are true, they count for little if the whole system can be blown to smithereens at any time.

I'm going to start by briefly reviewing Bitcoin's security model for the benefit of my less-technical readers.  If you are already familiar with how Bitcoin works under the hood feel free to skip the following section.  You will find nothing new there.

3. The Security Model

Without a TTP, how do you ensure the integrity of the system?  Specifically, how do you guarantee that everyone agrees how many bitcoins each participant in the system owns, and how do you enforce the limit on creating new coins?

Bitcoin's answer to this consists of three main components: digital signatures, a block chain, and mining.

A digital signature is a little snippet of data that is associated with a document and another little snippet of data called a secret key.  Digital signatures have two key (no pun intended) properties: first, they are easy to generate, but only if you know the secret key, otherwise it is essentially impossible.  And second, it is easy for anyone to verify that a signature was in fact generated by someone who knows the secret key.  Furthermore (and this is the real magic) they can do this verification without knowing the secret key.  This technology dates back to the 1970s, though the particular version used by Bitcoin is more recent.
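The sign/verify asymmetry can be illustrated with textbook RSA and absurdly small numbers.  This is strictly a pedagogical toy of my own devising, not how Bitcoin does it (Bitcoin uses ECDSA over the secp256k1 curve), and it is wildly insecure at this key size:

```python
# Toy RSA signature scheme -- illustration only, completely insecure.
import hashlib

p, q = 61, 53                       # two small primes: the secret ingredients
n = p * q                           # public modulus (3233)
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # secret exponent: computable only if you know p and q

def sign(message: bytes) -> int:
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)        # easy only with the secret key d

def verify(message: bytes, signature: int) -> bool:
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest   # needs only the public (e, n)

sig = sign(b"Move 1 coin from wallet X to wallet Y")
print(verify(b"Move 1 coin from wallet X to wallet Y", sig))    # True
print(verify(b"Move 99 coins from wallet X to wallet Y", sig))  # a tampered message will (almost surely) fail
```

Anyone holding the public pair (e, n) can check the signature, but producing one requires d, which in turn requires knowing the secret factorization of n.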

Bitcoin transactions are authorized by digital signatures.  You can think of a secret key as corresponding to a checking account – in bitcoin-speak these are called "wallets".  A bitcoin transaction is a digital document that says "Move N coins from wallet X to wallet Y" and is signed using the secret key corresponding to wallet X.  The important upshot of this is that control of the coins in a wallet is determined entirely by knowing the secret key.  If someone steals the secret key, they can (and almost certainly will) steal the coins in that wallet.  Likewise, if a secret key is lost, any coins in the corresponding wallet are irretrievably lost.

Digital signatures by themselves are not enough to ensure the integrity of the system because nothing prevents someone from signing transactions on a wallet that total more money than it contains.  This is the so-called "double-spend" problem, though this is a bit of a misnomer.  A more accurate name would have been the "overdraft" problem, but double-spend is firmly established terminology, so I will use it here.

To prevent double-spending, bitcoin transactions are assembled into a ledger that sorts the transactions into a (partial) order.  This ledger is the so-called block-chain, and it is called that because transactions are first collected into batches called "blocks" and then the blocks are strung together in a chain.  If someone wants to verify that a transaction is valid, i.e. that the wallet that the transaction sources its funds from actually contains those funds, they can consult the block chain to see that wallet's current balance.  Again, there are cryptographic protocols in place to ensure that no one can meddle with the block chain once it is established.  Like digital signatures, this is not new.  The underlying technique is the hash tree, published by Ralph Merkle in 1979, which is why the structure used to organize the transactions within each block is called a Merkle tree.
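Here is a minimal sketch of the chaining idea in Python.  (It is a bare hash chain rather than a full Merkle tree, and the block format is invented for illustration, but the tamper-evidence property is the same: each block's hash covers the previous block's hash, so editing any block invalidates everything after it.)

```python
# A minimal hash chain: editing any block breaks every later link.
import hashlib

def block_hash(transactions: str, prev_hash: str) -> str:
    # Each block's hash covers its contents AND its predecessor's hash.
    return hashlib.sha256((prev_hash + transactions).encode()).hexdigest()

# Build a three-block chain from a made-up transaction log.
chain = []
prev = "0" * 64                  # genesis: no predecessor
for txs in ["A pays B 5", "B pays C 2", "C pays A 1"]:
    prev = block_hash(txs, prev)
    chain.append((txs, prev))

def chain_valid(chain) -> bool:
    prev = "0" * 64
    for txs, h in chain:
        if block_hash(txs, prev) != h:
            return False         # some block was meddled with
        prev = h
    return True

print(chain_valid(chain))                  # True
chain[0] = ("A pays B 500", chain[0][1])   # tamper with the first block
print(chain_valid(chain))                  # False
```

Once the head of the chain is widely published, no one can quietly rewrite history; the recorded hashes simply won't match.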

The main innovation in Bitcoin's design is mining, which was derived from an earlier scheme called HashCash.  The details don't matter much.  The name derives from the fact that it involves a particular kind of computation called "hashing", which allows you to construct computational problems that are very hard to solve, in fact, so hard that the most effective way of solving them is to simply try solutions more or less at random until you happen to stumble on one that works.  Once you have a solution in hand, it is easy for anyone to verify that it is in fact a solution.  The puzzles can be constructed in a way that is specific to a particular document, so if you have a solution to one of these puzzles for a document, it proves that you (or someone) spent a lot of computing power constructing it.

The original idea behind HashCash was to use it as an anti-spam measure: email senders would include a solution to a difficult-to-solve puzzle bound to the contents of the email they were sending as proof that they had expended a lot of computational effort to send that email, and so it was less likely to come from a spammer.  Bitcoin's innovation is to take this idea and turn it into a digital lottery: whoever is the first to solve one of these difficult puzzles wins the lottery, and gets to decide which block of transactions become the next official block in the block chain.  They also get to include a transaction that creates some bitcoins out of thin air and deposits them in a wallet of their choice (presumably one whose secret key they control).  Anyone can participate in this lottery.  The more computing power they throw at it the more likely they are to win.  Conversely, the more computing power everyone else throws at it, the less likely they are to win.  In this way, the decisions about which transactions to include are (one hopes) made by different entities at different times, and no one party ever has the power to pull shenanigans, at least not for very long.
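The puzzle itself is easy to sketch.  Here is a HashCash-style toy in Python, with the difficulty dialed way down so it solves in a fraction of a second (real Bitcoin mining demands vastly more leading zeros):

```python
# A HashCash-style proof-of-work puzzle: find a nonce such that
# SHA-256(block + nonce) starts with DIFFICULTY zero hex digits.
# Hard to solve (brute force), trivial to check (one hash).
import hashlib

DIFFICULTY = 4   # leading zero hex digits; ~16^4 = 65536 expected tries

def solve(block: str) -> int:
    nonce = 0
    while True:
        h = hashlib.sha256(f"{block}{nonce}".encode()).hexdigest()
        if h.startswith("0" * DIFFICULTY):
            return nonce
        nonce += 1

def check(block: str, nonce: int) -> bool:
    h = hashlib.sha256(f"{block}{nonce}".encode()).hexdigest()
    return h.startswith("0" * DIFFICULTY)

nonce = solve("block of transactions")
print(nonce, check("block of transactions", nonce))   # prints the winning nonce and True
```

The asymmetry is the whole point: `solve` is a blind search that consumes computing power in proportion to the difficulty, while `check` costs one hash, so anyone can instantly verify that the work was done.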

It should be noted that although distributing the block chain in this way is bitcoin's central innovation, most of bitcoin's claimed benefits accrue not because the block chain is distributed but rather because it is public.  A TTP could maintain a public block chain, and this would have almost all of the benefits of a distributed block chain.  Transactions would still be irreversible, the currency could be made non-inflatable, etc.  The only power that a TTP maintaining a public block chain would have is the ability to censor transactions, i.e. to refuse to record them.  But even this could be addressed by having a side-channel for publishing transactions which, if they lingered too long without being recorded, would damage the TTP's reputation.  Bitcoin actually has a similar feature built in called the "mempool", a collection of all transactions that have been submitted but not yet mined.

The only remaining problem with a TTP is how to compensate them for their services.  A TTP managing a block chain is necessarily a monopoly and deciding who gets to control that monopoly is a thorny political problem.  But censorship and compensation are the only two problems that mining actually solves.

4. Fifty-one-percent attacks

It is possible (though extremely unlikely) for two people to win the bitcoin lottery at more or less the same time.  In a situation like that the conflict is resolved in the next round of the lottery.  Every time you buy a bitcoin lottery ticket you have to decide ahead of time which of several possible competing blocks in the ledger you want to extend.  Conflicts are eventually resolved by a simple rule: among sets of competing blocks, the longest chain of blocks is the One True Chain.  So even if by chance two (or more) people should get winning tickets at more or less the same time, the odds of this happening again on the next round are very small, and the odds of it happening over an extended period of time by pure chance asymptotically approach zero.  Sooner or later, an unambiguous winner will emerge.  In actual practice, the system is designed to produce a winner (and hence a new block) about every ten minutes.  If there is any doubt about which of several competing chains is the One True Chain, that will almost certainly resolve itself within an hour or so.  This is the reason you will often see references to how many confirmations a bitcoin transaction has to have before it is considered valid.  The more confirmations, i.e. the deeper a transaction is in the chain, the more likely it is to be part of what ultimately turns out to be the One True Chain.

There is, however, a fly in the ointment.  Someone could attempt to intentionally disrupt the system by deploying enough computing power to extend an alternate chain.  This is called a "51% attack" because the attacker would have to control at least 51% of the computing power being devoted to buying bitcoin lottery tickets around the world, and this would be very expensive.  How expensive?  That turns out to be the crux of the matter.

The bitcoin algorithm is very cleverly designed to keep the cost of lottery tickets and the odds of winning very carefully balanced so that a winning ticket appears about every ten minutes, independent of how much computing power is being thrown at it around the world.  If winners appear much faster than the target rate of one every ten minutes, the difficulty is adjusted upward to make it harder to win.  Likewise, if winning tickets take much longer than ten minutes to appear, the difficulty is dialed back down to make it easier.
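The retargeting rule can be sketched in a few lines.  (This is a simplified model, not Bitcoin's actual code: in the real protocol the adjustment happens once every 2016 blocks, about every two weeks, and the correction factor is clamped to 4x in either direction.)

```python
# Simplified model of Bitcoin's difficulty retargeting: every 2016
# blocks, scale the difficulty by (target time / actual time), clamped
# to a factor of 4, so blocks keep averaging ~10 minutes.
TARGET_SECONDS = 2016 * 600   # two weeks of ten-minute blocks

def retarget(old_difficulty: float, actual_seconds: float) -> float:
    ratio = TARGET_SECONDS / actual_seconds
    ratio = max(0.25, min(4.0, ratio))   # clamp, as the protocol does
    return old_difficulty * ratio

# Blocks came in twice as fast as intended -> difficulty doubles.
print(retarget(48e12, TARGET_SECONDS / 2))
```

The feedback loop means that throwing more hash power at the network doesn't produce blocks faster for long; it just makes each lottery ticket worth less.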

Because bitcoins can be traded for actual goods and services, including traditional fiat currencies, buying bitcoin lottery tickets can be a profitable enterprise.  As of this writing (April 2023), one bitcoin is worth about $27,000 and a winning lottery ticket gives you 6.25 of them, or about $170,000.  That amount is awarded every ten minutes on average, so there is some pretty serious money at stake.  If someone can mount a 51% attack for less than $170,000/ten minutes or $1M/hour, it becomes a profitable enterprise.  That is not a huge sum by the standard of governments, large corporations, and many high-net-worth individuals.

However, to mount an attack you not only need to pay the on-going operating cost of the computing hardware (which is mostly the cost of electricity), but you need to *acquire* that hardware.  The capital expenditure of buying enough hardware to mount a 51% attack is around 25 billion USD at current rates, and that is an amount that cannot be casually spent even by affluent governments.  Still, it should give one a certain amount of pause, because if, say, the Chinese or US government decided to squash Bitcoin, they absolutely could.

(Aside: one of the main uses of bitcoin is to move large sums out of countries that restrict the outflow of capital, e.g. China.  So it is not at all out of the question that the Chinese government might some day decide to take some decisive action to stop this.  Indeed, China has already taken steps in this direction, though to date they have been mostly ineffective.)

5. Rental attacks

The cost of a 51% attack drops dramatically if you can rent the necessary hardware rather than buy it.  Bitcoin mining hardware is available for rent.  Would carrying out a 51% attack on rented hardware be possible?  Would it be practical?  A back-of-the-envelope calculation indicates that the answer to both of these questions is "yes", indeed, that it might be even worse than possible and practical, it might even be profitable.

I'm going to describe that calculation here in broad brushstrokes, but to make it more concise I'm going to revert to technical terminology and talk about "hashes" rather than lottery tickets.  The crucial number that determines the difficulty of a 51% attack is the "hash-rate", the number of lottery tickets being "purchased" by expending computational power.  The numbers are quite staggering by comparison to a normal lottery.  Over the past year the hash rate has ranged between roughly 200 and 350 TH/s (trillion hashes per second).  Multiply that by 600 seconds (ten minutes) and you get between 120 and 210 quadrillion hashes per block.  Let's just round that off and call it 10^14.

The market price of bitcoin over the past year has ranged from about 16 to 45 kUSD, but that is neither here nor there because you can rent bitcoin mining equipment and pay in bitcoin.  Picking a random data point from one rental marketplace, we can rent a rig with a claimed hash rate of 3.3 GH/s for 5.636683E-4 BTC/hr.  To get 100 TH/s, enough to mount a 51% attack when the hash rate is at the lower end of last year's range, would cost (10^14 / 3.3 x 10^9) * 5.636683E-4 BTC/hr = 17 BTC/hr, or about 2.8 BTC/block at six blocks per hour.  The block reward is currently 6.25 BTC/block, so this would not only be profitable, it would be wildly profitable.
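The arithmetic is simple enough to check in a few lines.  Here is a sketch using the figures quoted above; the rig's hash rate and hourly price are a single assumed data point, not live quotes:

```python
# Back-of-the-envelope rental-attack economics.  The rig specs and hourly
# rate are the single (assumed) data point quoted in the text.
ATTACK_HASH_RATE = 1e14        # hashes/sec, ~51% at the low end of the range
RIG_HASH_RATE = 3.3e9          # hashes/sec per rented rig
RIG_PRICE = 5.636683e-4        # BTC per rig-hour
BLOCKS_PER_HOUR = 6            # one block every ~10 minutes
BLOCK_REWARD = 6.25            # BTC, at the time of writing

rigs_needed = ATTACK_HASH_RATE / RIG_HASH_RATE
cost_per_hour = rigs_needed * RIG_PRICE            # BTC/hr
cost_per_block = cost_per_hour / BLOCKS_PER_HOUR   # BTC/block

print(f"{rigs_needed:,.0f} rigs, {cost_per_hour:.1f} BTC/hr, "
      f"{cost_per_block:.2f} BTC/block vs {BLOCK_REWARD} BTC reward")
```

The cost per block comes out around 2.8 BTC, well under the 6.25 BTC reward, which is the sense in which the attack would be "profitable" on paper.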

Of course, there are obviously some limiting factors we have not taken into account here because if arbitraging bitcoin were this easy someone would have done it already.  The main limiting factor is that to carry out the attack you need to rent 30,000 of the 3.3 GH/s units that we used as our data point, and that many units probably don't even exist, let alone sit idle waiting to be rented.  Nonetheless, this analysis does demonstrate a crucial point: the thing that protects bitcoin from attack is not fundamental economics, because if 51% of the bitcoin network were available for rent at current market rates then a rental attack would be profitable.

Of course, the supply of bitcoin mining hardware is far from perfectly elastic.  Even under idealized assumptions, if someone were to try to rent 30,000 mining rigs, the price would surely rise to meet the dramatically increased demand, and (again under idealized assumptions) it should rise enough to eliminate the profit margin.

However, the block reward is not the only possible way to monetize such an attack.  A successful 51% attack, indeed even a credible threat of such an attack succeeding, would almost certainly sow fear and uncertainty in a wide range of public markets.  An attacker could leverage this because they would have a certain amount of control over when news of the attack broke, so they could (for example) take a short position on a portfolio of financial stocks before launching the attack.  From start to finish, the attack itself would take only a few hours, so the exposure to upside risk would be minimal.  This strategy is not a slam-dunk, but it seems to me like a potentially attractive business proposition with no more than the usual risks and caveats.  Notably, there would be nothing illegal about it (AFAICT, IANAL).

In private discussions I have heard three counter-arguments, none of which I accept (if I did I wouldn't be writing this) but I'll list them here along with my responses just for completeness.

The first is that there is not enough rental capacity to mount a 51% attack, and never will be.  The person who raised this argument didn't provide any data to back it up, but for the sake of argument I will stipulate that the first part is probably true.  However, the fact that there isn't enough rental capacity today is no guarantee that there won't be enough tomorrow, especially if an attacker starts to buy up the existing capacity and drives the price up to the point where renting out hardware is more profitable than mining with it.

The second is that, even if this attack succeeds, the worst-case scenario is a chain split.  Bitcoin has had a chain split before (resulting in the creation of bitcoin cash) and survived, so it could survive another one.  The difference here is that the bitcoin-cash split was not caused by an attack, it was caused by a technical disagreement in the bitcoin community.  It was an amicable divorce executed under controlled circumstances.  A split resulting from an attack would have a very different dynamic and likely very different consequences.  In particular, if such an attack turned out to be profitable then that would provide a powerful incentive for it to be repeated.  Even if the first attack failed, someone might try it again using the lessons of the failed first attack to refine their strategy.  At best the result would be a great deal of uncertainty, which would likely result in reduced confidence, and confidence is ultimately the stock in trade of any currency.

The third response is that the mining community would band together to thwart such attacks if one were ever to be mounted.  I am happy to stipulate that this very well might happen, but it is important to note what this implies for bitcoin's security: it means that bitcoin is ultimately not, as is often claimed, protected by mathematics or physics or even economics, but rather by the social cohesion, cooperation, and (dare I say it?) trustworthiness of the mining community.  In other words, at root, bitcoin is not fundamentally different from a TTP, it's just that the TTP is a self-selected group rather than an elected or appointed one.  (And, it is worth noting, you can't just decide to become part of this group, you have to literally buy your way in.  Bitcoin's governance structure is, by design, a plutocracy.)

I want to stress that my argument does not depend on whether a rental attack would succeed.  It suffices that it might succeed.  The strategy I've sketched above is (I claim) prima facie plausible.  There might be something that would prevent someone from actually pulling it off, but it is not immediately evident what that thing would be.  Whatever it is, it is the thing that is currently defending bitcoin against this attack, which means that the thing defending bitcoin against this attack is not currently known.  And that should be deeply worrying to anyone taking a long position on bitcoin's future.

6. Discussion

As long as I'm tearing bitcoin apart I might as well go all the way and critique its other claimed benefits.  To review, those are:

  1. Non-inflatable
  2. Irreversible transactions
  3. Reduced fraud
  4. Lower transaction costs
  5. Practical small transactions

I'll address each of these in turn.

6.1 Inflation

Bitcoin can be inflated through chain splits and also by policy.  Neither is likely any time soon (notwithstanding that one chain split has already occurred) but both are possible.  There is a strong ideological predisposition against inflation among current bitcoin enthusiasts but it is not clear that this will hold forever.  In particular, as the block reward tends towards a smaller and smaller share of the total market cap, political pressure towards inflation could mount, just as it tends to do with fiat currencies.  Also, if bitcoin ever achieves the goal that some of its adherents aspire to of making it the world's reserve currency, then the outsized holdings of early adopters will become harder to justify and the political pressure towards inflation will increase.  Satoshi Nakamoto, for example, is believed to hold about 1.1 million bitcoins, or just over 5% of the maximum supply.  His keys have not been used in many years and are believed lost, but is any sane person really willing to bet the financial well-being of the planet on that?  Are future generations going to be willing to accept that decision made by their distant ancestors, or will they decide, as many before them, that a little inflation might actually be beneficial?

Bitcoin might be inflation-free at the moment, but only for the same reason that some fiat currencies are inflation-free: because the people who control them have decided as a matter of policy that inflation is undesirable.  The only thing that distinguishes bitcoin is that its policy-making is based on one-hash-one-vote.

6.2 Irreversible transactions

This is probably bitcoin's strongest claim.  Reversing a bitcoin transaction is in fact impossible as a practical matter, and will be under all reasonable future scenarios.

However, irreversibility is very much a double-edged sword.  People make mistakes, or lose their keys, or have them stolen.  Under those circumstances the ability to reverse a transaction can be very desirable.  Of course that does open the Pandora's box of having to adjudicate disputes, which bitcoin mostly eliminates -- by eliminating the possibility of correcting mistakes and restoring stolen coins to their rightful owners by force.  This is not the place to engage in that policy debate.  I think you can probably guess which side I come down on.  I'll just point out that irreversibility is no panacea.  If it were, it would be universally adopted as the de facto standard.  There is a reason that no other irreversible monetary system has ever been widely adopted.  It's not because they are hard to build.

6.3 Reduced fraud

By adopting digital signatures to authenticate transactions bitcoin does eliminate one currently common kind of fraud.  But digital signatures can be adopted to eliminate that fraud without adopting the rest of bitcoin.  Indeed, this has been done throughout most of the world now with the introduction of chip cards to replace magstripes.  (The chips contain secret keys and produce digital signatures using them.)  The only arena where digital signatures are not yet widespread is on-line purchases.  There is no technical impediment to adopting them there, it's just a matter of agreeing on a standard protocol.  (I attempted to do this about ten years ago and failed, but that's another story.)

However, there is a dark side here as well.  Bitcoin eliminates one kind of fraud but replaces it with others.  In particular, if you lose your keys, or entrust them to a third party who decides to defect, then you have no recourse.  Furthermore, the irreversibility of transactions makes coercion more lucrative, leading to the rise of ransomware.  In fact, it is arguable that the rise of bitcoin was the catalyst that birthed ransomware as a global industry.  A thief can now steal your money with impunity from the comfort of their own living room.  It is no wonder so many people are choosing to make a career out of this, especially ones who live in places with lax enforcement.

6.4 Lower transaction costs

This is a theoretical possibility as long as bitcoin's value in terms of its purchasing power continues to rise.  But as soon as this stops, the value of the block reward asymptotically approaches zero, and the only way to fund mining after that (assuming the inflation policy does not change) is fees.  How this will shake out in terms of actual costs is anyone's guess because we are very far from reaching steady-state on that, but there are two things inherent in bitcoin's design that will tend to drive fees up.  First, all that electricity that is used to keep the system secure has to be paid for somehow.  And second, the capacity of the network is limited by design.  It is technically possible to change this, but politically it is very difficult.  The last time someone tried the result was the bitcoin-cash chain split.

Even now, when the mempool of pending transactions is large, people sometimes have to pay quite exorbitant fees to get transactions mined in a timely manner (minutes instead of hours or days).  It is unrealistic to expect any commodity whose supply has a hard cap on it to be cheap.

6.5 Practical small transactions

This, I think, is Bitcoin's biggest broken promise, and again, it was foreseeable.  By design, bitcoin transactions take a long time to process, and the smaller the transaction, the less likely it is to be mined in a timely manner.  Furthermore, as noted above, the capacity of the system to process transactions has a hard limit on it which is woefully inadequate for handling the volume of small transactions that occur regularly throughout the world.  Using Bitcoin to buy a coffee at Starbucks was an intriguing novelty at one time, but it was never realistic for large numbers of non-technically-savvy people to use it for day-to-day retail transactions.

7. Conclusion

So does Bitcoin have any actual value?  I'm not sure.  It certainly is not suitable for its original stated purpose of replacing fiat currencies for day-to-day transactions.  This is evident in the fact that the value of Bitcoin is still measured in terms of how many US dollars it takes to buy one.

On the other hand, as I write this, that number stands at just about $30,000, which I find staggering.  Somewhere in my house I have an ancient laptop computer that has somewhere on its hard drive the keys to a wallet containing 0.05 bitcoins that someone gave me for free back in 2009 when bitcoin was first launched.  I noodled around with it for a while, and even tried mining for a few hours, but got tired of hearing the fan on my laptop screaming at me all the time.  So clearly I got something very badly wrong back then and it's entirely possible that I've got something very badly wrong now.  My track record of predicting the future is not great.

I think the main value of Bitcoin in the long run will be as a store of value, comparable to precious metals but easier to move around.  Allowing you to reliably store value without having to physically store and protect an artifact (other than a secret key) has real value, and that might well be enough to sustain bitcoin over the long run.  But if bitcoin offers anything of value as a medium of exchange, I don't see it.


Thanks to Ryan Orr, Joel Dietz, Nemo Semret, and Adam Wildavsky for interesting discussion and feedback on this article.

Saturday, March 04, 2023

Uncomputable things: Chaitin's constant, Busy Beavers, and Kolmogorov complexity

1. Introduction

The other day I was watching this Numberphile video about (among other things) uncomputable numbers when I came across this section around the 6:50 mark where Matt Parker talks about Chaitin's constant.  Strictly speaking, this is a whole family of constants, not a single number, but that doesn't really matter.  What matters is that Chaitin's constants are rare examples of numbers which can be unambiguously defined but which are provably uncomputable.  In the video Matt kind of throws up his hands when trying to explain Chaitin's constant(s) and why it/they are uncomputable, but it's really not that hard to understand.  Nonetheless, I haven't found a really accessible explanation anywhere so I thought I'd take a whack at it.

Chaitin's numbers (usually denoted omega or Ω) are named after Gregory Chaitin, who discovered/invented them in the 1970s (I have not been able to find the original publication, or even a reference to it).  In a nutshell, Ω is defined as the probability that a random computer program will eventually halt rather than run forever.  Obviously, the value of this number depends on what you mean by "a random computer program", and that in turn depends on the kind of computer you are writing the program for, which is the reason that this is a family of numbers, not just a single number.  But Chaitin's omegas are nonetheless invariably referred to in the singular because the basic idea behind all of them is the same, and the details don't really matter all that much.

2.  The halting problem

I'll start with a short review of the famous Halting Problem, because that is the foundation on which  Ω is built.  The Halting Problem is: given a program P and an input I, does P eventually halt when run with I as its input, or does it run forever?

It turns out that we can prove this problem cannot be solved.  But before we prove it, note that one might suspect it can't be solved even without a proof, because if we could solve the halting problem, we could leverage that result to answer any question about anything that we could render as a computer program.  Want to know whether the Goldbach conjecture is true?  Or the Riemann hypothesis?  Or even the infamous P=NP?  (If you want to get an idea of just how thorny a problem that is, read the introduction of this paper.)  All you would need to do is to write a computer program that systematically searches for counterexamples and then ask if that program halts or runs forever.  If it halts, then there exists a counterexample and the conjecture is false, otherwise it's true.  A solution to the halting problem would put mathematicians and physicists permanently out of business.
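To make that concrete, here is a sketch in Python of the Goldbach version: a program that halts if and only if the conjecture is false.  (The unbounded search obviously can't be run to completion; the bounded helper can be checked directly.)

```python
from itertools import count

def is_prime(n):
    # Trial division; plenty fast for illustration purposes.
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n**0.5) + 1))

def has_goldbach_pair(n):
    # True if the even number n is the sum of two primes.
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n - 1))

def goldbach_search():
    # Halts (returning a counterexample) iff the Goldbach conjecture is false.
    for n in count(4, 2):
        if not has_goldbach_pair(n):
            return n
```

Hand goldbach_search to a halting-problem solver and you have settled the conjecture without proving anything about primes.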

Fortunately for both, you can prove the halting problem can't be solved.  This was first done by Alan Turing in 1936. Turing's original paper is challenging to read because there were no computers back then, so he had to pretty much invent the entire notion of "computer program" in a world where programmable computers did not yet exist.  (Back then a "computer" was a human being who did calculations by hand.  The word referred to a profession, not a device.)  But nowadays computer programs are ubiquitous, and we are used to thinking about bits and bytes and whatnot, and that makes the proof a lot easier.

Here is an outline of what we are going to do.  First, we are going to assume that we can solve the halting problem, that is, we're going to assume that we can write a program, which we will call H, which takes as input another program P and returns TRUE if and only if P halts, otherwise it will return FALSE.  Second, we are going to make a second program, which we will call B, which, like H, is going to take a program as input.  But instead of returning TRUE or FALSE, it is going to call H as a subroutine to determine whether the program P that has been given to it halts or not, and then B is going to do the opposite of what H says P will do, i.e. if H says that P halts, then B is going to enter an infinite loop (and hence run forever).  Finally, we are going to run B with a copy of itself as its input, and show that this leads to a contradiction, and hence that our assumption that we could write H must be false.

There are really only two details we need to fill in to turn that outline into a fully fledged proof.  The first is that we need to explain what we mean to run a program with another program as its input.  Turing's original paper spent many pages on this, but today we can simply point to the familiar fact that programs are just strings of bits, and inputs to programs are just strings of bits, and so running a program with another program as its input is no different than running a program with any other kind of input, it just so happens that the string of bits we supply as input happens to correspond to a valid program, which we will just stipulate.  (Or, if you want to be a stickler about it, we can just stipulate that invalid programs halt by producing an error.)

The second detail is a bit thornier.  As we have described it, H is a program that takes one input, a program P, and likewise B is a program that takes one input, which is also a program.  But what about P?  How many inputs does it take?  We have played a little fast-and-loose with this.  Remember, our description of H was that it "takes as input another program P and returns TRUE if and only if P halts" but we said nothing about the input to P.  Does P even take an input?  If so, where does its input come from when H tries to figure out whether or not it is going to halt?

There are several different ways to address this, but the easiest is to change the definition of H so that it takes two inputs, a program P and an input I, and returns TRUE if and only if P halts when run on input I, and restrict ourselves to only giving H programs that take one input.

I am also going to introduce a bit of notation here: if P is a program and I is an input, then P(I) is the result of running program P on input I.  In the case where a program takes more than one input, like our redefined H, we separate them with commas, e.g. H(P, I).  So H(P, I) is TRUE if and only if P halts when run on input I, that is, if P(I) halts, otherwise it is FALSE.  (Exercise: what is H(H, H)?)

Now we can define our magic program B.  B is going to take one input, a program P, and it is going to call H as a subroutine with P as both the program to be analyzed and the input to that program.  In other words, B(P) is going to start by computing H(P, P).  If the result is TRUE (i.e. if P halts when run on a copy of itself as input) then B is going to enter an infinite loop, otherwise it will halt.

In other words, we will build B so that B(P) will halt if and only if H(P, P) is false, that is, if P(P) runs forever.  Otherwise, if H(P, P) is true, i.e. P(P) halts, B will run forever.

Now, what happens if we run B with a copy of itself as input, i.e. what happens if we try to compute B(B)?  Well, B(B) is going to start by computing H(B, B), which is a special case of H(P, I).  Recall that H(P, I) is true if and only if P(I) halts.  So H(B, B) is true if and only if B(B) halts.  But B(B) halts if and only if H(B, B) is false.  This is a contradiction, so our assumption that it was possible to write H must be false.  QED.
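The whole construction fits in a few lines of Python.  No actual H exists, of course, but given any *candidate* for H we can build B mechanically and watch B refute the candidate.  Here is a (necessarily wrong, like every candidate) decider that always answers "runs forever":

```python
def make_B(H):
    # Build B from a candidate halting-decider H, exactly as in the proof.
    def B(P):
        if H(P, P):          # H claims P(P) halts...
            while True:      # ...so B runs forever
                pass
        return "halted"      # ...otherwise B halts
    return B

def H_candidate(P, I):
    # A candidate halting decider: always answers "runs forever" (False).
    return False

B = make_B(H_candidate)
# H_candidate predicts that B(B) runs forever, yet B(B) halts immediately:
# the candidate stands refuted.  A candidate answering True would instead
# send B(B) into an infinite loop, refuting it the other way.
print(B(B))
```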

3.  Busy Beavers

So now that we know that the halting problem cannot be solved, we can also know that any information that would allow us to solve the halting problem must be impossible to obtain.  As an example of the kind of information that might allow this, consider the so-called busy-beaver numbers, denoted BB(n), which are defined as the largest number of steps that a computer program of length n could possibly run before halting.  For any n, BB(n) must exist, and it must be a finite integer.  Why?  Because there are only a finite number of programs of length n (in fact, at most 2^n of them), and so there are only a finite number of programs of length n that halt, and so one of them must be the one that runs the longest before halting.

And yet, if we knew the value of BB(n) then we could solve the halting problem for programs of length n.  How?  Simply by running all of the programs of length n in parallel for BB(n) steps!  Any program that has not halted by then must run forever.
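The decision procedure is easy to express in code if we pretend we have BB.  Since we can't, here is the same logic in a made-up toy model where BB *is* trivially computable: a "program" of length n is a bit-string encoding a number m, and it halts after m steps if m < 2^(n-1), otherwise it loops.  All the names here are illustrative:

```python
def toy_runs_within(bits, k):
    # Toy semantics: the program halts (after m steps) iff m < 2**(n-1).
    # Returns True iff it halts within k steps.
    m = int("".join(map(str, bits)), 2)
    return m < 2 ** (len(bits) - 1) and m <= k

def toy_BB(n):
    # Longest finite run among length-n toy programs: 2**(n-1) - 1 steps.
    return 2 ** (n - 1) - 1

def halts_given_BB(bits, runs_within, BB):
    # The key step: a length-n program that hasn't halted after BB(n)
    # steps never will, so a bounded run decides the halting problem.
    return runs_within(bits, BB(len(bits)))

print(halts_given_BB((0, 1, 1), toy_runs_within, toy_BB))  # m=3: halts
print(halts_given_BB((1, 0, 0), toy_runs_within, toy_BB))  # m=4: loops forever
```

The uncomputability of the real BB is exactly what stops this from working on real programs.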

So the BB function must be uncomputable.

4.  Chaitin's constant(s)

Another example of information that would allow us to solve the halting problem is the number of programs of length n that halt.  This number doesn't have a common name so I'm going to call these the C numbers, i.e. C(n) is the number of programs of length n that halt.  Again, C(n) is a perfectly well-defined number.  Indeed, C(n) must be an integer between 0 and 2^n, so these are not even mind-bogglingly big numbers like the BB numbers are.  And yet, if we could compute C(n) we could solve the halting problem, and so C(n) must not be computable.

Note that C is not a single number; it's an (uncomputable) function, just like BB.  Chaitin's constant is a number that is constructed (more or less) by taking all of the C(n)'s, concatenating them together, and interpreting the result as the binary expansion of a real number.  (And both depend on the details of the computing model, so really they are a family of functions or a family of numbers, but that is not what matters.)

If you look up Chaitin's constant you will find it is defined in terms of probabilities, specifically, the probability that something called a "prefix-free universal Turing machine" will halt on a random program, but all of this is just pedantry.  A "prefix-free Turing machine" is just a way of defining a computational model that allows you to formalize the notion of a "random program of length n", and the probability that such a program will halt is just C(n)/2^n.  Then there's some additional fancy-looking math to pack all of these rational numbers into a single real number in such a way that you can extract the C(n)'s with some more fancy math.

But all of the fancy math obscures the fact that at the end of the day, Chaitin's constant is just a numerical representation of the sequence of C(n)'s concatenated together.  In fact, if you think in binary, it is literally this.  In binary, dividing a number by 2^n just shifts it to the right by n digits.  Because each C(n) is at most 2^n, it fits in n+1 binary digits, so if you shift each one far enough to the right to give it its own slot, they all line up without overlapping.  Then you can just cram them all together, put a (binary) point on the left, and bam, you have a number from whose value you could reconstruct the sequence of C(n)'s and hence solve the halting problem.
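Here is the bit-packing trick made concrete, with Python's exact rationals standing in for "a real number".  The C(n) values below are made up (the real ones are, of course, uncomputable); each stand-in satisfies 0 <= C(n) <= 2^n and so fits in its own (n+1)-bit slot:

```python
from fractions import Fraction

def pack(cs):
    # cs[0], cs[1], ... stand in for C(1), C(2), ...  Each C(n) <= 2**n
    # fits in an (n+1)-bit slot; the packed result plays the role of Omega.
    omega, shift = Fraction(0), 0
    for n, c in enumerate(cs, start=1):
        shift += n + 1
        omega += Fraction(c, 2 ** shift)
    return omega

def unpack(omega, count):
    # Recover each C(n): shift its slot into the integer part, then mask
    # off everything to its left.
    cs, shift = [], 0
    for n in range(1, count + 1):
        shift += n + 1
        cs.append(int(omega * 2 ** shift) % 2 ** (n + 1))
    return cs

cs = [1, 3, 7]   # made-up values, NOT real halting counts
assert unpack(pack(cs), 3) == cs
```

Knowing the packed number to enough binary places is exactly the same as knowing the C(n)'s themselves.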

So all of these uncomputable things -- the busy beaver numbers, the C(n) sequence, and Chaitin's constant, are all just ways of "repackaging" the uncomputability of the halting problem.

5.  Kolmogorov complexity

Are there uncomputable numbers that can be defined without reference to the halting problem?  Yes.   Consider a computer program that produces some output, and ask: what is the length of the shortest program that produces the same output?  This question was first asked by Andrey Kolmogorov, and so the length of the shortest program that produces a given output is called the "Kolmogorov complexity" of that output, which I will abbreviate KC.  So KC(n) is a function whose value is the length of the shortest computer program that will produce n as its output.

The proof that KC is uncomputable is also due to Gregory Chaitin, and it looks a lot like the proof that the halting problem is uncomputable.  We're going to assume that KC is computable and show how this leads to a contradiction.

So let us choose a program P that produces an output n, and assume that we can compute KC(n).  Obviously we know how long P is, and so we can tell whether or not its length is equal to KC(n).  If it is, i.e. if P is (one of) the shortest program(s) whose output is n, we will call P an elegant program.  (This is Chaitin's term.  A more neutral term would be "minimal" but I'm going to defer to the master.)

So if we can compute KC, then we can write an elegance tester, i.e. a program E which takes as input a program P and returns TRUE if that program is elegant, i.e. if its length is the same as the KC of its output.  It turns out that E is impossible in the same way that H turns out to be impossible.  To see this, we construct a new program B which works as follows: B is going to take as input some integer I, and start enumerating all programs longer than I and passing those programs to E to see if any of them are elegant.  When it finds an elegant program, it is going to run that program.

Note that B has to produce some output.  Why?  Because there are an infinite number of elegant programs, at least one for each possible output n.  And so sooner or later, B has to find one and produce the same output that it does.

Now let's run B with I set to the length of B plus one.  (Strictly speaking we have to set it a little longer than that, to the sum of the length of B plus the log base 2 of I, but that's a detail you can safely ignore.)  This means that sooner or later, B will find a program P that E says is elegant, and it will run P, and hence produce the same output n as P.  But because B only tests programs longer than B, P must be longer than B, and so P cannot be elegant because B, which is necessarily shorter, produced the same output.  (Again, strictly speaking, it's the length of B plus the length of I that matters, but again, this is a detail.)

So again we have a contradiction, and so KC cannot be computable.

Note that this uncomputability stems from a fundamentally different source than Chaitin's constant, which is really just a corollary to the halting problem.  KC has to do with optimization rather than halting, and it has some really profound practical implications.  For example, you can think of mathematical models of physics as computer programs.  The uncomputability of KC implies that we can never know if we have the optimal theory of physics.  Even if some day we manage to unify general relativity and quantum mechanics and create a theory that accounts for all observations, we cannot possibly know if we have found the simplest such theory.  That is fundamentally unknowable.