Saturday, January 28, 2023

Lisping at JPL Revisited

It has been over 20 years since I wrote "Lisping at JPL", half memoir and half geeky screed (and all of it half-baked) about how an obscure programming language came to define my career.  It was a borderline throwaway piece written as much to vent my spleen as to inform.  In the intervening two decades it has gotten a lot more attention than I ever expected, becoming a perennial favorite on Hacker News, launching my (very short) career as a podcast guest, and even inspiring some people to plagiarize it, which makes me feel simultaneously flattered and annoyed.

User marai2 made this suggestion the last time L@J was posted on HN:

Since this essay cycles here on HN perennially and inevitably people are curious about how your views about lisp, or other languages have changed, you might want to have a follow up page that you forever keep on updating as your views change?

So, by request...

My views on Lisp have not changed: Common Lisp is still far and away my favorite programming language, but that is mainly because I don't really program in Common Lisp any more. I program in a little custom language that I have incrementally built on top of Common Lisp over the years. This language consists mainly of a single macro called BINDING-BLOCK which is a stand-alone replacement for almost all of the other binding constructs in Common Lisp and makes my code look very different from most CL code. In particular, my code has a lot fewer parentheses, and it doesn't run off the right side of the screen as much.  (Here is an example.)  A CL programmer looking at my code would probably go, "WAT?" And when I look at regular CL code nowadays I go "blech!" Especially if that code uses LOOP. Then I go double-blech, despite the ironic fact that LOOP and BINDING-BLOCK have a lot in common.
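
To give a flavor of the idea, here is a toy sketch in the same spirit (emphatically not the real BINDING-BLOCK, just a minimal illustration of how a flat macro can replace nested LETs; the real thing subsumes many more binding constructs):

    ;; Toy sketch only: alternating VAR VALUE pairs expand into nested
    ;; LETs, so bindings are sequential but the code stays flat instead
    ;; of creeping off the right side of the screen.
    (defmacro bb (&body body)
      (cond ((null (cdr body)) (car body))     ; last form is the result
            ((symbolp (car body))              ; VAR VALUE pair
             `(let ((,(car body) ,(cadr body)))
                (bb ,@(cddr body))))
            (t `(progn ,(car body)             ; bare form: run for effect
                       (bb ,@(cdr body))))))

    (bb x 1
        y (+ x 2)
        (list x y))   ; => (1 3)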

All this is a reflection of the so-called Lisp curse, the fundamental problem with Lisp -- its great strength is simultaneously its great weakness.  It is super-simple to customize Lisp to suit your personal tastes, and so everyone does, and so you end up with a fragmented ecosystem of little sub-languages, not all of which (to put it mildly) are particularly well designed.

But *I* am 10x more productive in CL than in any other language. And not just CL, but one particular CL environment, Clozure Common Lisp, which I have been using since it was Coral Common Lisp running on a Macintosh Plus with one 800k floppy disk and one megabyte of RAM. CCL is probably the most under-appreciated engineering marvel ever produced by any species other than termites. I have been using it for 38 years. (Unfortunately, that era is likely drawing to a close.  Clozure Associates is no longer a going concern, and the volunteer effort to port CCL to the M1 appears to have stalled. This makes me very sad. It is truly the end of an era, and a very long one by technological standards.)

People often ask me what I think about other languages, and Clojure in particular. My answer is that I have never used Clojure, but from what I have read it looks pretty neat. But I think no other language can ever possibly be better than Common Lisp for me because I have customized the hell out of my environment until it is exactly the way I want it. It is quite literally the perfect language for me because I have made it that way -- because I was able to make it that way. Any time I find something about it I don't like, I can change it quickly and easily. Nothing that someone else designs can ever compete with that. Even I can't compete with that. Every now and then I toy with the idea of building my own Lisp from scratch, and I once took a semi-serious whack at it as an exercise to try to learn C++, but I gave up when it became clear that the effort required to make it competitive with CCL plus 38 years of incremental tweaking was vastly more than I was willing to invest.

There is one feature of Common Lisp that I absolutely love which no other language offers AFAIK, and that is a generic-function model of objects.  IMHO, the goal of a programming language ought to be to make writing code as easy and free of cognitive load as possible.  If I want to do an operation, I want to be able to say (op arg1 arg2 ... argn) without having to think about whether OP is a plain function or a method or an operator.  In every other language I have to think about whether to write op(arg1, arg2 ... argn) or arg1.op(arg2, ... argn) or "arg1 op arg2" or, in Objective C, [arg1 op:arg2 someRandomThing:arg3 ... someOtherRandomThing:argn].  In Lisp I never have to think about that.  I just write (op arg1 ... argn) and it does the right thing.
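
For concreteness, here is a minimal sketch of what that uniformity looks like (a generic illustration, not code from any of my projects):

    ;; AREA is called exactly like a plain function; CLOS dispatches
    ;; on the class of the argument.
    (defgeneric area (shape))

    (defclass circle () ((radius :initarg :radius :reader radius)))
    (defclass rect   () ((w :initarg :w :reader w)
                         (h :initarg :h :reader h)))

    (defmethod area ((s circle)) (* pi (radius s) (radius s)))
    (defmethod area ((s rect))   (* (w s) (h s)))

    ;; Same call shape in both cases -- no area(c) vs. c.area() decision:
    (area (make-instance 'circle :radius 2))   ; => 12.566...
    (area (make-instance 'rect :w 3 :h 4))     ; => 12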

So the bottom line is: I still love Common Lisp.  YMMV.

My second favorite language at this point is Python.  It has a lot of nifty features, and a clean design that is mostly free of gotchas.  Ironically, I don't like Python's most iconic feature, the syntactically significant white space.  Python claims not to use braces, but this is half a lie.  It does have an open brace; it just uses the colon character to serve that purpose.  What it lacks is a close brace, so in my code I use the "pass" statement as a de-facto close-brace.  That way, when I use emacs python-mode, my code always auto-indents correctly.
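
Here is a minimal illustration of that convention (a made-up example, not code from my own projects):

    # The "pass" statement serves as an explicit close-brace: the end
    # of every indented suite is marked, so an editor can re-indent
    # the code unambiguously.
    def clamp(x, lo, hi):
        if x < lo:
            x = lo
            pass
        elif x > hi:
            x = hi
            pass
        return x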

But the best programming language is like the best wine.  Different people like different things, and that's OK.  Drink whatever is the best fit for your palate, and code in whatever is the best fit for your brain.

Wednesday, January 25, 2023

An intuitive counterexample to the Axiom of Choice: followup

What was intended to be a short throwaway blog post about someone else's nifty idea regarding the Axiom of Choice (AoC) ended up getting a lot more attention and being much more widely misunderstood than I ever expected, so I decided to follow up with a more detailed explanation.

Note that I deliberately chose the word "detailed" and not "rigorous".  Although I am going to try to be a little more rigorous as well, this is emphatically not a "proof" of anything, and particularly not a proof that the AoC is "wrong".  Such a proof is not possible.  The AoC is, as its name implies, an axiom.  It is a free choice to do mathematics with it or without it, and either choice leads to some weird results.  This is simply an informal argument that the AoC is not self-evidently true.  It is analogous to someone before Gauss saying about Euclid's fifth postulate, "Suppose we tried to do geometry on the surface of a sphere..."

The AoC sounds innocuous enough when you first hear it: given a collection of non-empty sets, it is possible to construct a new set whose members consist of one member chosen from each of the given sets.  How could this not be true?  Note that this question is analogous to: Euclid's fifth postulate says that given a line and a point not on that line, you can construct a unique line parallel to the given line.  How could this not be true?  And the answer is: it can be not-true if you are doing geometry on a curved surface.

The answer in the case of the AoC is something like: it can be not-true if the sets you are dealing with are really weird.  My post was an attempt to give a specific example of that kind of weirdness.  I ended up tripping myself up by playing too fast and loose with the notion of "describable".  So here I am going to take another whack at it, trying to be a little less sloppy.  Note that less-sloppy does not mean rigorous.  Like I said, this is not an attempt to prove anything, notwithstanding that parts of what I am about to say might sound a bit like a proof.

So with that in mind, let us start with a few elementary observations.

1.  There are at most a countably infinite number of finite-length unicode strings.

2.  Some unicode strings are unambiguous descriptions of numbers under certain uncontroversial background assumptions.  Examples include "123", "pi", "e", "the successor of zero", and "the golden ratio".  Other unicode strings are unambiguously not unambiguous descriptions of numbers.  Examples include "hamburger", "irvnwoihdphweg" and "😱".  And still other unicode strings may or may not be unambiguous descriptions of numbers.  An example of this is "the smallest counter-example to the Goldbach conjecture."  That might be an unambiguous description of a number, or it might not, depending on whether the Goldbach conjecture is true.

A lot of the discussion generated by my original post turned on this last point, that it is very hard, arguably impossible, to pin down what an "unambiguous description of a number" actually means.  The meaning of "unambiguous description of a number" is ambiguous.  But this will turn out not to matter.

Here are a few more observations that will turn out not to matter.  I include them just to short-circuit some of the discussion.

3.  All computable numbers have unambiguous descriptions, to wit, the algorithms that compute them.

Note that unambiguous descriptions need not be unique.  "One", "1" and "the successor of zero" are all unambiguous descriptions of the same number.  Two different algorithms that compute the same (computable) number are each an unambiguous description of that number.
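
For concreteness, here is one such description (my illustration): a little algorithm that is itself a finite, unambiguous description of the computable irrational sqrt(2):

    ;; This function is a finite unambiguous description of sqrt(2):
    ;; it emits as many decimal digits of it as you care to ask for,
    ;; using the integer square root of 2 scaled by an even power of 10.
    (defun sqrt2-digits (n)
      (format nil "~d" (isqrt (* 2 (expt 10 (* 2 (1- n)))))))

    ;; (sqrt2-digits 6) => "141421"  (the digits, sans decimal point)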

It is tempting to conclude that the number of unambiguously describable numbers must be strictly less than the number of computable numbers, except that...

4.  Some uncomputable numbers have unambiguous descriptions.  Chaitin's constant is the canonical example.  It is not known (and probably unknowable) how many uncomputable numbers have unambiguous descriptions.

So now we get to the thorniest bit.  Some commenters claimed that:

5.  It is not possible to give an unambiguous description of what an "unambiguous description of a number" means, because any such putative description is subject to Cantor's famous diagonalization argument.  In this case the argument goes like this: suppose we had an unambiguous description of what an unambiguous description means.  Then we could make an (infinite) list of all the numbers with unambiguous descriptions, and then construct a new number with an unambiguous description that is not on that list.
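
For concreteness, here is the construction they have in mind, sketched in code (my rendering of the standard argument, with a hypothetical ENUMERATION function standing in for the list):

    ;; ENUMERATION is assumed to map (row N, column K) to the Kth
    ;; decimal digit of the Nth listed number.  The diagonal number's
    ;; Nth digit differs from the Nth number's Nth digit, so it is not
    ;; on the list.  (Using 4 and 5 sidesteps the 0.4999... = 0.5000...
    ;; dual-representation wrinkle.)
    (defun diagonal-digit (enumeration n)
      (if (= (funcall enumeration n n) 5) 4 5))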

This is true, but I claim it is irrelevant for two reasons.  First and foremost, I don't need an unambiguous description of unambiguous descriptions for my argument to hold.  All I need is to show that there exist numbers that do not have any unambiguous description, and for that all I need to show is that the number of numbers with unambiguous descriptions is (at most) countable, and all I need for that is to stipulate that an unambiguous description must be renderable as a finite-length unicode string.  The number of finite-length unicode strings is countable, and so the number of unambiguous descriptions must be countable, and so the number of numbers with unambiguous descriptions must be countable.  But since the reals are uncountable, there must remain an uncountable number of reals with no unambiguous description, at least not one that is renderable as a finite-length unicode string.
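
To see why that countability claim holds, here is a sketch (mine, with a two-letter alphabet standing in for unicode): enumerate strings shortest-first, and lexicographically within each length, so every finite string lands at some finite index:

    ;; N is read as a "bijective base-K" numeral whose digits are
    ;; letters (N = 0 is the empty string), which yields the
    ;; length-then-lexicographic enumeration.  Every finite string
    ;; appears at exactly one finite index, hence countability.
    (defun nth-string (n &optional (alphabet "ab"))
      (let ((k (length alphabet)))
        (loop with s = '()
              with m = n
              while (> m 0)
              do (multiple-value-bind (q r) (floor (1- m) k)
                   (push (char alphabet r) s)
                   (setf m q))
              finally (return (coerce s 'string)))))

    ;; (nth-string 0) => "", (nth-string 1) => "a", (nth-string 2) => "b",
    ;; (nth-string 3) => "aa", (nth-string 4) => "ab", ...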

So I don't need an unambiguous description of unambiguous descriptions.  All I need is an oracle for it, and any oracle will do.

That is sufficient reason to conclude that diagonalization doesn't matter, but since this is an informal argument I'm going to point out something else: Cantor originally advanced diagonalization to prove that the reals are uncountable.  Far from invalidating my argument, I am actually relying on that result!  The whole point is that it doesn't matter what an unambiguous description actually is, or that we can't necessarily tell whether a given unicode string is one or not (cf. "the smallest counter-example to Goldbach's conjecture").  We have some strings that are (by social convention) unambiguously unambiguous descriptions (which I am now going to start abbreviating UDs), and others that are unambiguously not UDs, and still others that might or might not be.  None of this matters.  Take any description of what it means to be a UD and I will happily stipulate that the Nth diagonalization of the list of numbers generated by that specification is also a number with a UD, and so the original specification must have been incomplete or even inconsistent.  You can only diagonalize a countable number of times, which is the whole point of the diagonalization argument.  If you think that diagonalization invalidates my position, then you either don't understand diagonalization, or you don't understand the argument I am advancing.

None of the above matters, because the conclusion I am trying to defend has nothing to do with the details of what it means to be a UD.  My claim, which I would have thought would be obvious and wholly uncontroversial, is that there must exist an uncountable number of numbers that unambiguously do NOT have unambiguous descriptions under ANY reasonable definition of an unambiguous description.  And the argument for that is simply that the reals are uncountable, but any reasonable definition of unambiguous description will yield only a countable number of descriptions, and hence only a countable number of numbers they can possibly describe.

I could even go meta and point out that a reasonable condition on reasonable definitions of unambiguous descriptions (or anything else for that matter) is that they be renderable as a finite string of unicode characters, and so there can be at most a countable number of "reasonable" definitions of UD.  No matter how you slice it, you are still left with an uncountable number of numbers that unambiguously lack any unambiguous description.

Henceforth I will adopt the abbreviation NULUD for Number Unambiguously Lacking an Unambiguous Description.  So there are an uncountable number of NULUDs.  The NULUDs are a proper subset of the set of uncomputable numbers.

OK, but so what?  So there are an uncountable number of NULUDs.  How does that cast doubt on the validity of the axiom of choice?

Well, imagine that I gave you two NULUDs and asked you if they were equal.  Think about that for a moment.  For starters, what would it even mean for me to "give" you two such numbers?  I can never actually exhibit a NULUD, or even describe one.  I can't write one down.  I can't give you an algorithm to compute one.  There is no way for me to communicate any NULUD to you by finite means.  For me to even contemplate "giving" you two NULUDs I have to do something like postulate some sort of oracle for them which offers finite clues about their identities, something like spitting out successive digits of their decimal expansions.  But then what?  You could start comparing the outputs of the oracles, and if the numbers are not equal you will eventually find out.  But if they are equal you will never know!  Comparing NULUDs for equality is analogous to trying to solve the halting problem by running the program and waiting to see if it halts.  So not only do you need to postulate oracles for the numbers themselves, you have to postulate an oracle for their equality!  That's an awful lot of postulating.
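
As a sketch of that comparison process (with hypothetical digit oracles standing in for the numbers):

    ;; Each oracle is assumed to map N to the Nth decimal digit of its
    ;; number.  Scanning for a disagreement halts iff the numbers
    ;; differ (modulo dual representations like 0.4999... = 0.5000...);
    ;; if they are equal, this loop runs forever.  Inequality is
    ;; semi-decidable; equality is not.
    (defun first-difference (oracle-a oracle-b)
      (loop for n from 0
            unless (= (funcall oracle-a n) (funcall oracle-b n))
              return n))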

In fact, an oracle for equality of two NULUDs is a mighty powerful oracle.  Such an oracle is even more powerful than an oracle for the halting problem, which only has to deal with computable numbers (of which there are only countably many).

What does any of this have to do with the AoC?  Well, if you are going to "choose" a number from a set you need to be able to determine whether the number you chose is a member of the set, i.e. whether or not it is equal to a member of the set.  To do that for a set of NULUDs you need an oracle for equality of NULUDs, which means you need an oracle that is more powerful than an oracle for the halting problem.

So you see, asking someone to accept the AoC for all sets is asking an awful lot.  The AoC hides a lot of power.

But let us grant ourselves all of this extraordinary oracular power.  I claim that you will still not be able to describe a procedure by which you can choose a member of an arbitrary set of NULUDs.  Again, think about it.  You can't treat the set like a bag where you can just reach in and grab some of its contents.  The contents of a set of NULUDs are a bunch of things that can't be described, and so the set itself cannot be described.  The best you can do, given what we have allowed ourselves, is to somehow pick an arbitrary NULUD and invoke the equality oracle on every member of the set to see if you got lucky.  That process is not guaranteed to converge.

Now, a mathematician (as opposed to a computer scientist like me) would read the above talk of oracles and procedures and say that this is all rubbish, that all I need to do to "give" you two NULUDs is to say, "Let x and y be two (un)equal NULUDs."  And what we need for the AoC is not a procedure but a function, specifically a choice function.  And here, they will say, there is no problem because there are many functions that can be defined perfectly well on NULUDs.  f(x) = x+1 for example.

I don't deny this.  But defining a function on NULUDs is not the same thing as defining a choice function.  A choice function is not a function on NULUDs, it is a function on arbitrary sets of NULUDs whose value is a NULUD that is guaranteed to be a member of the set.  That is a very tall order.  I challenge anyone to come up with a description of such a function, even an informal one, that does not invoke the AoC.  (That would, of course, be cheating.)  And if no one can come up with a description of such a function, a reasonable person could hypothesize that this is because such a function does not exist.  Indeed, the AoC exists because the only way we can produce such a function in some circumstances is by postulating it.  If we could describe a choice function for all circumstances, we would not need the AoC.  Of course, I cannot prove that NULUDs are such a circumstance, but it seems plausible to me.  I can't even begin to imagine how you would do it.  (But maybe some mathematician out there can enlighten me.)

Of course, none of this disproves the AoC.  Like I said at the beginning, the AoC is not false (at least not under standard set theory).  My point here is just to describe a set of circumstances under which it is not self-evidently true.

Monday, January 23, 2023

An intuitive counterexample to the axiom of choice

Time for some hard-core geeking-out.

This comment on HN by /u/jiggawatts struck me as a brilliant idea: it's an intuitive counter-example to the axiom of choice, which seems intuitively obvious but leads to weird results like the Banach-Tarski paradox.

For those of you who are not hard-core geeks, the axiom of choice says (more or less) that if you are given a collection of non-empty sets, you can choose a member from each of those sets.  That seems eminently plausible.  How could it possibly not be true?

Here's how: consider the set of numbers that cannot be described using any finite collection of symbols.  Such numbers must exist because there are only a countably infinite number of numbers that can be described using a finite collection of symbols, but there are an uncountably infinite number of real numbers.  So not only are there numbers that cannot be described using a finite number of symbols, there are vastly more of these than numbers that can be so described.

And yet... how would you describe such a number?  By definition it is not possible!  And so it is not at all clear (at least not to me) what it would even mean to "choose" a number from this set.

This is, of course, not a proof that the axiom of choice is wrong.  It's an axiom.  It can't be wrong.  But it is a good example for casting doubt on its intuitive plausibility, and that feels like progress to me.

Thursday, January 12, 2023

I am offended by Muslims being offended

An adjunct professor at Hamline University was fired for showing an historically significant painting of the prophet Mohamed, peace be upon him.  (Note: I did not add PBUH to be facetious.  PBUH is an honorific, like Ph.D.  I added it to show respect to Muslims who believe that Mohamed was in fact a prophet, notwithstanding that I believe they are wrong.)

This is the painting in question:

[Image of the painting, described in the NYT excerpt below.]

From the NYT article:

The painting shown in Dr. López Prater’s class is in one of the earliest Islamic illustrated histories of the world, “A Compendium of Chronicles,” written during the 14th century by Rashid-al-Din (1247-1318).

Shown regularly in art history classes, the painting shows a winged and crowned Angel Gabriel pointing at the Prophet Muhammad and delivering to him the first Quranic revelation.

Much ink has already been spilled debating the merits of Hamline's actions and I don't have much to add, but there is one thing that I haven't seen anyone point out yet: this is not a painting of Mohamed.  This painting was made centuries after Mohamed died.  The artist had no idea what Mohamed actually looked like.  No human who was not alive in Mohamed's time knows what he looked like because there is no record of what he actually looked like.  The image above is either a likeness of a contemporary (to the artist) model, or a product of the artist's imagination.  This is an image that the artist imagined to be a likeness of Mohamed, but actually isn't.

So Muslims who are offended by exhibiting this painting are not offended by the exhibition of a likeness of Mohamed, because there are no likenesses of Mohamed.  Muslims who are offended by this image are offended not by the image itself, but by the (false) claim that this image is a likeness of Mohamed.  They are offended not by the image but by the imagination of a long-dead artist, one who, ironically but not insignificantly, was himself a Muslim.

If you think that is a reasonable thing to pander to, well, then I am offended by you.  Again, I am not saying that to be facetious.  I am sincerely offended by your belief that I have an obligation not to offend you.  And now we have a problem: should my offense be sufficient cause to make you lose your job because you want me to lose my job for having offended you?  Because if your answer is yes, then we are headed for 100% unemployment.  No one will be able to work anywhere if a necessary condition for keeping your job is to not say or do anything that offends anyone.

On the other hand, if you think your offense is somehow privileged, that you should get to keep your job despite having offended me while I have to lose mine for having offended you, now we have a different problem: you now have to provide some justification for what makes your offense so much more worthy of accommodation than mine.  And you cannot ground your justification in religion because a belief in freedom of expression is part and parcel of my most deeply held beliefs.  The only way you will be able to justify your privileged position is to argue that your deeply held beliefs should trump mine for some reason.

(Aside: freedom of expression definitely does not mean that a private entity cannot exercise editorial control over what appears in venues it controls.)

There are only three alternatives here: either Muslims are elevated to a privileged position where their offense trumps everyone else's, or they have to be told by society to suck it up, or we enter a death spiral of social paralysis where no one can do or say anything out of fear of offending someone.

Figuring out which alternative I would advocate is left as an exercise for the reader.