Tuesday, April 29, 2008
A "conversation" with Tim Keller
[UPDATE: It turns out I may have gotten at least one important aspect of Tim Keller's theology wrong. Keller is a Presbyterian, and hence a Calvinist, and hence believes in predestination. Whether or not this means that he does not believe in free will I cannot say. To me, predestination is logically incompatible with free will, but I have no idea if Keller agrees with this. See the postscript at the end for more about why I cannot correct this error.]
Ron: Hello, Rev. Keller. Thank you for taking the time to talk to me.
Tim: You are most welcome. It is no great imposition, particularly since you are letting me borrow some of your brainpower to hold up my end of the conversation.
R: Yes, that will actually come to be quite relevant later on in our little chat. But I'm getting way ahead of myself. I invited you here because I saw the video of the talk you gave at Google and I was quite impressed (not convinced, mind you, just impressed), enough that I was motivated to buy and read the book you were plugging...
T: Please, I was not "plugging" my book. It's just that the issues I raise are so complex and nuanced that an hour is not nearly enough time to do them justice, so I have to point people to the book for the details.
R: I apologize for my poor choice of words. Let us return to the matter at hand. I think you are striving towards some very important goals, but I disagree with some of the conclusions that you come to. I would like an opportunity to challenge you on some of these points, and maybe even reach some common understanding. (That is what you're aiming for, isn't it?)
T: Indeed it is. The central motivation for my book was to address the increased polarization of society between faith on the one hand and secularism on the other. Both sides are growing increasingly belligerent in their rhetoric, and I fear that if this trend is not reversed the result will be social catastrophe.
R: Yes, I completely agree. So let us build this exchange on the foundation of that bit of common ground. You also argue that secularism requires no less a leap of faith than religion.
T: That's right. All beliefs are grounded in unprovable assumptions, which is to say, in faith of some sort. But the defense of my position is rather lengthy and is laid out in detail in...
R: ... your book, yes, I know. If I may, I think we can short-circuit this part of the conversation because I actually agree with you. Science (or secularism or atheism or whatever you want to call non-belief in God) requires just as much a leap of faith as any religion. In fact, I have encouraged (to the extent that a non-academic like myself is able to do so) my secular brethren to embrace the idea that Science (with a capital S) is a religion. I think it's a much stronger position than the usual view that Science is fundamentally different from all other belief systems.
T: Well, I find that quite disarming. You are the first atheist (do you mind if I call you that?) that I have met that has conceded that point so quickly. The amount of common ground we are finding here (without even trying very hard) is quite remarkable. Perhaps we will find that we agree on everything and we can cut short this entire conversation and go get a bite to eat.
R: You can call me an atheist if you must, but I don't like the term because it has too much baggage associated with it. In particular, I do not identify with much of the vitriolic rhetoric coming from people like Dawkins and Harris. That's one of the reasons I wanted to talk to you, because I think there's a chance for us to make some real progress here. Alas, I am not quite so sanguine about the possibility of adjourning by lunch time, but who knows? Stranger things have happened.
T: Indeed. Miracles happen all the time.
R: We'll see. But as long as we're on a roll I'd like to agree with another one of your propositions, which is that one cannot prove that God does not exist. Moreover, I'm sure that Dawkins and Harris would readily concede this. You cannot prove a negative. Dawkins' answer to this is also my answer: indeed you cannot prove that God (by which is meant the god of Abraham) does not exist, but neither can you prove that Thor or Cthulhu or the Flying Spaghetti Monster does not exist.
T: I think there's a lot more evidence for Jesus than there is for the Flying Spaghetti Monster.
R: Perhaps. The point is just that your argument that God's existence cannot be disproven is a straw-man. Even the most strident atheist will readily concede that point.
T: Very well. What is your position then?
R: I'll get to that in a minute. But first, I'd like to propose one more bit of common ground that we might use as a foundation for this discussion. (I predict this will be the last one.) See this basketball?
T: Well, no, actually I don't.
R: OK, work with me here. Suspend your disbelief for a moment (have a little faith?) and imagine that we are really having this conversation, and that I am holding a basketball in my hands.
T: Very well.
R: Do you believe that this basketball exists?
T: I sense a rhetorical trap being set, but OK, I'll bite. Yes, I believe this basketball exists.
R: I give you my word that this is not a rhetorical trap. The point I want to make is not nearly so facile. The only reason I'm resorting to making it with an imaginary basketball is that you're not really here. If you were to make some time to actually meet with me I would make the exact same argument with a real, physical basketball. (It doesn't even have to be a basketball. Any every-day object will do. I just happened to pick a basketball because I thought the word had a certain pleasing way of rolling trippingly off my keyboard.)
T: Very well. Since we seem to be building a relationship of mutual trust and respect here I will concede the point and stipulate that this basketball exists.
R: Good. Now, I will go further and claim that at least part of the reason that you believe that this basketball exists is that you can directly experience it. If I dribble it [bounce!] you can hear the sound it makes. If I toss it to you (think fast!) you can feel it. Yes?
T: Well, applying suspension of disbelief here (since in point of fact this basketball does not really exist), yes, I will agree that if there were a real basketball here, my direct physical experience would figure prominently in the thought process that leads me to conclude that it exists. However there is still a leap of faith involved because I have to assume that my senses and thought processes are reliable. I can't prove that.
R: Yes, I thought we had already agreed that at root everything requires some leap of faith. But here, let me help you with your suspension of disbelief.
[A basketball suddenly appears out of nowhere.]
T: Say, that's a pretty neat trick. How did you do that?
R: A magician never reveals his secrets.
T: So let me try to anticipate where you're going with this. You want to use this basketball as an example of a material object whose existence no one doubts. Is that right?
R: Wow, you're good. It's almost as if you could read my mind.
T: Yes, well, my ESP is working better than normal today.
R: Indeed. So yes, that is exactly right. The last bit of common ground that I want to establish is that there are things in the world -- like basketballs -- whose existence is wholly uncontroversial at least in part because they can be directly experienced. I do not claim that this in any way proves that this basketball actually exists in a metaphysical sense, only that we -- and most people in the world -- can agree that it exists (even though we cannot definitively eliminate the possibility that we could be wrong). In fact, we can reach such a strong level of consensus that if we met someone who genuinely denied the existence of this basketball we would question their sanity.
T: Or think them to be a philosopher.
R: Doesn't that amount to the same thing? [wink]
T: If I were really here I would give you a wry look. But in any case, I am willing to accept the existence of everyday material objects as uncontroversial, at least within the context of a discussion of the social discord brought about by the polarization of secularism and faith.
R: Good. I'm pretty sure that's the last thing we'll agree on for a while.
T: Don't be so pessimistic.
R: Don't underestimate my prophetical abilities. I am now going to argue that by your own standards atheism is better than Christianity.
T: Hm, you may well have been correct about our leaving common ground behind for a while.
R: Yes, I thought as much. So let us make sure we're on the same page about what your standards are. In your book you argue that Christianity ought to be taken seriously (at least) because it offers the best hope for bringing people together and healing the societal rift between the religious and the secular.
T: Again oversimplified, but basically correct.
R: OK, first I would like to point out that this is at odds with what Jesus himself said. "Think not that I am come to send peace on earth: I came not to send peace, but a sword. For I am come to set a man at variance against his father, and the daughter against her mother, and the daughter in law against her mother in law." (Matthew 10:34-35)
T: You are quoting those verses out of context, and seriously misinterpreting what they mean. Jesus is simply prophesying (correctly, I might add) what the result of his ministry will be. It wasn't his intent to bring about discord. It is man's imperfection and inability (or unwillingness) to accept His Word that causes it.
R: I have always found it odd that a supposedly all-powerful God can be rendered impotent by man's obstinacy (or imperfections).
T: Now you are the one erecting straw-men. God is not impotent. He chose to give us free will. It is one of His greatest gifts to us.
R: Raymond Smullyan had some interesting things to say about that. But we must be careful not to get distracted by too many tangents or we'll be here all day. My point is just that empirically, Christianity has not been particularly effective at bringing about the social synthesis that you seek, and that was anticipated by no less than Jesus Himself. Indeed, in the U.S. most of the belligerence on the religious side of the social divide comes from people who call themselves Christians.
T: I most emphatically do not see eye-to-eye with the Westboro Baptist Church.
R: I didn't think you did. The fact that you do not agree with them is precisely my point: even among those who call themselves Christians there are huge disagreements about what Christianity is all about. And these disagreements go back to the very beginnings of Christianity. It wasn't until the Council of Nicaea, three hundred years after Jesus's death, that Christians even managed to agree on whether or not Jesus was divine. And this sort of thing permeates the history of the church even to this day.
T: Well, you have Dawkins and Harris (and Hitler and Stalin). We have Fred Phelps.
R: Hitler was a Catholic, but again let us not get distracted by tangents. The point is not that Christianity has its extremists. The point is that Christianity cannot even heal the rifts within itself. That does not bode well for Christianity as a path for healing the rifts in society as a whole.
T: I never said it would be easy.
R: Indeed not. But the fact that it is not easy cannot be so lightly dismissed.
T: I do not dismiss it lightly.
R: But you just did. You said, "I never said it would be easy," and left it at that as if there was nothing more to be said. But (and this is crucial) the fact that it is not easy completely undermines your position.
T: I don't see how.
R: Well, your position is that we are saved from sin and evil through the death and resurrection of Jesus. But the mere fact of his death and resurrection are not enough. You have to believe that Jesus died for our sins in order to reap the benefits.
T: That is technically correct, but your choice of words is misleading. It is not like God is playing some kind of game where he challenges us to profess belief in some arbitrary incredible thing or be damned for all eternity. God is not so petty. We achieve salvation through God's grace, and it is simply impossible -- spiritually, physically, logically impossible -- to receive Grace without believing that it is real. Grace is like love. (In fact, Grace is love.) It is not possible to receive someone's love if you don't believe that they love you.
R: Of course it is. When I was growing up there were many occasions when I was convinced that my parents hated me, like when they made me eat my lima beans for example. But that didn't change the reality that in fact they did love me, and that I was the beneficiary of that love.
T: Hm, that's actually not a bad metaphor. I may want to use it in my next sermon on theodicy. Yes, your parents did love you even as they watched you gag on your lima beans, just as God loves us even as he watches us suffer. But as long as you were angry with your parents your relationship with them was imperfect. To fully realize a loving relationship there has to be both love and a belief that that love is real.
R: Agreed. But the problem is that the nature of God's love is not so clear.
T: It is quite clear to me.
R: Yes, and if everyone in the world could easily achieve the same level of clarity we would not be having this discussion. Just as we don't have to spend a lot of time arguing about basketballs.
T: Well, that is why I wrote my book.
R: Which brings me back to the same point: why was it necessary for you to write your book? Why is God's Word so obscured that it requires so many books to be written about it?
T: Because love is more complicated than basketballs.
R: To be sure. But why didn't God write your book? Why didn't God communicate his Word in such a way that it would be understood without the need for all this additional clarification? Either it is possible to communicate the Word in a way that will be understood or it is not. If it is, why didn't God just communicate that way to begin with? And if it isn't, aren't your efforts futile?
T: Maybe God is using me (and my book) to do exactly what you suggest.
R: That is possible. But I read your book and found it utterly unconvincing, so God still doesn't seem to be doing a very good job of getting through, at least not to me.
T: God is not going to force himself on you. You have to let him in.
R: I would like nothing better. Truly, if God is real, I want to know. But I've read the Bible (not every word, but a lot of it) and I've read your book and many others besides, and I am still not convinced.
T: Not convinced of what?
R: Of the central tenets of Christianity (as described by you in your book): that a triune God created me in His image, that I am separated from God by sin, that God became man and died on the cross to redeem me. I don't believe any of that.
T: Can you tell me why?
R: I could, but that would be a very long conversation, and it would be mostly tangential to the real point I want to make.
T: How about just a few examples. It would be helpful for me to know where my book falls short.
R: Very well, if you insist. There are so many problems it is hard to know where to begin. Let's see. How about this. In chapter 6 you address the issue of the apparent conflicts between science and the Bible. (I applaud you for taking on this issue by the way. It is very important.) You write:
"[I]t is false logic to argue that if one part of the scripture can't be taken literally then none of it can be. That isn't true of any human communication."
What you say is true, but it undermines your position in two ways. First, the Bible is not (according to you) a human communication. It is the inerrant Word of God.
T: It is still a human communication. The Bible was written by humans, albeit inspired and guided by God.
R: OK, but that leaves you with the second problem: if not all of the Bible can be taken literally (and I will note in passing that not all who call themselves Christians will concede that) then you are left with the problem of deciding which parts can and which cannot. How can you possibly make those decisions? Logically there are only two alternatives. Either you take the fundamentalist position that the Bible is perfect and every word should be taken literally, or you have to rely on some extra-Biblical authority to pass judgment on how any given Biblical passage is to be interpreted (because the Bible itself provides no explicit guidance in that regard).
T: It's pretty clear that parts of the Bible are just poetry. The Song of Solomon for example...
R: Maybe it's clear to you. It's not clear to a fundamentalist. And it's certainly not clear to me. (As far as I can tell, the entire Bible is nothing but a collection of bronze-age myths.) How do we resolve this conflict? We obviously can't rely on the Bible to do it.
T: There is independent corroboration for much of what the Bible says. And in particular, there is overwhelming evidence that what the Bible has to say about Jesus is really true, and that is what really matters.
R: You've changed the subject, but I'll let that slide. Very well, let's talk about Jesus. Some of the things you say are just flat-out wrong. For example, you write:
Jesus's miracles ... were never [just] magic tricks.... You never hear him say something like, "See that tree over there? Watch me make it burst into flames."
It is ironic that you would choose that example, because that is almost exactly what Jesus does in Matthew 21:18-22.
You also write, in support of the Biblical account of the Resurrection:
For a highly altered, fictionalized account of an event to take hold in the public imagination it is necessary that the eyewitnesses (and their children and grandchildren) all be long dead.
First, it is far from clear that the (alleged) eyewitness accounts of the Resurrection took hold "in the public imagination" before the (alleged) witnesses (and their children and their grandchildren) were dead. Even by your own reckoning, the very earliest accounts of the Resurrection were not written until ten or fifteen years after it had happened, and the earliest Gospel (Mark) was not written until thirty or forty years later. Furthermore, the earliest known copies of Mark do not include an account of the Resurrection!
Second, there are eyewitness accounts of all kinds of things that (almost certainly) didn't actually happen. Bigfoot. Alien abductions. Witchcraft. The miracles of Mohammed.
Third, there are internal inconsistencies in the Resurrection accounts. For example, Mark reports Mary, Mary and Salome finding Jesus's empty tomb in great detail, even quoting the "young man dressed in a white robe sitting on the right side." (Who was this young man? He couldn't have been an angel because the Bible says unequivocally that he was a man.) But then it says, "Trembling and bewildered, the women went out and fled from the tomb. They said nothing to anyone, because they were afraid."
If they said nothing to anyone, how did the author of Mark know what had transpired?
It gets worse. 1 Corinthians 15:5 reports that Jesus "appeared to Peter, and then to the Twelve." This is at odds with Mark, which does not report a separate appearance to Peter. Moreover, who are "the Twelve"? Presumably these are the twelve disciples. But there is one little problem: one of the Twelve was Judas Iscariot, and Judas was already dead, having hanged himself three days earlier. (Mark gets this right, saying that Jesus appeared "to the eleven.")
1 Corinthians then goes on to say that Jesus, "...appeared to more than five hundred of the brothers at the same time, most of whom are still living, though some have died." You cite this as evidence that the Resurrection must have happened because:
Here Paul... lists the eyewitnesses. Paul indicates that the risen Jesus not only appeared to individuals and small groups, but also appeared to five hundred people at once, most of whom were still alive at the time of his writing and could be consulted for corroboration.
But Paul does not "list" the eyewitnesses! He only says that there were 500 of them. He doesn't say who they actually were. So how exactly would one consult them? (To say nothing of the fact that the 500 were "of the brothers", that is, they were believers, and so any account they had of seeing their spiritual leader risen from the dead would be suspect to say the least.)
Fourth, you argue that the Resurrection must have happened because it was a singular event in history. You write, "the Christian view of resurrection, absolutely unprecedented in history, sprang up full-blown immediately after the death of Jesus." But Jesus's resurrection was not unprecedented. There was at least one other resurrection that preceded it: Lazarus was also raised from the dead. Not only that, but Lazarus was resurrected after being dead four days, not just three. So not only was Jesus's resurrection not unprecedented even by Biblical standards, he didn't even set the record for longest time dead before coming back!
T: But Lazarus was resurrected by Jesus!
R: Why should that matter? Don't forget, we're not discussing Jesus's ability to perform miracles here, we're discussing whether or not the Resurrection really happened. If you wish to argue that we should believe in the historicity of the resurrection in part because it was an unprecedented event, how do you account for the fact that A) it was not an unprecedented event according to the Bible and B) Lazarus's resurrection -- which was even more remarkable in and of itself than Jesus's (because it came first and Lazarus had been dead longer) -- attracted no historical attention at all? There is not a single independent account of it anywhere outside of the Gospel of John (which, by the way, was written decades after these events supposedly took place). It is inescapable. The more I study the Bible the clearer it becomes to me that it has only the most tenuous grounding in historical fact. (And I'm not the only one. Bart Ehrman thinks so too, and he's a born-again Christian! Or at least he was.)
T: My, you've covered a lot of territory here. May I respond?
R: You may, but it's important to keep in mind that my aim here was not to convince you that the Resurrection didn't happen. I can't prove that, and I know there's no hope of convincing you that I am right. My aim is simply to show you some of the reasons that *I* don't believe in the Resurrection, and to hopefully convince you that I've come to this conclusion not out of ignorance or prejudice but after careful study and consideration. I don't want to convince you that I am right. My hope is only to convince you that my position is defensible, that the case for the resurrection is not quite the slam-dunk you say it is.
T: Well, I think it is a slam-dunk, but I will grant that you seem to have given it careful consideration, even if I think you've reached the wrong conclusion.
R: That's good enough for now. I don't want to reach agreement about the Resurrection (because that's hopeless). What I hope to reach agreement on is simply the proposition that reasonable people can disagree about it in a way that reasonable people cannot disagree about, say, the existence of this basketball here. Will you concede that much?
T: I am very reluctant to do so, but this discussion (and my empty stomach) has left me emotionally drained, so I suppose I will for now. I feel sorry for you.
R: Really? Why?
T: Because you do not know God's love. You must be a very empty person. If you don't believe that you were created by God in his image then you must believe that you are just some kind of cosmic accident whose existence has no purpose or meaning.
R: I believe no such thing, and I will thank you not to make such presumptions.
T: Under the circumstances I think it's unfair for you to take me to task for "my" choice of words.
R: But those words are an almost direct quote from your book: "[T]he nonexistence of God ... not only makes all moral choices meaningless, it makes all life meaningless too."
T: Well, doesn't it? If we are just random agglomerations of matter, what can possibly provide life with transcendent meaning?
R: Information.
T: I'm afraid you lost me.
R: Let me explain. You believe that a person has intrinsic value.
T: That's right, because we are created in the image of God.
R: Does a person's intrinsic value diminish in any way if, say, they lose an arm or a leg?
T: No, of course not.
R: How about an eye?
T: No, of course not.
R: Both eyes?
T: A person's intrinsic value is not diminished no matter how many body parts they lose. That we are created in God's image does not mean that we are physically like God, it means that we are spiritually like God, that we are capable of love...
R: What if they lose their heart?
T: I assume you don't mean that in the poetic sense.
R: No, I mean it literally. Does a person with an artificial heart have any less intrinsic value than someone with a biological heart?
T: The idea that love is resident in the heart is just a fanciful metaphor. The heart is just a pump, and losing it no more diminishes a person's intrinsic value than losing a limb, or even a fingernail.
R: For what it's worth, I'm pretty sure most atheists would agree with that. Now for the last question: what if someone loses their brain? Imagine, say, a drowning victim who is rescued, but not before their brain has been deprived of oxygen. Their body is revived. They are breathing. Their heart is pumping. But there is no detectable activity in their brain, and all medical indications are that it has been damaged beyond repair. They are "brain-dead." Is this person's intrinsic value diminished?
T: That is a very difficult question.
R: Indeed, and I actually don't need you to answer it. But let me give you another example which might make it easier to reach a conclusion. This is a true story. There once was a woman named Henrietta Lacks who died in 1951 from cervical cancer. But before she died some of her cancer cells were cultured, and the descendants of those cells are still alive. They are human cells. They have a full complement of human DNA (specifically, Henrietta's DNA). But I trust you would agree that those cells do not have the same intrinsic value as an intact human being.
T: This is getting quite morbid.
R: I'm sorry about that, but I don't know of a gentler way to make this point. Henrietta's cancer cells are alive. They are life. Moreover, they are human life. But I think you would be very hard-pressed to find a lot of people who believe that they have the intrinsic value of a person. So there is something wrong with the slogan, "All human life is sacred."
T: I think that slogan means, "All human beings are sacred."
R: Exactly. Or to put it another way, all persons are sacred, or have intrinsic value, or whatever you want to call it. But that raises the question: what makes a person? And I submit to you that what makes a person is not necessarily that they were created in the image of God. There is an alternative, principled scientific account of what makes a person special, namely, that they have a functioning brain. Our DNA makes us human but it is our brains that make us people.
T: I still don't see what this has to do with the idea that "information" is what gives life transcendent meaning.
R: That's understandable. I'm not using the term in its usual everyday sense. I don't mean that, for example, the information you find in, say, a phone book gives life meaning. I mean it in a much broader and technical sense, in the way that computer scientists or mathematicians mean it. I mean it in the sense in which it answers the question: what makes brains special? And the answer to that question is: brains are special because of their capacity to process information.
T: I think that is quite possibly the most ridiculous thing I have ever heard.
R: Really? Why?
T: Well, for starters, computers process information too. Does that mean that computers should be considered human? Even DNA contains information. By that argument, Henrietta Lacks's cancer cells should be considered human. No, I'm afraid you've gone completely off the rails here. The reason brains are special is because they are the conduit to the human soul, and it is the soul that makes humans special, not brains per se.
R: It is true that computers (and DNA) process information. But you have made a basic logical fallacy. Brains are special because they process information. It does not follow that everything that processes information is special the way brains are. That is the fallacy of affirming the consequent. Human brains are special because they can process information in ways that no computer can (yet). Also, I should point out that you are using the term "human" where you should be using the term "person." It is the soul (or brains) that makes people special. It is an important distinction.
T: Why?
R: Two reasons. First, sloppy thinking about humans vs people leads you to all kinds of ethical conundrums, like whether or not Henrietta Lacks's cancer cells should be accorded human rights. If you distinguish the concept of "person" from the concept of "human" those kinds of dilemmas simply evaporate because everyone can agree that while Henrietta's cells might be human, they are most assuredly not a person. And second, it leaves one open to the possibility of some day encountering a person who is not human.
T: Wasn't it you who was making disparaging comments about Bigfoot earlier in the conversation?
R: I'm not talking about Bigfoot. I'm talking about the possibility of encountering intelligent life on other planets, or even creating artificial intelligence here on this planet. Suppose you met an intelligent alien, would you accord it "human" rights? From a religious point of view this would be quite a sticky issue, but from the information-centric point of view it would not. An intelligent alien would be a person by virtue of its having a brain (or something equivalent that was the seat of its intelligence). As far as we know at this point, all persons are human. But it won't necessarily always be so.
T: Your reasoning is circular. You've used the information-processing capabilities of human brains to define the threshold necessary to be considered a person, and then used that definition to conclude that humans have intrinsic value. You could just as easily have skipped a step and just said that humans have intrinsic value to begin with.
R: It's not circular because information (and information-processing capability) can be objectively defined independently of humans. And using this human-independent definition it is quite clear that human brains are special. Human brains can do things that nothing else in the known universe can do. They can talk (and listen). They can laugh. They can imagine. Those are quite amazing feats.
T: More evidence of God's design.
R: Or the complexity that evolution guided by natural selection is capable of producing. The point is (and this is important) we don't have to agree on how human brains got to be the way they are in order to agree that human brains are special. Furthermore, we can also agree that human brains are special at least in part because of what they do, and not because of how they came to be.
T: I'm still waiting for you to get to the part about transcendent meaning.
R: Once you accept that brains are special because of what they do that leads you inexorably towards many of the same moral and metaphysical conclusions that religion does. "Human rights" (which should properly be called "person's rights") for example, follow directly from the proposition that brains are special. People have rights because people have brains and brains are special. Killing a person is wrong because by killing a person you destroy his brain and that's bad because brains are special.
And it's not the brain per se, it's really the information stored inside that brain that's the really valuable commodity, because that is what makes us who we are. So Alzheimer's is bad because it destroys the information in a brain while leaving the brain itself (and the body that contains it) intact. You can derive other moral principles from this as well. For example, true information (usually) has more value than false information, which is why lying is (usually) bad.
The point is, you don't have to believe in God or a soul to reach (nearly) all of the same conclusions about life and its value and its purpose as religious people do. All you have to believe in is the specialness of brains. And that is a much easier thing to get people to agree on.
T: Well, I can't say I can find any overt flaw in your reasoning, but I can't say it makes me feel very warm and fuzzy. The idea that I am a child of God gives me much more comfort than the proposition that "I am my brain."
R: But that's actually part of the beauty of science. It doesn't demand faith. Science works whether you believe in it or not. That's another reason why science is a much better basis than religion for reconciling the rift between the two. Religion only works for believers. Science works for everyone.
T: Saying that science is right is not exactly what I'd call reconciliation.
R: I didn't say that science is right. I said it was a better basis for reconciliation in part because it does not demand faith. To serve this purpose it doesn't really matter whether or not it is right, all that matters is that everyone agree. And I submit to you that it's going to be a lot easier to get people to agree to the principle that brains are special than to agree to Christianity. In fact, it's probably not much harder to get people to believe in the specialness of brains than it is to agree to the existence of basketballs. Furthermore, the essential elements of the Scientific (note the capital S) teleology are completely compatible with many religious beliefs. You can be a Christian and still accept the proposition that brains are special as a foundation for morality.
T: Say, what happened to that basketball anyway? It doesn't seem to be around here any more.
R: Never mind that now. I think we've done a good day's work here, and I'm famished. Shall we adjourn and grab a bite to eat?
T: Sounds good to me. I know a great little Mexican place around the corner.
R: Hm, beans give me gas. How about Sushi?
T: Never touch the stuff. Italian? There's a place down the street that makes a great osso bucco.
R: I don't eat veal ever since I learned how it is made. Tell you what. How about we just grab a sandwich and go sit under a tree?
T: Sounds good to me.
R: Well, at least we agree on that.
T: It's a start.
---
Postscript: I sent a draft (very nearly identical with the version above) of this essay to Tim Keller (the real one). This is how he responded (via his assistant):
Dear Ron,
Tim looked the dialog over, and he doubts he would have responded to your questions the way you have him responding. He thinks that the imagined dialog would be misleading if it supposed to represent what he would say if he was actually asked that series of queries. He adds that he doesn't think he knows anyone well enough to be sure he could imagine how he or she would actually respond to a long set of real-time questions like that.
Tim is sorry he doesn't have the time to respond to the questions himself. He appreciates your effort and your willingness to show it to him.
Thanks so much.
And then a little while later I got this:
Ron,
One more thought from Tim...
since he doesn't believe he would answer these questions in this way, it wouldn't be right to post this as if he had said these things, since he hasn't and he wouldn't. Thanks, Ron.
To which I responded as follows:
[The "real" Tim Keller (T2) bursts into the room.]
T2: Who is this imposter?
T1: Hello, my name is Tim Keller. Who are you?
T2: You're not Tim Keller, *I'm* Tim Keller.
T1: Why, so you are. I am Tim Keller as imagined by Ron Garret. But it's very good of you to join us. Shall I bow out now?
T2: No, I have a bone to pick with you.
Ron: You shouldn't blame him. I'm really the one you should be angry with. He didn't really have a choice in the matter.
[T2 regards T1 with a quizzical scowl.]
T2: Nothing he's said has been what I would have said. He doesn't even look like me. His nose is all wrong.
R: I'm sorry, I did the best I could under the circumstances. All I had to work with was your picture on Google Video, and the image quality is not the best. But perhaps you'd like to take this opportunity to set the record straight?
T2: No, I'm sorry, I don't have time for that. I'm a very busy man.
R: Then what is it you expect me to do?
T2: Dispose of him.
T1: I'm not sure I like the sound of that.
R: You want me to kill him?
T2: No, I don't want you to kill him. Don't be ridiculous. You can't kill him. He isn't real.
R: Well, despite the fact that he isn't real, I've grown rather fond of him, and I would prefer to keep him around.
T1: Why, thank you!
T2: I don't think that's right.
R: Why not?
T2: Because he doesn't answer questions the way I would, and so it is disingenuous of you to present him as if he were me.
R: Well, *of course* he doesn't answer questions the way you would. He's just a figment of my imagination.
T1: Excuse me, but would you please stop talking about me as if I weren't in the room?
R: Sorry. OK, look, I'll make him go away.
[T1 vanishes in a puff of smoke.]
R: Are you happy now?
T2: No. I want you to expunge all memory of him. I want you to make it as if he never existed. I don't want anyone to ever know about him.
R: You want me to disown my creation?
T2: Yes.
R: And why should I do that?
T2: Because you present him as if he were me, and he isn't.
R: Are you saying that the positions he takes are not your positions?
T2: Yes.
R: Can you be specific? I really tried very hard to represent your views accurately. I can cite you chapter and verse (so to speak) to show that every position he took is supported by something you wrote in your book. Can you tell me where I got it wrong?
T2: No, sorry, I'm a very busy man. No time for that.
R: Well, I'm afraid that leaves me in a very difficult position. And I'm disappointed too. I would have thought you would appreciate this rhetorical device I've chosen, particularly since it was actually your idea.
T2: What? That's ridiculous. I never suggested that you write me into a dialog.
R: That's true, but you did come up with the metaphor of God writing himself into the script. It's in one of the later chapters of your book. (That idea was not original with you, by the way. Douglas Hofstadter used the same device back in the 70's, and for all I know it goes back further than that.) I had hoped you'd see the dialog format as a small homage, not as an insult.
T2: Hmmm....
[Suddenly the *real* real Tim Keller (T3) bursts into the room.]
T3: What's going on here? Who is this? He looks familiar.
R: Ah, Rev. Keller, good of you to join us. Won't you sit down?
I leave it to you to write the next line.
---
To date, Tim Keller has not responded.
Praise for the Prius
We pulled into what is very likely the very last pump-first-pay-later gas station in the United States, possibly the world, and topped it off. By the time gas was bubbling out of the filler hose we'd squeezed 2.7 gallons into the tank.
2.7 gallons!
Holy shit! We'd been averaging more than 70 miles per gallon! I would not have believed it if I hadn't seen it with my own eyes.
The most amazing thing about it is that these were not 200 highway miles. These were 200 miles in the city and on a badly overcrowded state highway. These were 200 frustrating stop-and-go miles. It turns out that the Prius, probably because of the regenerative braking system, actually uses less gas in the city than on the highway. I am duly impressed.
If only it didn't look so goofy I might actually buy one for myself.
Friday, April 25, 2008
This is how prophecy is supposed to work
Tuesday, April 01, 2008
Like father, like daughter
“Wow, you’re the first person actually that’s ever asked me that question in the, I don’t know, maybe 70 college campuses I’ve now been to, and I do not think that is any of your business,” [Chelsea] Clinton said.
And if the question had actually been about Monica I would have had a certain amount of sympathy for this position. But (if Fox News is to be believed, which is always a big IF) it wasn't. The question was about Hillary's credibility in light of her comment about the "vast right-wing conspiracy." According to Fox:
"A male questioner earned a terse response when he asked whether her mother’s credibility had been hurt during the scandal. Before learning the truth about her husband’s relationship with Lewinsky, the former first lady had claimed the allegations against him were fabricated by a “vast right-wing conspiracy.”
That question is very much in bounds, because the fact of the matter is that Bill Clinton did "have sexual relations with that woman, Miss Lewinsky."
I am beginning to think that the entire Clinton family is nothing but a pack of pathological liars.
Saturday, March 29, 2008
Hooray for Hollywood
I arrived way too early for the panel, so I decided to pop over to the Kodak Theatre to grab a bite to eat. The place was crawling with tourists taking pictures of the out-of-work actors in front of Grauman's who hustle for tips by dressing up like famous movie characters. I had a brief chat with a convincing but rather dispirited Albus Dumbledore, who had been out of work for weeks as a result of the writers' strike. That is just not an experience you could ever hope to have in Mountain View.
I once told Paul Graham that one of the things I love about LA is its phoniness, but that wasn't really the right word. The word I should have used but didn't for fear of coming across too starry-eyed was magic. I love LA's magic -- but not in the phony Disney-esque sense of the word. I mean in the real professional-magician sense of the word. The kind of magic a magician does is not real magic. It's acting. It's sleight-of-hand and misdirection and when it's done right it gives a very convincing illusion of magic happening before your eyes. A really good magician makes it look like magic even if you know how the trick is done.
The film industry is all about creating those kinds of illusions. Everyone knows that ET is just a muppet, and that bicycle isn't really flying across the moon, but film can create mighty convincing illusions, and in that sense film is magic. Not "real" wish-upon-a-star kind of magic, but professional-magician kind of magic.
The trick with film is that it takes its magic to a whole nuther level from traditional magic. Traditional magic is designed to elicit only one emotion: wonder. But film can elicit the entire panoply of human emotion: laughter, fear, love, joy, sorrow. What's more, those emotions are real. And because the emotions are real, the illusions that evoked them seem all that much more real and vivid.
There's a kind of delicious and fearsome danger in the extent to which movies present convincing illusions. It makes it hard, for example, to separate actors from the characters they play. People love movie stars because they think they know them when what they really know are the characters they play. Sometimes it becomes difficult even for actors to separate themselves from their characters, because to really act well you have to get yourself to actually feel the emotions that you are portraying. Good acting isn't really acting, it's genuinely feeling, which is one of the reasons that acting can be such an emotionally taxing line of work.
This perilously fuzzy line between illusion and reality permeates the culture of the entire city of Los Angeles. Culturally, it is the diametric opposite of the culture in the Silicon Valley, where everything revolves around the harsh objective reality of MOSFET transistors. People in Silicon Valley are very good at slinging bits and crunching numbers, and are generally (except for VC's) pretty earnest and straightforward. People in LA are the exact opposite. There's a huge amount of phoniness, but it's a particular kind of phoniness that I find fascinating and wonderful. Everyone is dreaming of being a movie star or a producer or a director or selling their screenplay. The fascinating thing about it is that the reality of being any of these things is nothing at all like the dream. Frank Darabont has been nominated for three or four Oscars, and he has to hustle for money to make his next feature just like anybody else. His rolodex is a bit fatter, but other than that he's pretty much in the same boat as everyone else. And we're all driven by this desire to make a particular kind of magic happen, of rendering a transcendent experience on film. That's what I feel when I look up at the Hollywood sign, and that's why I love LA. The Silicon Valley is cool, but it's not magic.
Wednesday, February 27, 2008
I suppose they think setting the building on fire for an evacuation drill would be a good idea too
An armed man who burst into a classroom at Elizabeth City State University was role-playing in an emergency response drill, but neither the students nor assistant professor Jingbin Wang knew that.
The Friday drill, in which a mock gunman threatened panicked students in the American foreign policy class with death, prompted university officials to apologize this week to Wang and offer counseling to faculty and students.
I suspect they're going to be offering a lot more than counseling before this is through.
Complete story [here].
Friday, February 22, 2008
The meaning of innovation
Oh, and this is pretty damn clever too. Things like this give me real hope for the future.
Thursday, February 14, 2008
The litigant that would not die
I don't normally wish my fellow investors ill, but in this case I'll make an exception and say that whoever put up the cash to bring SCO back from the dead deserves to lose every penny. SCO is a parasite, and it should be expunged from the world along with tapeworms, mosquitoes, and malaria protozoa.
CLS
CLS derives its name from the scene in the classic movie The Wizard of Oz where we are introduced to the Cowardly Lion. Not having the courage to stand up to Dorothy, Scarecrow and the Tin Man, the Cowardly Lion decides to prove his manliness (his Lionliness?) by chasing after Dorothy's little dog Toto instead.
CLS is rampant, particularly in American politics. The most recent example is Congress which, not having the courage to stand up to the Bush Administration or the Telcos, decided to chase after Roger Clemens instead. Before that it was Rush Limbaugh and Terri Schiavo and gay people and cancer patients.
Unfortunately, there is not yet a drug approved by the FDA to treat Cowardly Lion Syndrome. I wonder if Congress could work up the courage to fund a study.
Saturday, February 09, 2008
As ye sow...
On that theory, we should be having a Joseph Welch moment one of these days. Maybe this is it.
Thursday, February 07, 2008
What's the right quality metric?
"The reason I'm focusing on the region between axioms and libraries is that, from the programmer's point of view, these operators are the language. These are what your programs are made of. If Lisp were a house, these operators would be the front door, the living room sofa, the kitchen table. Small variations in their design can greatly affect how well the language works.
I've taken a lot of heat for focusing on this"
For the record, I think focusing on this is absolutely the right thing to do, and I applaud Paul for standing up to criticism and doing it. The only thing I disagree with him on is this:
By definition, there is some optimal path from axioms up to a complete language.
This is true only with respect to a particular quality metric, and I think that the quality metric Paul has chosen is brkn.
What do I suggest instead? Glad you asked.
First, let's go back and revisit briefly the question of what programming languages are for. Paul takes as an axiom that programming languages are for making programs shorter but I say they are for making programs easier to create and maintain, which may or may not be the same thing as making them shorter. Certainly, all else being equal, shorter is better, but all else is rarely equal. Sometimes adding length buys you something, and you have to make the tough call about whether the benefit is worth the cost.
It is ironic that Paul would choose such a simplistic metric after going to such great lengths to make the point that programming is like painting. I think this is a great insight, but one of the conclusions is that assessing the quality of a program, just like assessing the quality of a painting, is not such an easy thing to do. Paul's quality metric for programs is analogous to assessing the quality of a painting by counting the number of brushstrokes, and just as absurd. It is a genuine puzzle that Paul of all people needs to be convinced of this.
Programming is all about translating thoughts in people's brains into bits. There is a huge impedance mismatch between brains and bits, and programming languages help to bridge the gap. But there is no more a one-size-fits-all quality metric for programming languages than there is for paintings, or anything else that involves thoughts. There will always be different strokes for different folks.
That is not to say that we need to descend into the black hole of postmodern relativism and say that it's all just a matter of opinion and interpretation. I think there is a criterion that can be used to effectively compare one language against another, but it is very tricky to state. Here's the intuition:
If language A can do everything that language B can do, and language A can do something that language B cannot do, then language A is superior to language B.
(Note that this criterion can produce only a partial-ordering of language quality, but I think that is both inevitable and OK.)
The problem with this as stated is that all languages are Turing-equivalent, so from a strictly mathematical point of view all programming languages are the same. What I mean by "cannot do" is what cannot be done without transcending the framework of the language. Here are some examples:
1. C cannot throw exceptions.
2. Common Lisp cannot support Arc's composition operator (both because the interpretation of an unescaped colon within a symbol name is hard-coded in the standard to have a particular meaning, and because the semantics of function calls are defined within the standard in such a way that calling a composed function would be a syntax error).
3. Arc cannot (as it currently stands) dereference an array in constant time.
4. You can't add new control constructs to Python.
The obvious drawback to my criterion as stated is that it prefers large, unwieldy, kitchen-sink languages to small, elegant languages. So I'll adopt a version of Paul's minimalist metric as a secondary criterion:
If two languages support essentially the same feature set, but language A is in some sense "smaller" than language B, then language A is superior to language B.
In other words, parsimony is desirable, but only secondarily to features (or, more precisely, a lack of constraints). On this view, it is clear why macros are such a big win: macros are a sort of "meta-feature" that let you add new features, and so any language with macros has a huge leg up (in terms of my quality metric) over any language that doesn't.
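To make the "meta-feature" point concrete, here is roughly what adding a new control construct looks like in Common Lisp. This is a sketch of my own (the names WHILE and COUNT-UP are made up for the example; CL already gives you this via LOOP WHILE — the point is that a user could have added it):

;; A new control construct added by a user, not by the language designer:
(defmacro while (test &body body)
  `(loop (unless ,test (return))
         ,@body))

;; Used exactly as if it were built in:
(defun count-up (n)
  (let ((i 0) (acc '()))
    (while (< i n)
      (push i acc)
      (incf i))
    (nreverse acc)))

Calling (count-up 3) returns (0 1 2). No Python user can do the equivalent without patching the interpreter.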
So why am I so reticent about Arc? After all, Arc is pretty small, but it has macros, so it should be able to subsume other languages. And because it will subsume other languages in a minimalist way it will be the best language. Right?
Well, maybe. There are a couple of problems.
First, not all macro systems are created equal, and Arc's macro system has some known problems. How serious those problems are in practice is, perhaps, an open issue, but my quality metric doesn't take such things into account. A language that solves a problem is in my view better than a language that leaves it up to the user to solve that problem, even if that problem doesn't arise very often. (I dislike Python's syntactically-significant whitespace for the same reason, and despite the fact that it mostly works in practice.)
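For the record, the best-known such problem is variable capture. Here is a Common Lisp sketch of the pitfall and of the conventional cure (the macro names are made up; CL's idiomatic answer to swapping is of course ROTATEF):

;; A naive swap macro that introduces a temporary named TMP:
(defmacro bad-swap (a b)
  `(let ((tmp ,a))
     (setf ,a ,b)
     (setf ,b tmp)))

;; It silently misbehaves when the caller also uses the name TMP:
(defun capture-demo ()
  (let ((tmp 1) (x 2))
    (bad-swap tmp x)
    (list tmp x)))      ; => (1 2) -- not swapped!

;; The standard fix: generate an uncapturable name with GENSYM.
(defmacro good-swap (a b)
  (let ((tmp (gensym)))
    `(let ((,tmp ,a))
       (setf ,a ,b)
       (setf ,b ,tmp))))

(defun no-capture-demo ()
  (let ((tmp 1) (x 2))
    (good-swap tmp x)
    (list tmp x)))      ; => (2 1)

A language whose macro system makes the GENSYM dance unnecessary has solved a problem that this one leaves to the user.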
Second, there is more than one kind of macro. Common Lisp, for example, has symbol macros, which turn out to be tremendously useful for implementing things like module systems, and reader macros which let you change the lexical surface syntax of the language if you want to, and compiler macros which let you give the compiler hints about how to make your code more efficient without having to change the code itself.
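To give just the flavor of the first of these, a short Common Lisp sketch (the names *MODULE* and CURRENT-USER are invented for the example):

;; DEFINE-SYMBOL-MACRO makes a bare symbol expand into a form, so a
;; module-local name can transparently become a table lookup:
(defvar *module* (make-hash-table))

(define-symbol-macro current-user (gethash 'user *module*))

;; Reads and writes of CURRENT-USER both go through the expansion:
(setf current-user 'alice)   ; really (setf (gethash 'user *module*) 'alice)

After this, evaluating current-user returns ALICE, which is exactly the mechanism a module system needs to make imported names look like ordinary variables.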
Finally, there are language features like type systems which, I am told by people who claim to understand them (I'm not one of them), are quite spiffy and let you do all manner of cool things.
That's the real challenge of designing the 100-year language. In order to make sure that you're not designing yet another Blub you may need to be able to support features that you don't actually understand, or maybe have never even heard of (or maybe haven't even been invented yet). To think otherwise is, IMO, to deny the vast richness of programming, indeed of mathematics itself.
This is totally unacceptable

The New York Times reports:
"An article about the Prophet Muhammad in the English-language Wikipedia has become the subject of an online protest in the last few weeks because of its representations of Muhammad, taken from medieval manuscripts.
In addition to numerous e-mail messages sent to Wikipedia.org, an online petition cites a prohibition in Islam on images of people.
The petition has more than 80,000 “signatures,” though many who submitted them to ThePetitionSite.com, remained anonymous.
“We have been noticing a lot more similar sounding, similar looking e-mails beginning mid-January,” said Jay Walsh, a spokesman for the Wikimedia Foundation in San Francisco, which administers the various online encyclopedias in more than 250 languages.
A Frequently Asked Questions page explains the site’s polite but firm refusal to remove the images: “Since Wikipedia is an encyclopedia with the goal of representing all topics from a neutral point of view, Wikipedia is not censored for the benefit of any particular group.”
The notes left on the petition site come from all over the world. “It’s totally unacceptable to print the Prophet’s picture,” Saadia Bukhari from Pakistan wrote in a message. “It shows insensitivity towards Muslim feelings and should be removed immediately.”"
I applaud Wikipedia for taking a firm stand on this. In solidarity with them I am posting one of the images in question hosted on my own server. It is totally unacceptable to ask for this image to be removed. It shows insensitivity towards the feelings of tolerant and freedom-loving people throughout the world (to say nothing of bad theology in light of the fact that the image in question was painted by a Muslim!). The demand that this image be removed should be withdrawn immediately.
Z SHRTR BTTR?
Central to this flurry of activity is Paul Graham's quality metric, that shorter (in terms of node count) is better. In fact, Paul explicitly says "making programs short is what high level languages are for."
I have already posted an argument against this position, citing APL as an example of a very concise language that is nonetheless not widely considered to be the be-all-and-end-all of high-level languages (except, perhaps, among its adherents). Here I want to examine Paul's assumption on its own terms, and see what happens if you really take his quality metric seriously.
Paul defines his quality metric at the end of this essay:
The most meaningful test of the length of a program is not lines or characters but the size of the codetree-- the tree you'd need to represent the source. The Arc example has a codetree of 23 nodes: 15 leaves/tokens + 8 interior nodes.
He is careful to use nodes rather than characters or lines of code in order to exclude obvious absurdities like single-character variable and function names (despite the fact that he has a clear tendency towards parsimony even in this regard). So let's take a look at Arc and see if we can improve it.
One of the distinguishing features of Arc compared to other Lisps is that it has two different binding forms that do the same thing, one for the single-variable case (LET) and another for the multiple-variable case (WITH). This was done because Paul examined a large corpus of Lisp source code and found that the traditional Lisp LET was used most of the time to bind only a single variable, and so it was worthwhile making this a special case.
But it turns out that we can achieve the exact same savings WITHOUT making the single-variable case special. Let's take a traditional Lisp LET form:
(let ((var1 expr1) (var2 expr2) ...) form1 form2 ... formN)
and just eliminate the parentheses:
(let var1 expr1 var2 expr2 ... form1 form2 ... formN)
At first glance this doesn't work because we have no way of knowing where the var-expr pairs stop and the body forms start. But actually this is not true. We CAN parse this if we observe that if there is more than one body form, the first body form MUST be a combination (i.e. not an atom) in order to be semantically meaningful. So we can tell where the var/expr pairs stop and the body forms begin by looking for the first form in a varN position that is either 1) not an atom or 2) not followed by another form. If you format it properly it doesn't even look half bad:
(let var1 expr1
     var2 expr2
     ...
  (form1 ...)
  (form2 ...)
  )
OR
(let var1 expr1
     var2 expr2
     ...
  result)
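The parsing rule can be stated as code. This is only a sketch of the rule itself, not Arc's implementation, and SPLIT-BINDINGS is a name I just made up:

;; Split the arguments of a paren-less LET into bindings and body,
;; per the rule: var/expr pairs end at the first form in a var
;; position that is not an atom, or that has nothing after it.
(defun split-bindings (forms)
  (let ((bindings '()))
    (loop
      (if (or (null (rest forms))         ; lone final form: it's the body
              (not (atom (first forms)))) ; a combination: body starts here
          (return (values (nreverse bindings) forms))
          (progn
            (push (list (first forms) (second forms)) bindings)
            (setf forms (cddr forms)))))))

So (split-bindings '(x 1 y 2 (f x y))) yields the bindings ((x 1) (y 2)) and the body ((f x y)).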
In order to avoid confusion, I'm going to rename this construct BINDING-BLOCK (BB for short). BB is, by Paul's metric, an unambiguous improvement in the design of Arc because it's the same length (in terms of nodes) when used in the single-variable case, and it's a node shorter when used in the multiple-variable case.
In fact, we can do even better by observing that any symbol in the body of a binding block that is not at the end can be unambiguously considered to be a binding because otherwise it would not be semantically meaningful. So we can simplify the following case:
(bb var1 expr1
    (do-some-computation)
    (bb var2 expr2
        (do-some-more-computation)
        ....
to:
(bb var1 expr1
    (do-some-computation)
    var2 expr2
    (do-some-more-computation)
    ....
Again, this is an unambiguous win because every time we avoid a nested binding block we save two nodes.
But notice that something is happening that ought to make us feel just a teensy bit queasy about all these supposed "improvements": our code is starting to look not so much like Lisp. It's getting harder to parse, both for machines and humans. Logical sub-blocks are no longer delimited by parentheses, so it's harder to pick out where they start and end.
Still, it's not completely untenable to argue that binding blocks really are an improvement. (For one thing, they will probably appeal to parenthophobes.) I personally have mixed feelings about them. I kind of like how they can keep the code from creeping off to the right side of the screen, but I also like being able to e.g. select a sub-expression by double-clicking on a close-paren.
Can we do even better? It would seem that with binding-block we have squeezed every possible extraneous node out of the code. There simply are no more parentheses we can get rid of without making the code ambiguous. Well, it turns out that's not true. There is something else we can get rid of to make the code even shorter. You might want to see if you can figure out what it is before reading on.
Here's a hint: Arc already includes a construct [...] which is short for (fn (_) ...).
Why stop at one implied variable? Why not have [...] be short for (fn ((o _) (o _1) (o _2) ...) ...)? Again, this is an unambiguous improvement by Paul's quality metric because it allows us to replace ANY anonymous function with a shorter version using implied variable names, not just anonymous functions with one variable.
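In Common Lisp you can even prototype this extended bracket syntax with a reader macro. A sketch only (I stop arbitrarily at three implied variables, and real Arc handles [...] in its own reader, not like this):

;; Make ] behave like ) so READ-DELIMITED-LIST can stop on it:
(set-macro-character #\] (get-macro-character #\)))

;; [...] reads as an anonymous function of up to three implied arguments:
(set-macro-character #\[
  (lambda (stream char)
    (declare (ignore char))
    `(lambda (&optional _ _1 _2)
       (declare (ignorable _ _1 _2))
       ,(read-delimited-list #\] stream t))))

With this in effect, [+ _ _1] reads as a two-argument adder, so (funcall [+ _ _1] 1 2) evaluates to 3.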
And now we should be feeling very queasy indeed. On Paul's quality metric, all those variable names are expensive (every one requires a node) and yet I hope I don't have to convince you that getting rid of them is not an unalloyed good. If you doubt this, consider that we can use the same argument convention for non-anonymous functions as well, and replace the lambda-list with a single number indicating the number of arguments:
(def foo 3 (do-some-computation-on _1 _2 _3))
Again, an unambiguous improvement by Paul's metric.
We can extend this to binding blocks, replacing variable names with numbers there too. In fact, we don't even need to be explicit about the numbers! We can just implicitly bind the result of every form in a binding block to an implicitly named variable! Again, on Paul's quality metric this would be an unambiguous improvement in the language design.
All this is not as absurd as it might appear. If we push this line of reasoning to its logical conclusion we would end up with a language that very closely resembled Forth. Forth is a fine language. It has its fans. It's particularly good at producing very small code for use on very small embedded processors, so if you need to program an 8-bit microcontroller with 4k of RAM it's not a bad choice at all. But God help you if you need to debug a Forth program. If you think Perl is write-only, you ain't seen nothin' yet.
The flaw in the reasoning is obvious: adding nodes to program source has a cost to be sure, but it can also provide benefits in terms of readability, flexibility, maintainability, editability, reusability... and there's no way to make those tradeoffs unambiguously because they are incommensurate quantities. That is why language design is hard.
Now, I don't really mean to rag on Arc. I only want to point out that if Arc succeeds (and I think it could, and I hope it does) it won't be because it made programs as short as they can possibly be. It will be because it struck some balance between brevity and other factors that appeals to a big enough audience to reach critical mass. And I think Paul is smart enough and charismatic enough to gather that audience despite the fact that there might be a discrepancy between Paul's rhetoric and Arc's reality. But IMO the world would be a better place (and Arc would be a better language) if the complex tradeoffs of language design were taken more seriously.
[UPDATE:]
Sami Samhuri asks:
How do you account for the following case?
(let i 0
     (x y) (list 1 2)
  (prn "i is " i)
  (prn "x is " x)
  (prn "y is " y))
Well, the easiest way is (bb i 0 x 1 y 2 ...
;-)
Or, less glibly: (bb i 0 L (something-that-returns-a-list) x (1st L) y (2nd L) ...
Of course, if you want to support destructuring-bind directly you need some kind of marker to distinguish trees of variables from forms. In this case, Arc's WITH syntax, which essentially uses a set of parens to be that marker, becomes tenable again (though it doesn't actually support destructuring even though the syntax could easily be extended to support it). Another possibility is to overload the square-bracket syntax, since that also would be a no-op in a binding position. Now things start to get even more complicated to parse, but hey, shorter is better, right?
But what happens if some day you decide that multiple values would be a spiffy feature to add to the language? Now you're back to needing a marker.
Personally, in my BB macro (in Common Lisp) I use the :mv and :db keywords as markers for multiple-value-bind and destructuring-bind. So your example would be:
(bb i 0
    :db (x y) (list 1 2)
    ; and just for good measure:
    :mv (q s) (round x y)
    ...
I also use a prefix "$" character to designate dynamic bindings, which tends to send CL purists into conniptions. :-)
And to top it off, I use the :with keyword to fold in (with-X ...) forms as well. The result is some very un-Lispy-looking Lisp, but which shorter-is-better advocates ought to find appealing.
Here's the code for BB in case you're interested:
;;; Binding Block
(defmacro bb (&rest body)
  (cond
    ((null (rst body)) (fst body))
    ((consp (1st body))
     `(progn ,(1st body) (bb ,@(rst body))))
    ((not (symbolp (1st body)))
     (error "~S is not a valid variable name" (1st body)))
    ((eq (1st body) :mv)
     (if (symbolp (2nd body))
         `(let ((,(2nd body) (multiple-value-list ,(3rd body))))
            (bb ,@(rrrst body)))
         `(multiple-value-bind ,(2nd body) ,(3rd body)
            (bb ,@(rrrst body)))))
    ((eq (1st body) :db)
     `(destructuring-bind ,(2nd body) ,(3rd body)
        (declare (special ,@(find-specials (2nd body))))
        (bb ,@(rrrst body))))
    ((eq (1st body) :with)
     `(,(concatenate-symbol 'with- (2nd body)) ,(3rd body) (bb ,@(rrrst body))))
    ((keywordp (1st body))
     (error "~S is not a valid binding keyword" (1st body)))
    (t `(let ((,(1st body) ,(2nd body)))
          (declare (special ,@(find-specials (1st body))))
          (bb ,@(rrst body))))))
Note that it would be straightforward to extend this to allow users to define their own binding keywords, which would really help things to spin wildly out of control :-)
Sunday, February 03, 2008
The joy of iterators
Consider this requirement: write a function that prints the elements of a sequence, which may be either a list or a vector. An obvious first cut:

(defun print-elements (thing)
  (etypecase thing
    (list (loop for element in thing do (print element)))
    (vector (loop for element across thing do (print element)))))
This is not particularly elegant. The amount of repetition between the two clauses in the etypecase is pretty annoying. We need to copy almost the entire LOOP clause just so we can change "in" to "across". How can we make this prettier?
The usual fix for repetitive code is to use a macro, e.g.:
(defmacro my-loop1 (var keyword thing &body body)
  `(loop for ,var ,keyword ,thing do ,@body))
But this hardly helps at all (and arguably makes the situation worse):
(defun print-elements (thing)
  (etypecase thing
    (list (my-loop1 element in thing (print element)))
    (vector (my-loop1 element across thing (print element)))))
We could do this:
(defmacro my-loop2 (keyword)
  `(loop for element ,keyword thing do (print element)))

(defun print-elements (thing)
  (etypecase thing
    (list (my-loop2 in))
    (vector (my-loop2 across))))
But that's not a very good solution either. We're not likely to get much re-use out of my-loop2, the code overall is not much shorter, and it feels horribly contrived.
Here's another attempt, taking advantage of the fact that Common Lisp has a few functions built-in that are generic to sequences (which include lists and vectors):
(defun print-elements (thing)
  (loop for i from 0 below (length thing) do (print (elt thing i))))
That looks better, but it's O(n^2) when applied to a list, which is not so good. And in any case it completely falls apart when we suddenly get a new requirement:
Extend print-elements so that it can accept a hash table, and prints out the hash table's key-value pairs.
Now the first solution is suddenly looking pretty good, because it's the easiest to extend to deal with this new requirement. We just have to add another line to the etypecase:
(defun print-elements (thing)
  (etypecase thing
    (list (loop for element in thing do (print element)))
    (vector (loop for element across thing do (print element)))
    (hash-table (loop for key being the hash-keys of thing
                      do (print (list key (gethash key thing)))))))
Now we get yet another requirement:
Extend print-elements so that it can accept a file input stream, and print out the lines of the file.
I'll leave that one as an exercise.
Now, the situation seems more or less under control here, with one minor problem: every time we get a new requirement of this sort we need to change the print-elements function. That's not necessarily a problem, except that if we have more than one programmer working on implementing these requirements we have to be sure that they don't stomp on each other as they edit print-elements (but modern revision-control systems should be able to handle that).
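(As an aside: this extend-without-editing property is exactly what Python's functools.singledispatch provides. The sketch below is illustrative Python, not the Lisp being developed here; each type gets its own independently registered handler, so a new requirement never means touching the dispatching function itself. The name collect_elements is mine, and it returns rather than prints so the behavior is easy to check.)

```python
# Illustrative analogue: each handler registers itself against a type,
# so new requirements are new registrations, not edits to existing code.
from functools import singledispatch

@singledispatch
def collect_elements(thing):
    raise TypeError("don't know how to iterate over %r" % type(thing))

@collect_elements.register
def _(thing: list):
    return [element for element in thing]

@collect_elements.register
def _(thing: dict):  # plays the role of the hash-table clause
    return [[key, value] for key, value in thing.items()]
```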
But now we get this requirement:
Extend print-elements so that it takes as an optional argument an integer N and prints the elements N at a time.
Now all our hard work completely falls apart, because the assumption that the elements are to be printed one at a time is woven deeply into the fabric of our code.
Wouldn't it be nice if the LOOP macro just automatically did the Right Thing for us, so that we could just write, e.g.:
(defun print-elements (thing &optional (n 1))
  (loop for elements in (n-at-a-time n thing) do (print elements)))
and be done with it? Can we make that happen? Yes, we can. Here's how:
(defconstant +iterend+ (make-symbol "ITERATION_END"))

(defmacro for (var in thing &body body)
  (with-gensym itervar
    `(let ( (,itervar (iterator ,thing)) )
       (loop for ,var = (funcall ,itervar)
             until (eq ,var +iterend+)
             ,@body))))
(defmethod iterator ((l list))
  (fn () (if l (pop l) +iterend+)))

(defmethod iterator ((v vector))
  (let ( (len (length v)) (cnt 0) )
    (fn () (if (< cnt len)
               (prog1 (elt v cnt) (incf cnt))
               +iterend+))))
(defun print-elements (thing)
  (for element in thing do (print element)))
The overall code is much longer than our original solution, but I claim that it is nonetheless a huge win. Why? Because it is so much easier to extend. In order to make print-elements work on new data types all we have to do is define an iterator method on that data type which returns a closure that conforms to the iteration protocol that we have (implicitly for now) defined. So, for example, to handle streams all we have to do is:
(define-method (iterator (s stream)) (fn () (read-char s nil +iterend+)))
We don't have to change any existing code. And in particular, we don't need to redefine print-elements.
Furthermore, we can now do this neat trick:
(defmacro n-of (form n)
  `(loop for #.(gensym "I") from 1 to ,n collect ,form))

(defun n-at-a-time (n thing)
  (let ( (iter (iterator thing)) )
    (fn () (let ((l (n-of (funcall iter) n)))
             (if (eq (car l) +iterend+) +iterend+ l)))))
It actually works:
? (for l in (n-at-a-time 2 '(1 2 3 4)) do (print l))
(1 2)
(3 4)
NIL
? (for l in (n-at-a-time 2 #(1 2 3 4)) do (print l))
(1 2)
(3 4)
NIL
Furthermore, n-at-a-time works on ANY data type that has an iterator method defined for it.
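For comparison's sake: because Python builds the iteration protocol into the language, the analogous meta-iterator takes only a few lines. This is a sketch of the idea, not the author's code, and its edge-case behavior differs slightly from the Lisp version (it yields a final partial chunk rather than padding it):

```python
from itertools import islice

def n_at_a_time(n, thing):
    """Yield successive lists of up to n elements from any iterable."""
    it = iter(thing)  # works on lists, tuples, strings, files, generators...
    while True:
        chunk = list(islice(it, n))
        if not chunk:
            return
        yield chunk
```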
There's a fly in the ointment though. Consider this requirement:
Write a function that takes either a list or a vector of integers and destructively increments all of its elements by 1.
We can't do that using the iteration protocol as it currently stands, because it only gives us access to values and not locations. Can we extend the protocol so that we get both? Of course we can, but it's a tricky design issue because there are a couple of different ways to do it. The way I chose was to take advantage of Common Lisp's ability to let functions return multiple values. I extended the FOR macro to accept multiple variables corresponding to those multiple values. By convention, iterators return two values (except for the N-AT-A-TIME meta-iterator): the value, and a key that refers back to the container being iterated over (assuming that makes sense -- some data types, like streams, don't have such keys). This makes the code for the FOR macro and the associated ITERATOR methods quite a bit more complicated:
(defmacro for (var in thing &body body)
  (unless (sym= in :in) (warn "expected keyword 'in', got ~A instead" in))
  (with-gensym itervar
    `(let ( (,itervar (iterator ,thing)) )
       ,(if (consp var)
            `(loop for ,var = (multiple-value-list (funcall ,itervar))
                   until (eq ,(fst var) +iterend+)
                   ,@body)
            `(loop for ,var = (funcall ,itervar)
                   until (eq ,var +iterend+)
                   ,@body)))))
(define-method (iterator (v vector))
  (let ( (len (length v)) (cnt 0) )
    (fn () (if (< cnt len)
               (multiple-value-prog1 (values (elt v cnt) cnt) (incf cnt))
               +iterend+))))

(define-method (iterator (h hash-table))
  (let ( (keys (loop for x being the hash-keys of h collect x)) )
    (fn ()
      (if keys (let ( (k (pop keys)) ) (values k (gethash k h))) +iterend+))))

(defun n-at-a-time (n thing)
  (let ( (iter (iterator thing)) )
    (fn () (apply 'values (n-of (funcall iter) n)))))
Note that N-AT-A-TIME has actually gotten simpler since we no longer need to check for +iterend+. When the underlying iterator returns +iterend+ it just gets passed through automatically.
All this works in combination with another generic function called REF which is like ELT except that, because it's a generic function, can be extended to work on data types other than lists and vectors. The solution to the last problem is now:
(for (elt key) in thing do (setf (ref thing key) (+ elt 1)))
So we've had to do a lot of typing, but the net result is a huge win in terms of maintainability and flexibility. We can apply this code to any data type for which we can define an iterator and reference method, which means we can swap those types in and out without having to change any code. We can define meta-iterators (like N-AT-A-TIME) to modify how our iterations happen. (By default, the iterator I use for streams iterates over characters, but I have a meta-iterator called LINES that iterates over lines instead. It's easy to define additional meta-iterators that would iterate over paragraphs, or forms, or other semantically meaningful chunks.)
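The value-plus-key idea has a direct Python analogue: a keyed iterator yields (key, value) pairs, and the write-back goes through the key. This is a toy sketch with hypothetical helper names, not a real library:

```python
def keyed_iterator(thing):
    """Yield (key, value) pairs for containers whose elements have locations."""
    if isinstance(thing, list):
        return enumerate(thing)     # keys are indices
    if isinstance(thing, dict):
        return iter(thing.items())  # keys are, well, keys
    raise TypeError("no keyed iterator for %r" % type(thing))

def increment_all(thing):
    # Analogue of (for (elt key) in thing do (setf (ref thing key) (+ elt 1)))
    for key, value in keyed_iterator(thing):
        thing[key] = value + 1
    return thing
```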
All this code, including the support macros (like with-gensym), is available at http://www.flownet.com/ron/lisp in the files utilities.lisp and dictionary.lisp. (There's lots of other cool stuff in there too, but that will have to wait for another day.)
UPDATE: I mentioned that iterators are not my idea, but didn't provide any references. There are plenty of them, but here are two to get you started if you want to read what people who actually know what they are talking about have to say about this:
http://okmij.org/ftp/Scheme/enumerators-callcc.html
http://okmij.org/ftp/papers/LL3-collections-enumerators.txt
Thanks to sleepingsquirrel for pointing these out.
I also realized after reading these that I probably should have used catch/throw instead of +iterend+ to end iterations.
Finally, it's worth noting that Python has iterators built in to the language. But if it didn't, you couldn't add them yourself like you can in Common Lisp. It can be done in Arc, but it's harder (and IMO less elegant) because Arc does not have generic functions, so you have to essentially re-invent that functionality yourself.
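To illustrate that last point: hooking a new type into Python's built-in protocol is just a matter of defining __iter__ (a toy example below). What you could not do, if the protocol didn't already exist, is teach the for statement new tricks the way the FOR macro above does.

```python
class Countdown:
    """A toy type made iterable simply by defining __iter__ as a generator."""
    def __init__(self, n):
        self.n = n
    def __iter__(self):
        k = self.n
        while k > 0:
            yield k
            k -= 1
```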
Saturday, February 02, 2008
What Python gets right
In my previous post I remarked that "the language itself actually has an awful lot to recommend it, and that there is a lot that the Lisp world could learn from Python." A couple of people asked about that, so here goes:
I like the uniformity of the type system, and particularly the fact that types are first-class data structures, and that I can extend primitive types. I used this to build an ORM for a web development system that I used to build the first revision of what eventually became Virgin Charter. The ORM was based on statically-typed extensions to the list and dict data types called listof and dictof. listof and dictof are meta-types which can be instantiated to produce statically typed lists and dicts, e.g.:
>>> listof(int)
<type 'list_of<int>'>
>>> l=listof(int)()
>>> l.append(1)
>>> l.append(1.2)
Warning: coerced 1.2 to <type 'int'>
>>> l.append("foo")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "container.py", line 126, in append
value = check_type(value, self.__element_type__)
File "container.py", line 78, in check_type
raise TypeError('%s cannot be coerced to %s' % (value, _type))
TypeError: foo cannot be coerced to <type 'int'>
>>>
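I haven't shown the container.py source, but the core of such a meta-type is easy to sketch: a function that manufactures a list subclass whose append coerces or rejects. Everything below (names included) is illustrative rather than the actual ORM code, and the coercion-warning machinery is omitted:

```python
def listof(element_type):
    """Return a new list subclass whose elements must be of element_type."""
    class TypedList(list):
        __element_type__ = element_type
        def append(self, value):
            if not isinstance(value, element_type):
                try:
                    value = element_type(value)  # e.g. coerce 1.2 to 1
                except (TypeError, ValueError):
                    raise TypeError('%s cannot be coerced to %s'
                                    % (value, element_type))
            super().append(value)
    TypedList.__name__ = 'list_of<%s>' % element_type.__name__
    return TypedList
```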
I also extended the built-in int type to make range-bounded integers:
>>> l=listof(rng(1,10))()
>>> l.append(5)
Warning: coerced 5 to <type int between 1 and 10>
>>> l.append(20)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "container.py", line 126, in append
value = check_type(value, self.__element_type__)
File "container.py", line 78, in check_type
raise TypeError('%s cannot be coerced to %s' % (value, _type))
TypeError: 20 cannot be coerced to <type int between 1 and 10>
I've also got a range-bounded float type, and a length-limited string type.
All of these types have automatic translations into SQL, so to make them persistent you don't have to do any work at all. They automatically generate the right SQL tables and insert/update commands to store and retrieve themselves from a SQL database.
You can't do this in Common Lisp because built-in types are not part of the CLOS hierarchy (or, to be more specific, built-in types do not have standard-object as their meta-type).
You can actually do a surprising range of macro-like things within Python syntax. For example, my ORM has a defstruct function for defining structure types with statically-typed slots. Its syntax uses Python's keyword arguments to define slots, e.g.:
defstruct('mystruct', x=1, y=2.3, z=float)
This defines a structure type with three slots. x is constrained to be an integer with a default value of 1. y is constrained to be a float with a default value of 2.3. z is constrained to be a float, but because it has no default value it can also be None.
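Again without showing the actual ORM, the keyword trick might look something like this stripped-down sketch (no SQL generation, and the type checking is left out; a slot spec is either a default value or a bare type meaning "no default"):

```python
def defstruct(name, **slots):
    """Make a class whose slots come from keyword arguments: a value is a
    default (and implies its type); a bare type means the default is None."""
    def __init__(self, **kwargs):
        for slot, spec in slots.items():
            default = None if isinstance(spec, type) else spec
            setattr(self, slot, kwargs.get(slot, default))
    return type(name, (object,), {'__init__': __init__})
```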
If you look at SQLAlchemy and Elixir they take this sort of technique to some really scary extremes.
I like Python's slice notation and negative indexing. Being able to write x[1:-1] is a lot nicer than (subseq x 1 (- (length x) 1)).
I like the fact that hash tables have a built-in syntax. I also like the fact that their being hash tables is mostly hidden behind an abstraction.
I like list comprehensions and iterators.
A lot of these things can be added to Common Lisp without too much trouble. (I actually have CL libraries for iterators and abstract associative maps.) But some of them can't, at least not easily.
Just for the record, there are a lot of things I don't like about Python, starting with the fact that it isn't Lisp. I think syntactically-significant whitespace is a *terrible* idea. (I use a programming style that inserts PASS statements wherever they are needed so that emacs auto-indents everything correctly.) I don't like the fact that importing from modules imports values rather than bindings. And I don't like the fact that I can't optionally declare types in order to make my code run faster.
What are programming languages for?
I was struggling with how to organize that followup when I serendipitously saw that Paul Graham had posted a refutation of my criticisms of Arc. I must say that, given the volume of discussion about Arc, that Paul chose my particular criticism as worthy of refutation really made my day. And reading Paul's posting, as so often happens, helped some of my own ideas gel.
The title of this post is taken from a 1989 paper by Phil Agre and David Chapman entitled What are Plans For? Unfortunately, that paper seems to have fallen into obscurity, but it had an enormous influence on me at the time. The paper questioned the then-canonical view that plans are essentially programs to be executed more or less open-loop by a not-very-interesting (from an academic point of view) execution engine. It proposed an alternate point of view that plans should be considered as generalized information resources for a complex (and therefore academically interesting) execution engine. That led to a number of researchers (myself included) more or less contemporaneously putting this idea into practice, which was widely regarded at the time as considerable progress.
The point of this story is that sometimes the best way to make progress is not to just dig in and work, but to step back and question fundamental assumptions. In this case, I think it's worthwhile asking the question: what are programming languages for? Because there are a lot of tacit (and some explicit) answers to that question that I think are actually constraining progress.
The question is not often asked because the answer at first blush seems to be obvious: programming languages are for writing programs. (Duh!) And so the obvious metric for evaluating the quality of a programming language is: how easy does it make the job of writing programs? On this view, Paul's foundational assertion seems entirely reasonable:
I used what might seem a rather mundane test: I worked on things that would make programs shorter. Why would I do that? Because making programs short is what high level languages are for. It may not be 100% accurate to say the power of a programming language is in inverse proportion to the length of programs written in it, but it's damned close.
If you accept this premise, then Paul's approach of building a language by starting with Lisp and essentially Huffman-coding it makes perfect sense. If shorter-is-better, then getting rid of the odd extraneous paren can be a big win.
The reason Paul and I disagree is not that I question his reasoning, but that I question his premise. Shorter is certainly better all else being equal, but all else is not equal. I submit that there are (or at least ought to be) far more important considerations than brevity for a programming language, especially a programming language designed, as Arc is, for exploratory programming.
One indication that Paul's premise is wrong is to push it to its logical conclusion. For concise programs it's really hard to beat APL. But the drawback to APL is so immediately evident that the mere mention of the language is usually enough to refute the extreme version of the short-is-better argument: APL programs are completely inscrutable, and hence unmaintainable. And so conciseness has to be balanced at least to some extent with legibility. Paul subscribes to Abelson and Sussman's admonition that "Programs should be written for people to read, and only incidentally for machines to execute." In fact, Paul believes this so strongly that he thinks that the code should serve as the program's specification. (Can't find the citation at the moment.)
So there's this very delicate balance to be struck between brevity and legibility, and no possible principle for how to strike it because these are incommensurate quantities. Is it really better to shrink SETF down to one character (=) and DEFINE and DEFMACRO down to 3 each (DEF and MAC) than the other way around? For that matter, why have DEF at all? Why not just use = to define everything?
There's an even more fundamental problem here, and that is that legibility is a function of a person's knowledge state. Most people find APL code inscrutable, but not APL programmers. Text from *any* language, programming or otherwise, is inscrutable until you know the language. So even if you could somehow come up with a way to measure the benefits in legibility of the costs of making a program longer, where the local maximum was would depend on who was doing the reading. Not only that, but it would change over time as the reader got more proficient. (Or less. There was a time in my life when I knew how to solve partial differential equations, but I look back at my old homework and it looks like gobbledygook. And yet it's in my handwriting. Gives you some idea of how old I am. We actually wrote things with our hands when I was in school.)
There's another problem with the shorter-is-better premise, which is that the brevity of a program is much more dependent on the available libraries than on the structure of the language. If what you want to do is part of an available library then the code you have to write can be very short indeed, even if you're writing in Cobol (which is notoriously wordy). Contrariwise, a web server in APL would probably be an awful lot of work, notwithstanding that the language is the very caricature of concision.
I submit that what you want from a programming language is not one that makes programs shorter, but one that makes programs easier to create. Note that I did not say easier to write, because writing is only one part of creating a program. In fact, it is far from clear that writing is invariably the best way to create a program. (In fact, it is not entirely clear that the whole concept of program is even a useful one but that is a topic for another day.) The other day I built a program that does some fairly sophisticated image processing, and I did it without writing even a single line of code. I did it using Quartz Composer, and if you haven't ever tried it you really should. It is quite the eye-opening experience. In ten minutes I was able to build a program that would have taken me weeks or months (possibly years) to do any other way.
Now, I am not saying that Quartz Composer is the Right Thing. I am actually not much of a fan of visual programming languages. (In fact, I am in certain circles a notorious critic of UML, which I consider one of the biggest steps backward in the history of software engineering.) I only want to suggest that the Right Thing for creating programs, whatever it turns out to be, may involve an interaction of some form other than typing text. But if you adopt shorter-is-better as your premise you completely close the door in even considering that as a possibility, because your metric is only applicable to text.
There is another fundamental reason for questioning shorter-is-better, especially for exploratory programming. Exploratory programming by definition is programming where you expect to have to change things after you have written them. Doesn't it make sense then to take that into account when choosing a quality metric for a language designed to support exploratory programming? And yet, Paul writes:
The real test of Arc—and any other general-purpose high level language—is not whether it contains feature x or solves problem y, but how long programs are in it.
Built in to this is the tacit assumption that a shorter program is inherently easier to change, I suppose because there's simply less typing involved. But this is clearly not true. Haskell is also a very concise language, but making changes to Haskell code is notoriously difficult. (For that matter, writing Haskell code to begin with is notoriously difficult.)
Cataloging all the language features that potentially make change easier would take me far afield here. My purpose here is just to point out that the source of the disagreement between me and Paul is simply the premise that shorter-is-better. Paul accepts that premise. I don't.
So what are programming languages for? They are (or IMO should be) for making the creation of programs easier. Sometimes that means making them shorter so you can do less typing, but I submit that that is a very superficial criterion, and not one that is likely by itself to serve you well in the long run. Sometimes investing a little more typing can pay dividends down the road, like making you do less typing when you change your mind and decide to use a hash table instead of an association list.
One thing that many people found unsatisfying about my how-I-lost-my-faith posting is that I never really got around to explaining why I lost my faith other than saying that I saw people being productive in other languages. Sorry to disappoint, but that was basically it. What I think needs clarification is exactly what faith I lost. I did not lose faith in Lisp in the sense that it stopped being my favorite programming language. It didn't (notwithstanding that I switched to Python for certain things -- more on that in a moment). What I lost faith in was that Lisp was the best programming language for everyone (and everything), and that the only reason that people didn't use Lisp is that they were basically ignorant. My faith was that once people discovered Lisp then they would flock to it. Some people (hi Kenny!) still believe that. I don't.
The reason I switched to Python was that, for me, given the totality of the circumstances at the time, it was (and still is, though that may be changing) easier for me to build web sites in Python than it was in Lisp. And one of the big reasons for that had nothing to do with the language per se. It had to do with this. 90% of the time when I need to do something in Python all I have to do is go to that page and in two minutes I can find that someone has already done it for me.
Now, Lispers invariably counter that Lisp has all these libraries too, and they may be right. But the overall experience of trying to access library functionality in Python versus Lisp is night and day because of Python's "batteries included" philosophy. To access library functionality in Lisp I first have to find it, which is no small task. Then I often have to choose between several competing implementations. Then I have to download it, install it, find out that it's dependent on half a dozen other libraries and find and install those, then figure out why it doesn't work with my particular implementation... it's a freakin' nightmare. With Python I just type "import ..." and it Just Works. And yes, I know that Python can do this only because it's a single-implementation language, but that's beside the point. As a user, I don't care why Python can do something, I just care that it can.
(BTW, having adopted Python, I find that the language itself actually has an awful lot to recommend it, and that there is a lot that the Lisp world could learn from Python. But that's a topic for another day.)
Let me close by reiterating that I have the highest respect for Paul. I admire what he's doing and I wish him much success (and I really mean that -- I'm not just saying it because I'm angling for an invitation to a YC lunch). But I really do think that he's squandering a tremendous opportunity to make the world a better place by basing his work on a false premise.
UPDATE: A correction and a clarification:
A lot of people have commented that making changes to Haskell code is not hard. I concede the point. I was writing in a hurry, and I should have chosen a better example (Perl regexps perhaps).
Others have pointed out that Paul's program-length metric is node count, not character count, and so APL is not a fair comparison. I have two responses to that. First, APL code is quite short even in terms of node count. Second, Paul may *say* he's only interested in node count, but the names he's chosen for things so far indicate that he's interested in parsimony at the character level as well (otherwise why not e.g. spell out the word "optional" instead of simply using the letter "o"?)
In any case, even node count is a red herring because it begs the question of where you draw the line between "language" and "library" and "program" (and, for that matter, what you consider a node). I can trivially win the Arc challenge by defining a new language (let's call it RG) which is written in Arc in the same way that Arc is written in Scheme. RG consists entirely of one macro: (mac arc-challenge-in-one-node () '([insert code for Arc challenge here])) Now the RG code for the Arc challenge consists of one node, so RG wins over Arc.
And to those who howl in protest that that is cheating I say: yes, that is precisely my point. RG is to Arc exactly what Arc is to Scheme. There's a lot of stuff behind the scenes that allows the Arc challenge code in Arc to be as short as it is, and (and this is the important point) it's all specific to the particular kind of task that the Arc challenge is. Here's a different kind of challenge to illustrate the point: write a program that takes a stream of images from a video camera, does edge-detection on those images at frame rates, and displays the results. Using Quartz Composer I was able to do that in about ten minutes with zero lines of code. By Paul's metric, that makes Quartz Composer infinitely more powerful than Arc (or any other programming language for that matter).
So the Arc challenge proves nothing, except that Arc has a wizzy library for writing certain kinds of web applications. But it's that *library* that's cool, not the language that it's written in. A proper Arc challenge would be to reproduce that library in, say, Common Lisp or Python, and compare how much effort that took.
Friday, February 01, 2008
What Arc gets right
The main thing I like about Arc is that it's a Lisp, which is to say, it uses S-expression syntax to express programs as lists. I agree with Paul that this is one of the most colossally good ideas ever to emerge from a human brain. S-expressions are the Right Thing.
The next thing I think I like about Arc (I say "I think" because this capability is only hinted at in the current documentation) is the way it tries to integrate into the web, and in particular, the way it seems to fold control flow for web pages into the control flow for the language (like this for example). This seems to have grown out of Paul's Viaweb days, when he first used the idea of storing web continuations as closures. Integrating that idea into the language and having the compiler automagically generate and manage those closures behind the scenes could be a really big win.
I like the general idea of getting rid of unnecessary parens, though I think this is not as big a deal as Paul seems to think it is.
I like the general idea of making programs concise, though I think this is fraught with peril (APL anyone?) and Arc could do a better job in this regard than it does. (More on that in a later post.)
I think the kerfuffle about Arc's lack of support for unicode is much ado about nothing. That's obviously a first-draft issue, and easy enough to fix.
Finally, I like that Arc is generating a lot of buzz. Getting people interested in Lisp -- any Lisp -- is a Good Thing.
Thursday, January 31, 2008
My take on Arc
Some quick background: I am as big a fan of Lisp as you could ever hope to find. I've been using Lisp since 1979 (my first Lisp was P-Lisp on an Apple ][ ) and I used it almost exclusively for over twenty years until I lost my faith and switched, reluctantly, to Python (and I was not alone). Recently I have taken up Lisp again since the release of Clozure Common Lisp. I am proud of the fact that my login on Reddit, YC News and Arclanguage.org is Lisper.
Furthermore, I am a huge Paul Graham fan. I think his essays are brilliant. I think Y Combinator is brilliant (to the point where I'm seriously considering moving from LA to the Silicon Valley just so I can go hang out there). Paul is the kind of guy I wish I could be but can't.
And to round out the preliminaries and disclaimers, I am mindful of the fact that the current release of Arc is a first draft, and it's never possible to live up to the hype.
I think all the enthusiasm and buzz about Arc is wonderful, but I am concerned what will happen if people start to think that there's no there there. If Arc doesn't save Lisp it's hard to imagine what would. And unfortunately, I think Arc has some quite serious problems.
The biggest problem with Arc is that it is at the moment not much more than a (very) thin layer on top of Scheme. Now, that would be OK if it were the right thin layer, but I don't think it is. Arc's design path is well-trod, and the pitfalls that lie upon it are mostly well known.
For example, Arc is 1) a Lisp-1 that 2) uses unhygienic macros and 3) does not have a module system. This is bound to lead to problems when programs get big, and not because people forget to put gensyms (which Arc dubs "uniqs") in the right place (although I predict that will be a problem too). The problem is that in a Lisp-1, local variable bindings can shadow global function names, and so if you use a macro M that references a global function F in a context where F is shadowed then M will fail. If you're lucky you'll get an error. If you're not lucky your program will just do some random weird thing. Hygienic macros were not invented just because some intellectuals in an ivory tower wanted to engage in some mathematical masturbation. This is a real problem, and the larger your code base the more real it becomes.
I cite this problem first because macros are, according to Paul Graham, the raison d'être for Lisp. Macros are the reason for putting up with all those irritating parentheses. And macros in Arc are broken.
Unfortunately, it gets worse.
Arc is supposed to be a language for "exploratory programming" so it's supposed to save you from premature commitments. From the Arc tutorial:
Lists are useful in exploratory programming because they're so flexible. You don't have to commit in advance to exactly what a list represents. For example, you can use a list of two numbers to represent a point on a plane. Some would think it more proper to define a point object with two fields, x and y. But if you use lists to represent points, then when you expand your program to deal with n dimensions, all you have to do is make the new code default to zero for missing coordinates, and any remaining planar code will continue to work.
There are a number of problems with this. First, the kind of flexibility that Paul describes here is not unique to lists. You could accomplish the exact same thing with, for example, a Python object with named slots (which is just a thin wrapper for an abstract associative map -- note that I did not say hash table here. More on this later.) You could have 2-D points with X and Y slots, and 3-D points with X, Y and Z slots. You could even do the 2-D points-return-zero-for-their-nonexistent-Z-slot trick by redefining the __getattr__ method for the 2-D point class. Python objects are every bit as flexible as lists, except that it's a lot easier to figure out that a <2-D point instance> is a two-dimensional point than that a list with two elements is. You can't even assume that a 2-D point will be a list of two numbers, because Paul goes on to suggest:
Or if you decide to expand in another direction and allow partially evaluated points, you can start using symbols representing variables as components of points, and once again, all the existing code will continue to work.
"All of your existing code will continue to work" only if your existing code isn't built with the tacit assumption that the coordinates of a point are numbers. If you've tried to do math on those coordinates then you're out of luck.
Which brings me to my next point...
Lisp lists are quite flexible, but they are not infinitely malleable. They are a "leaky abstraction." The fact that Lisp lists are linked lists (and not, for example, vectors, as Python lists are) is famously exposed by the fact that CDR is a primitive (and non-consing) operation. Make no mistake, linked lists are monstrously useful, but there are some things for which they are not well suited. In particular, Nth is an O(n) operation on linked lists, which means that if you want to do anything that involves random access to the elements of a list then your code will be slow. Paul recognizes this, and provides hash tables as a primitive data structure in Arc (the lack of which has been a notable shortfall of Scheme). But then he backpedals and advocates association lists as well:
This is called an association list, or alist for short. I once thought alists were just a hack, but there are many things you can do with them that you can't do with hash tables, including sort them, build them up incrementally in recursive functions, have several that share the same tail, and preserve old values.
First, hash tables can be sorted, at least in the sense that association lists can be sorted. Just get a list of the keys and sort them. Or create a sorted-hash-table that maintains an adjunct sorted list of keys. This is not rocket science. But that is not the problem.
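The sort-the-keys approach is a one-liner in, say, Python (the table contents here are made up for illustration):

```python
# "Sorting" a hash table in the same sense that an alist can be sorted:
# pull out the key/value pairs (or just the keys) and sort them.
table = {"pear": 3, "apple": 1, "mango": 5}

as_sorted_alist = sorted(table.items())  # alist-style pairs, ordered by key
sorted_keys = sorted(table)              # just the keys, in order
```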
The problem is that the functions for accessing association lists are different from those used to access hash tables. That means that if you write code using one you cannot pass in the other, which completely undermines the whole idea of using Arc as an exploratory language. Arc forces you into a premature optimization here.
The Right Thing if you want to support exploratory programming (which to me means not forcing programmers into premature commitments and optimizations) is to provide an abstract associative map whose underlying implementation can be changed. To make this work you have to commit to a protocol for associative maps that an implementation must adhere to. The trouble is that designing such a protocol is not such an easy thing to do. It involves compromises. For example, what should the implementation do if an attempt is made to dereference a non-existent key? Throw an exception? Return some unique canonical value? Return a user-supplied default? Each possibility has advantages and disadvantages. The heavy lifting in language design is making these kinds of choices despite the fact that there is no One Right Answer (or even if there is, that it may not be readily apparent).
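Here is a minimal sketch of what such a protocol might look like, in Python rather than Arc, with hypothetical names: one abstract interface, two interchangeable implementations (alist-backed and hash-table-backed), and one committed answer to the missing-key question (return a user-supplied default).

```python
# A sketch of an abstract associative map protocol. Callers program against
# AssocMap and never commit to a representation; the missing-key policy
# chosen here (return a caller-supplied default) is one of several options.
from abc import ABC, abstractmethod

class AssocMap(ABC):
    @abstractmethod
    def put(self, key, value): ...

    @abstractmethod
    def get(self, key, default=None): ...

class AlistMap(AssocMap):
    """Association-list style: a list of pairs, newest binding first."""
    def __init__(self):
        self.pairs = []

    def put(self, key, value):
        self.pairs.insert(0, (key, value))   # old values are preserved

    def get(self, key, default=None):
        for k, v in self.pairs:
            if k == key:
                return v
        return default

class HashMap(AssocMap):
    """Hash-table style: old values are overwritten."""
    def __init__(self):
        self.table = {}

    def put(self, key, value):
        self.table[key] = value

    def get(self, key, default=None):
        return self.table.get(key, default)

def lookup_or_zero(m, key):
    # Works with either implementation -- no premature commitment.
    return m.get(key, 0)
```

Swap one implementation for the other and `lookup_or_zero` doesn't change, which is exactly the property Arc's split between alist functions and hash-table functions gives up.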
And that is my main gripe about Arc: it has been so long in the making, and set such lofty goals, and yet it seems to punt on all the hard problems of language design.
Now, as I said, I am mindful of the fact that this is just a first draft. But some Big Decisions do seem to have been made. In particular, it seems a safe bet that Arc will not have an OO layer, which means no generic functions, no abstract data types, and hence no way to build reliable protocols of the sort that would be needed to eliminate the kinds of forced premature optimizations that Arc currently embraces. It also seems a safe bet that it will remain a Lisp-1 without hygienic macros (because Paul seems to regard both hygiene and multiple name spaces as Horrible Hacks). Whether it gets a module system remains to be seen, but it seems doubtful. If you think designing a protocol for abstract associative maps is hard, you ain't seen nothin' yet.
So there it is. Paul, if you're reading this, I'm sorry to harsh on your baby. I really hope Arc succeeds. But I gotta call 'em as I see 'em.
Thursday, January 24, 2008
What the fuck is wrong with this country?
A mother whose two teenage daughters were placed in an orphanage when she fell ill during a post-Christmas shopping trip to New York has been told she is under investigation because her children were taken into care.
Yvonne Bray took her daughters Gemma, 15, and Katie, 13, to New York shortly after Christmas for a shopping trip but was taken into hospital when she fell ill with pneumonia during their visit.
The girls were then told they could not wait at the hospital and as minors would have to be taken into care.
Social workers took them to a municipal orphanage in downtown Manhattan, where they were separated, strip-searched and questioned before being kept under lock and key for the next 30 hours.
The two sisters were made to shower in front of security staff and told to fill out a two-page form with questions including: "Have you ever been the victim of rape?" and "Do you have homicidal tendencies?"
One question asked "are you in a street gang?" to which Gemma replied: "I'm a member of Appledore library."
Their clothes, money and belongings were taken and they were issued with regulation white T-shirt and jeans. Katie said: "It was like being in a little cage. I tried to go to sleep, but every time I opened my eyes, someone was looking right at me."
Eventually Bray discharged herself, and - still dressed in hospital pyjamas - tracked down the girls.
She said: "It is absolutely horrendous that two young girls were put through an ordeal like that. They were made to answer traumatic questions about things they don't really understand and spend over 24 hours under surveillance."
Since returning home, Bray has received a letter from the US Administration for Children and Families, notifying her that, because the children were admitted to the orphanage, she is now "under investigation."
I've copied the entire article in blatant disregard of fair-use doctrine because this is one story that really ought not be lost to a stale link.
As far as I can tell, not a single US media outlet has picked up the story. Not that I'm surprised by that.
Monday, January 14, 2008
Ford's new strategy: treat your customers like shit
...a law firm representing Ford contacted [the Black Mustang Club] saying that our calendar pics (and our club's event logos - anything with one of our cars in it) infringes on Ford's trademarks which include the use of images of THEIR vehicles. Also, Ford claims that all the images, logos and designs OUR graphics team made for the BMC events using Danni are theirs as well.
<irony>
Seems like an effective business strategy to me. Sure makes me want to run right out and buy a Mustang.
</irony>
John McCain was right. Some of those jobs aren't coming back to Detroit.
A Really Bad Idea (tm)
"In Sony's vision of the future, any two consumer devices will be able to exchange data wirelessly with one another simply by holding them close together. The system is designed for maximum ease of use, which means limited options for controlling the transfers; devices will transfer their contents automatically to another device within range."
Someone at Sony didn't think this through.
Friday, January 11, 2008
Nice work if you can get it
Of course, it's really the Countrywide board of directors that is to blame here. I could have bankrupted Countrywide just as well as Mozilo did (if not better), and I would have happily done it for a mere $50 million.
Let me see your papers redux
Monday, January 07, 2008
Good news, bad news...
Bad news is it's marijuana.
Wednesday, January 02, 2008
If the swastika fits...
John Deady, the co-chair of New Hampshire Veterans for Rudy [Giuliani], is standing by the comments he made in the controversial interview with The Guardian we posted on below, in which he said that "the Muslims" need to be chased "back to their caves."
Here's the scary bit:
When I asked Deady to elaborate on his suggestion that we need to "get rid" of Muslims, Deady said:
"When I say get rid of them, I wasn't necessarily referring to genocide....
Wasn't necessarily referring to genocide? So he might have been referring to genocide?
Even the German Nazis (it's a sad commentary on the state of the world that I need to qualify the term now) were more discreet about their plans to murder all the Jews than this bozo.
Of course, the really scary thing is not John Deady, it's that he can say things like this and not get run out of town on a rail. Does no one remember that we fought a war not too long ago against people like this?