Wednesday, February 27, 2008

I suppose they think setting the building on fire for an evacuation drill would be a good idea too

Idiocy really knows no bounds these days.


An armed man who burst into a classroom at Elizabeth City State University was role-playing in an emergency response drill, but neither the students nor assistant professor Jingbin Wang knew that.

The Friday drill, in which a mock gunman threatened panicked students in the American foreign policy class with death, prompted university officials to apologize this week to Wang and offer counseling to faculty and students.


I suspect they're going to be offering a lot more than counseling before this is through.

Complete story [here].

Friday, February 22, 2008

The meaning of innovation

From the wish-I'd-thought-of-that-first department comes the coolest thing I have seen in a long, long time. It's a 3-D display made from a Nintendo Wii remote and a very clever software hack. Johnny Lee, I doff my hat.

Oh, and this is pretty damn clever too. Things like this give me real hope for the future.

Thursday, February 14, 2008

The litigant that would not die

Like a zombie from a bad horror movie that just refuses to stay dead, the SCO Group has risen from the grave thanks to a $100M infusion from Stephen Norris Capital Partners. (If anyone bothers to follow the money I'd be surprised if it doesn't lead back to Redmond.)

I don't normally wish my fellow investors ill, but in this case I'll make an exception and say that whoever put up the cash to bring SCO back from the dead deserves to lose every penny. SCO is a parasite, and it should be expunged from the world along with tapeworms, mosquitoes, and malaria protozoa.

CLS

It's quite the fashion nowadays to ascribe all manner of bad behavior to some kind of mental illness or other. In that spirit I propose that Cowardly Lion Syndrome (CLS) needs to be added to the lexicon.

CLS derives its name from the scene in the classic movie The Wizard of Oz where we are introduced to the Cowardly Lion. Not having the courage to stand up to Dorothy, Scarecrow and the Tin Man, the Cowardly Lion decides to prove his manliness (his Lionliness?) by chasing after Dorothy's little dog Toto instead.

CLS is rampant, particularly in American politics. The most recent example is Congress which, not having the courage to stand up to the Bush Administration or the Telcos, decided to chase after Roger Clemens instead. Before that it was Rush Limbaugh and Terri Schiavo and gay people and cancer patients.

Unfortunately, there is not yet a drug approved by the FDA to treat Cowardly Lion Syndrome. I wonder if Congress could work up the courage to fund a study.

Saturday, February 09, 2008

As ye sow...

There is no doubt in my mind that some day Americans will look back on this time with the same sort of shame with which we now look back on the McCarthy era. (In fact, the war on terror seems to me to be a bad remake of McCarthyism, with "terrorist" substituted for "communist", Iraq and Iran for Viet Nam and Russia, and (thank God) Keith Olbermann playing Edward R. Murrow.)

On that theory, we should be having a Joseph Welch moment one of these days. Maybe this is it.

Thursday, February 07, 2008

What's the right quality metric?

Since I'm spending so much time beating on Paul Graham I thought I should balance that by pointing out what I think Paul is doing right, particularly in light of this recent posting:


"The reason I'm focusing on the region between axioms and libraries is that, from the programmer's point of view, these operators are the language. These are what your programs are made of. If Lisp were a house, these operators would be the front door, the living room sofa, the kitchen table. Small variations in their design can greatly affect how well the language works.

I've taken a lot of heat for focusing on this"


For the record, I think focusing on this is absolutely the right thing to do, and I applaud Paul for standing up to criticism and doing it. The only thing I disagree with him on is this:


By definition, there is some optimal path from axioms up to a complete language.


This is true only with respect to a particular quality metric, and I think that the quality metric Paul has chosen is brkn.

What do I suggest instead? Glad you asked.

First, let's go back and revisit briefly the question of what programming languages are for. Paul takes as an axiom that programming languages are for making programs shorter but I say they are for making programs easier to create and maintain, which may or may not be the same thing as making them shorter. Certainly, all else being equal, shorter is better, but all else is rarely equal. Sometimes adding length buys you something, and you have to make the tough call about whether the benefit is worth the cost.

It is ironic that Paul would choose such a simplistic metric after going to such great lengths to make the point that programming is like painting. I think this is a great insight, but one of the conclusions is that assessing the quality of a program, just like assessing the quality of a painting, is not such an easy thing to do. Paul's quality metric for programs is analogous to assessing the quality of a painting by counting the number of brushstrokes, and just as absurd. It is a genuine puzzle that Paul of all people needs to be convinced of this.

Programming is all about translating thoughts in people's brains into bits. There is a huge impedance mismatch between brains and bits, and programming languages help to bridge the gap. But there is no more a one-size-fits-all quality metric for programming languages than there is for paintings, or anything else that involves thoughts. There will always be different strokes for different folks.

That is not to say that we need to descend into the black hole of postmodern relativism and say that it's all just a matter of opinion and interpretation. I think there is a criterion that can be used to effectively compare one language against another, but it is very tricky to state. Here's the intuition:

If language A can do everything that language B can do, and language A can do something that language B cannot do, then language A is superior to language B.

(Note that this criterion can produce only a partial-ordering of language quality, but I think that is both inevitable and OK.)

The problem with this as stated is that all languages are Turing-equivalent, so from a strictly mathematical point of view all programming languages are the same. What I mean by "cannot do" is what cannot be done without transcending the framework of the language. Here are some examples:

1. C cannot throw exceptions.

2. Common Lisp cannot support Arc's composition operator (both because the interpretation of an unescaped colon within a symbol name is hard-coded in the standard to have a particular meaning, and because the semantics of function calls are defined within the standard in such a way that calling a composed function would be a syntax error).

3. Arc cannot (as it currently stands) dereference an array in constant time.

4. You can't add new control constructs to Python.

The obvious drawback to my criterion as stated is that it prefers large, unwieldy, kitchen-sink languages to small, elegant languages. So I'll adopt a version of Paul's minimalist metric as a secondary criterion:

If two languages support essentially the same feature set, but language A is in some sense "smaller" than language B, then language A is superior to language B.

In other words, parsimony is desirable, but only secondarily to features (or, more precisely, a lack of constraints). On this view, it is clear why macros are such a big win: macros are a sort of "meta-feature" that let you add new features, and so any language with macros has a huge leg up (in terms of my quality metric) over any language that doesn't.
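
To make that concrete, here's the kind of thing I mean (a throwaway sketch in Common Lisp; the construct and its name are mine, not anybody's library): a control construct added after the fact, as ordinary user code.

;; WHILE as a user-defined control construct.  You can't do this as a
;; library in a language without macros.
(defmacro while (test &body body)
  `(loop (unless ,test (return))
         ,@body))

;; (let ((i 0)) (while (< i 3) (print i) (incf i)))  ; prints 0 1 2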

So why am I so reticent about Arc? After all, Arc is pretty small, but it has macros, so it should be able to subsume other languages. And because it will subsume other languages in a minimalist way it will be the best language. Right?

Well, maybe. There are a couple of problems.

First, not all macro systems are created equal, and Arc's macro system has some known problems. How serious those problems are in practice is, perhaps, an open issue, but my quality metric doesn't take such things into account. A language that solves a problem is in my view better than a language that leaves it up to the user to solve that problem, even if that problem doesn't arise very often. (I dislike Python's syntactically-significant whitespace for the same reason, and despite the fact that it mostly works in practice.)
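
The problem most often cited for unhygienic macro systems like Arc's is accidental variable capture. Here is the classic illustration, written in Common Lisp rather than Arc since that's what I can sketch off the cuff:

;; SWAP-NAIVE introduces a variable named TMP, so it silently breaks if the
;; caller happens to be using a variable with that name.
(defmacro swap-naive (a b)
  `(let ((tmp ,a))
     (setf ,a ,b)
     (setf ,b tmp)))

;; (let ((tmp 1) (x 2)) (swap-naive tmp x) (list tmp x))  =>  (1 2), not (2 1)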

Second, there is more than one kind of macro. Common Lisp, for example, has symbol macros, which turn out to be tremendously useful for implementing things like module systems, and reader macros which let you change the lexical surface syntax of the language if you want to, and compiler macros which let you give the compiler hints about how to make your code more efficient without having to change the code itself.
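
For example (just a sketch, not a claim about how any particular module system actually does it):

;; A symbol macro: every reference to NOW quietly expands into a call to
;; GET-UNIVERSAL-TIME.  Renaming tricks like this are what symbol-macro-based
;; module systems are built on.
(define-symbol-macro now (get-universal-time))

;; A reader macro: make #[1 2 3] read as a vector literal, changing the
;; surface syntax without touching the evaluator.
(set-macro-character #\] (get-macro-character #\)))
(set-dispatch-macro-character #\# #\[
  (lambda (stream char arg)
    (declare (ignore char arg))
    (coerce (read-delimited-list #\] stream t) 'vector)))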

Finally, there are language features like type systems which, I am told by people who claim to understand them (I'm not one of them), are quite spiffy and let you do all manner of cool things.

That's the real challenge of designing the 100-year language. In order to make sure that you're not designing yet another Blub you may need to be able to support features that you don't actually understand, or maybe have never even heard of (or maybe haven't even been invented yet). To think otherwise is, IMO, to deny the vast richness of programming, indeed of mathematics itself.

This is totally unacceptable



The New York Times reports:


"An article about the Prophet Muhammad in the English-language Wikipedia has become the subject of an online protest in the last few weeks because of its representations of Muhammad, taken from medieval manuscripts.

In addition to numerous e-mail messages sent to Wikipedia.org, an online petition cites a prohibition in Islam on images of people.

The petition has more than 80,000 “signatures,” though many who submitted them to ThePetitionSite.com, remained anonymous.

“We have been noticing a lot more similar sounding, similar looking e-mails beginning mid-January,” said Jay Walsh, a spokesman for the Wikimedia Foundation in San Francisco, which administers the various online encyclopedias in more than 250 languages.

A Frequently Asked Questions page explains the site’s polite but firm refusal to remove the images: “Since Wikipedia is an encyclopedia with the goal of representing all topics from a neutral point of view, Wikipedia is not censored for the benefit of any particular group.”

The notes left on the petition site come from all over the world. “It’s totally unacceptable to print the Prophet’s picture,” Saadia Bukhari from Pakistan wrote in a message. “It shows insensitivity towards Muslim feelings and should be removed immediately.”"


I applaud Wikipedia for taking a firm stand on this. In solidarity with them I am posting one of the images in question hosted on my own server. It is totally unacceptable to ask for this image to be removed. It shows insensitivity towards the feelings of tolerant and freedom-loving people throughout the world (to say nothing of bad theology in light of the fact that the image in question was painted by a Muslim!) The demand that this image be removed should be withdrawn immediately.

Z SHRTR BTTR?

People seem to be falling all over themselves to take the Arc challenge.

Central to this flurry of activity is Paul Graham's quality metric, that shorter (in terms of node count) is better. In fact, Paul explicitly says "making programs short is what high level languages are for."

I have already posted an argument against this position, citing APL as an example of a very concise language that is nonetheless not widely considered to be the be-all-and-end-all of high-level languages (except, perhaps, among its adherents). Here I want to examine Paul's assumption on its own terms, and see what happens if you really take his quality metric seriously.

Paul defines his quality metric at the end of this essay:

The most meaningful test of the length of a program is not lines or characters but the size of the codetree-- the tree you'd need to represent the source. The Arc example has a codetree of 23 nodes: 15 leaves/tokens + 8 interior nodes.

He is careful to use nodes rather than characters or lines of code in order to exclude obvious absurdities like single-character variable and function names (despite the fact that he has a clear tendency towards parsimony even in this regard). So let's take a look at Arc and see if we can improve it.

One of the distinguishing features of Arc compared to other Lisps is that it has two different binding forms that do the same thing, one for the single-variable case (LET) and another for the multiple-variable case (WITH). This was done because Paul examined a large corpus of Lisp source code and found that the traditional Lisp LET was used most of the time to bind only a single variable, and so it was worthwhile making this a special case.

But it turns out that we can achieve the exact same savings WITHOUT making the single-variable case special. Let's take a traditional Lisp LET form:

(let ((var1 expr1) (var2 expr2) ...) form1 form2 ... formN)

and just eliminate the parentheses:

(let var1 expr1 var2 expr2 ... form1 form2 ... formN)

At first glance this doesn't work because we have no way of knowing where the var/expr pairs stop and the body forms start. But actually this is not true. We CAN parse this if we observe that if there is more than one body form, then the first body form MUST be a combination (i.e. not an atom) in order to be semantically meaningful. So we can tell where the var/expr pairs stop and the body forms begin by looking for the first form in a varN position that is either 1) not an atom or 2) not followed by another form. If you format it properly it doesn't even look half bad:


(let var1 expr1
     var2 expr2
     ...
  (form1 ...)
  (form2 ...)
  )

OR

(let var1 expr1
     var2 expr2
     ...
  result)


In order to avoid confusion, I'm going to rename this construct BINDING-BLOCK (BB for short). BB is, by Paul's metric, an unambiguous improvement in the design of ARC because it's the same length (in terms of nodes) when used in the single-variable case, and it's a node shorter when used in the multiple-variable case.

In fact, we can do even better by observing that any symbol in the body of a binding block that is not at the end can be unambiguously considered to be a binding because otherwise it would not be semantically meaningful. So we can simplify the following case:


(bb var1 expr1
  (do-some-computation)
  (bb var2 expr2
    (do-some-more-computation)
    ....

to:


(bb var1 expr1
  (do-some-computation)
  var2 expr2
  (do-some-more-computation)
  ....


Again, this is an unambiguous win because every time we avoid a nested binding block we save two nodes.
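
For concreteness, here is a stripped-down sketch of that parsing rule in Common Lisp (much less capable than the real BB macro, which appears in the update at the end of this post): a symbol that isn't the last form starts a binding pair; everything else is an ordinary body form.

(defmacro bb-sketch (&rest body)
  (cond ((null (rest body)) (first body))          ; last form: the result
        ((symbolp (first body))                    ; VAR EXPR binding pair
         `(let ((,(first body) ,(second body)))
            (bb-sketch ,@(cddr body))))
        (t `(progn ,(first body)                   ; ordinary body form
              (bb-sketch ,@(rest body))))))

;; (bb-sketch x 1 (print x) y 2 (+ x y))  =>  3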

But notice that something is happening that ought to make us feel just a teensy bit queasy about all these supposed "improvements": our code is starting to look not so much like Lisp. It's getting harder to parse, both for machines and humans. Logical sub-blocks are no longer delimited by parentheses, so it's harder to pick out where they start and end.

Still, it's not completely untenable to argue that binding blocks really are an improvement. (For one thing, they will probably appeal to parenthophobes.) I personally have mixed feelings about them. I kind of like how they can keep the code from creeping off to the right side of the screen, but I also like being able to e.g. select a sub-expression by double-clicking on a close-paren.

Can we do even better? It would seem that with binding-block we have squeezed every possible extraneous node out of the code. There simply are no more parentheses we can get rid of without making the code ambiguous. Well, it turns out that's not true. There is something else we can get rid of to make the code even shorter. You might want to see if you can figure out what it is before reading on.

Here's a hint: Arc already includes a construct [...] which is short for (fn (_) ...).

Why stop at one implied variable? Why not have [...] be short for (fn ((o _) (o _1) (o _2) ...) ...)? Again, this is an unambiguous improvement by Paul's quality metric because it allows us to replace ANY anonymous function with a shorter version using implied variable names, not just anonymous functions with one variable.
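
Transplanted into Common Lisp, the idea would look something like this (FN* and the underscore names are made up for illustration; this is not Arc code):

;; Any anonymous function of up to four arguments can now be written without
;; naming its arguments.
(defmacro fn* (&body body)
  `(lambda (&optional _ _1 _2 _3)
     (declare (ignorable _ _1 _2 _3))
     ,@body))

;; (funcall (fn* (+ _ _1)) 1 2)  =>  3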

And now we should be feeling very queasy indeed. On Paul's quality metric, all those variable names are expensive (every one requires a node) and yet I hope I don't have to convince you that getting rid of them is not an unalloyed good. If you doubt this, consider that we can use the same argument convention for non-anonymous functions as well, and replace the lambda-list with a single number indicating the number of arguments:

(def foo 3 (do-some-computation-on _1 _2 _3))

Again, an unambiguous improvement by Paul's metric.

We can extend this to binding blocks as well, where we replace variable names with numbers as well. In fact, we don't even need to be explicit about the numbers! We can just implicitly bind the result of every form in a binding block to an implicitly named variable! Again, on Paul's quality metric this would be an unambiguous improvement in the language design.

All this is not as absurd as it might appear. If we push this line of reasoning to its logical conclusion we would end up with a language that very closely resembled Forth. Forth is a fine language. It has its fans. It's particularly good at producing very small code for use on very small embedded processors, so if you need to program an 8-bit microcontroller with 4k of RAM it's not a bad choice at all. But God help you if you need to debug a Forth program. If you think Perl is write-only, you ain't seen nothin' yet.

The flaw in the reasoning is obvious: adding nodes to program source has a cost to be sure, but it can also provide benefits in terms of readability, flexibility, maintainability, editability, reusability... and there's no way to make those tradeoffs unambiguously because they are incommensurate quantities. That is why language design is hard.

Now, I don't really mean to rag on Arc. I only want to point out that if Arc succeeds (and I think it could, and I hope it does) it won't be because it made programs as short as they can possibly be. It will be because it struck some balance between brevity and other factors that appeals to a big enough audience to reach critical mass. And I think Paul is smart enough and charismatic enough to gather that audience despite the fact that there might be a discrepancy between Paul's rhetoric and Arc's reality. But IMO the world would be a better place (and Arc would be a better language) if the complex tradeoffs of language design were taken more seriously.

[UPDATE:]

Sami Samhuri asks:

How do you account for the following case?


(let i 0
     (x y) (list 1 2)
  (prn "i is " i)
  (prn "x is " x)
  (prn "y is " y))


Well, the easiest way is (bb i 0 x 1 y 2 ...

;-)

Or, less glibly: (bb i 0 L (something-that-returns-a-list) x (1st L) y (2nd L) ...

Of course, if you want to support destructuring-bind directly you need some kind of marker to distinguish trees of variables from forms. In this case, Arc's WITH syntax, which essentially uses a set of parens to be that marker, becomes tenable again (though it doesn't actually support destructuring even though the syntax could easily be extended to support it). Another possibility is to overload the square-bracket syntax, since that also would be a no-op in a binding position. Now things start to get even more complicated to parse, but hey, shorter is better, right?

But what happens if some day you decide that multiple values would be a spiffy feature to add to the language? Now you're back to needing a marker.

Personally, in my BB macro (in Common Lisp) I use the :mv and :db keywords as markers for multiple-value-bind and destructuring-bind. So your example would be:


(bb i 0
    :db (x y) (list 1 2)
    ; and just for good measure:
    :mv (q s) (round x y)
    ...


I also use a prefix "$" character to designate dynamic bindings, which tends to send CL purists into conniptions. :-)

And to top it off, I use the :with keyword to fold in (with-X ...) forms as well. The result is some very un-Lispy-looking Lisp, but which shorter-is-better advocates ought to find appealing.

Here's the code for BB in case you're interested:


;;; Binding Block
(defmacro bb (&rest body)
  (cond
    ((null (rst body)) (fst body))
    ((consp (1st body))
     `(progn ,(1st body) (bb ,@(rst body))))
    ((not (symbolp (1st body)))
     (error "~S is not a valid variable name" (1st body)))
    ((eq (1st body) ':mv)
     (if (symbolp (2nd body))
       `(let ((,(2nd body) (multiple-value-list ,(3rd body))))
          (bb ,@(rrrst body)))
       `(multiple-value-bind ,(2nd body) ,(3rd body)
          (bb ,@(rrrst body)))))
    ((eq (1st body) :db)
     `(destructuring-bind ,(2nd body) ,(3rd body)
        (declare (special ,@(find-specials (2nd body))))
        (bb ,@(rrrst body))))
    ((eq (1st body) :with)
     `(,(concatenate-symbol 'with- (2nd body)) ,(3rd body) (bb ,@(rrrst body))))
    ((keywordp (1st body))
     (error "~S is not a valid binding keyword" (1st body)))
    (t `(let ((,(1st body) ,(2nd body)))
          (declare (special ,@(find-specials (1st body))))
          (bb ,@(rrst body))))))


Note that it would be straightforward to extend this to allow users to define their own binding keywords, which would really help things to spin wildly out of control :-)
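
For what it's worth, here's roughly what that extension might look like (hypothetical; it is not part of the BB macro above, whose keyword clause would have to consult the table before signalling an error):

(defvar *bb-keywords* (make-hash-table))

(defmacro def-bb-keyword (name (vars form rest) &body expander)
  `(setf (gethash ,name *bb-keywords*)
         (lambda (,vars ,form ,rest) ,@expander)))

;; Example: :mv reimplemented as a user-defined binding keyword.
(def-bb-keyword :mv (vars form rest)
  `(multiple-value-bind ,vars ,form (bb ,@rest)))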

Sunday, February 03, 2008

The joy of iterators

Since I've been taking so many pot-shots at Arc I thought I'd post one of my own ideas so that people can shoot at me for a change. (Well, it's not really my idea, but it's an idea that I advocate.) I'll motivate this with a multi-step puzzle: first, write a function that can accept either a (linked) list or a vector and print out its elements. Here's one solution in Common Lisp:


(defun print-elements (thing)
  (etypecase thing
    (list (loop for element in thing do (print element)))
    (vector (loop for element across thing do (print element)))))


This is not particularly elegant. The amount of repetition between the two clauses in the etypecase is pretty annoying. We need to copy almost the entire LOOP clause just so we can change "in" to "across". How can we make this prettier?

The usual fix for repetitive code is to use a macro, e.g.:


(defmacro my-loop1 (var keyword thing &body body)
  `(loop for ,var ,keyword ,thing do ,@body))


But this hardly helps at all (and arguably makes the situation worse):


(defun print-elements (thing)
  (etypecase thing
    (list (my-loop1 element in thing (print element)))
    (vector (my-loop1 element across thing (print element)))))


We could do this:


(defmacro my-loop2 (keyword)
  `(loop for element ,keyword thing do (print element)))

(defun print-elements (thing)
  (etypecase thing
    (list (my-loop2 in))
    (vector (my-loop2 across))))


But that's not a very good solution either. We're not likely to get much re-use out of my-loop2, the code overall is not much shorter, and it feels horribly contrived.

Here's another attempt, taking advantage of the fact that Common Lisp has a few functions built-in that are generic to sequences (which include lists and vectors):


(defun print-elements (thing)
  (loop for i from 0 below (length thing) do (print (elt thing i))))


That looks better, but it's O(n^2) when applied to a list, which is not so good. And in any case it completely falls apart when we suddenly get a new requirement:

Extend print-elements so that it can accept a hash table, and print out the hash table's key-value pairs.

Now the first solution is suddenly looking pretty good, because it's the easiest to extend to deal with this new requirement. We just have to add another line to the etypecase:


(defun print-elements (thing)
  (etypecase thing
    (list (loop for element in thing do (print element)))
    (vector (loop for element across thing do (print element)))
    (hash-table (loop for key being the hash-keys of thing
                      do (print (list key (gethash key thing)))))))


Now we get yet another requirement:

Extend print-elements so that it can accept a file input stream, and print out the lines of the file.

I'll leave that one as an exercise.

Now, the situation seems more or less under control here, with one minor problem: every time we get a new requirement of this sort we need to change the print-elements function. That's not necessarily a problem, except that if we have more than one programmer working on implementing these requirements we have to be sure that they don't stomp on each other as they edit print-elements (but modern revision-control systems should be able to handle that).

But now we get this requirement:

Extend print-elements so that it takes as an optional argument an integer N and prints the elements N at a time.

Now all our hard work completely falls apart, because the assumption that the elements are to be printed one at a time is woven deeply into the fabric of our code.

Wouldn't it be nice if the LOOP macro just automatically did the Right Thing for us, so that we could just write, e.g.:


(defun print-elements (thing &optional (n 1))
  (loop for elements in (n-at-a-time n thing) do (print elements)))


and be done with it? Can we make that happen? Yes, we can. Here's how:


(defconstant +iterend+ (make-symbol "ITERATION_END"))

(defmacro for (var in thing &body body)
  (with-gensym itervar
    `(let ( (,itervar (iterator ,thing)) )
       (loop for ,var = (funcall ,itervar)
             until (eq ,var +iterend+)
             ,@body))))

(defmethod iterator ((l list))
  (fn () (if l (pop l) +iterend+)))

(defmethod iterator ((v vector))
  (let ( (len (length v)) (cnt 0) )
    (fn () (if (< cnt len)
             (prog1 (elt v cnt) (incf cnt))
             +iterend+))))

(defun print-elements (thing)
  (for element in thing do (print element)))


The overall code is much longer than our original solution, but I claim that it is nonetheless a huge win. Why? Because it is so much easier to extend. In order to make print-elements work on new data types all we have to do is define an iterator method on that data type which returns a closure that conforms to the iteration protocol that we have (implicitly for now) defined. So, for example, to handle streams all we have to do is:


(define-method (iterator (s stream)) (fn () (read-char s nil +iterend+)))


We don't have to change any existing code. And in particular, we don't need to redefine print-elements.

Furthermore, we can now do this neat trick:


(defmacro n-of (form n)
  `(loop for #.(gensym "I") from 1 to ,n collect ,form))

(defun n-at-a-time (n thing)
  (let ( (iter (iterator thing)) )
    (fn () (let ((l (n-of (funcall iter) n)))
             (if (eq (car l) +iterend+) +iterend+ l)))))


It actually works:

? (for l in (n-at-a-time 2 '(1 2 3 4)) do (print l))

(1 2)
(3 4)
NIL
? (for l in (n-at-a-time 2 #(1 2 3 4)) do (print l))

(1 2)
(3 4)
NIL

Furthermore, n-at-a-time works on ANY data type that has an iterator method defined for it.

There's a fly in the ointment though. Consider this requirement:


Write a function that takes either a list or a vector of integers and destructively increments all of its elements by 1.


We can't do that using the iteration protocol as it currently stands, because it only gives us access to values and not locations. Can we extend the protocol so that we get both? Of course we can, but it's a tricky design issue because there are a couple of different ways to do it. The way I chose was to take advantage of Common Lisp's ability to let functions return multiple values. I extended the FOR macro to accept multiple variables corresponding to those multiple values. By convention, iterators return two values (except for the N-AT-A-TIME meta-iterator): the value, and a key that refers back to the container being iterated over (assuming that makes sense -- some data types, like streams, don't have such keys). This makes the code for the FOR macro and the associated ITERATOR methods quite a bit more complicated:


(defmacro for (var in thing &body body)
  (unless (sym= in :in) (warn "expected keyword 'in', got ~A instead" in))
  (with-gensym itervar
    `(let ( (,itervar (iterator ,thing)) )
       ,(if (consp var)
            `(loop for ,var = (multiple-value-list (funcall ,itervar))
                   until (eq ,(fst var) +iterend+)
                   ,@body)
            `(loop for ,var = (funcall ,itervar)
                   until (eq ,var +iterend+)
                   ,@body)))))

(define-method (iterator (v vector))
  (let ( (len (length v)) (cnt 0) )
    (fn () (if (< cnt len)
             (multiple-value-prog1 (values (elt v cnt) cnt) (incf cnt))
             +iterend+))))

(define-method (iterator (h hash-table))
  (let ( (keys (loop for x being the hash-keys of h collect x)) )
    (fn ()
      (if keys (let ( (k (pop keys)) ) (values k (gethash k h))) +iterend+))))

(defun n-at-a-time (n thing)
  (let ( (iter (iterator thing)) )
    (fn () (apply 'values (n-of (funcall iter) n)))))


Note that N-AT-A-TIME has actually gotten simpler since we no longer need to check for +iterend+. When the underlying iterator returns +iterend+ it just gets passed through automatically.

All this works in combination with another generic function called REF, which is like ELT except that, because it's a generic function, it can be extended to work on data types other than lists and vectors. The solution to the last problem is now:


(for (elt key) in thing do (setf (ref thing key) (+ elt 1)))


So we've had to do a lot of typing, but the net result is a huge win in terms of maintainability and flexibility. We can apply this code to any data type for which we can define an iterator and reference method, which means we can swap those types in and out without having to change any code. We can define meta-iterators (like N-AT-A-TIME) to modify how our iterations happen. (By default, the iterator I use for streams iterates over characters, but I have a meta-iterator called LINES that iterates over lines instead. It's easy to define additional meta-iterators that would iterate over paragraphs, or forms, or other semantically meaningful chunks.)
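
To give a flavor of how cheap these meta-iterators are, here's a sketch of what LINES might look like (not the version in utilities.lisp, and it assumes, as the N-AT-A-TIME example above implies, that calling ITERATOR on a closure just returns it):

(defun lines (stream)
  (fn () (read-line stream nil +iterend+)))

;; (with-open-file (s "some-file.txt")
;;   (for line in (lines s) do (print line)))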

All this code, including the support macros (like with-gensym), is available at http://www.flownet.com/ron/lisp in the files utilities.lisp and dictionary.lisp. (There's lots of other cool stuff in there too, but that will have to wait for another day.)

UPDATE: I mentioned that iterators are not my idea, but didn't provide any references. There are plenty of them, but here are two to get you started if you want to read what people who actually know what they are talking about have to say about this:

http://okmij.org/ftp/Scheme/enumerators-callcc.html

http://okmij.org/ftp/papers/LL3-collections-enumerators.txt

Thanks to sleepingsquirrel for pointing these out.

I also realized after reading these that I probably should have used catch/throw instead of +iterend+ to end iterations.
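
For the curious, that variant might look something like this (hypothetical; FOR2 and the tag name are made up, and it isn't the code this post actually uses):

(defmacro for2 (var in thing &body body)
  (declare (ignore in))
  (with-gensym itervar
    `(catch 'iteration-end
       (let ((,itervar (iterator ,thing)))
         (loop for ,var = (funcall ,itervar)
               ,@body)))))

;; An iterator then ends the loop by throwing instead of returning a sentinel:
(defmethod iterator ((l list))
  (fn () (if l (pop l) (throw 'iteration-end nil))))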

Finally, it's worth noting that Python has iterators built into the language. But if it didn't, you couldn't add them yourself like you can in Common Lisp. It can be done in Arc, but it's harder (and IMO less elegant) because Arc does not have generic functions, so you have to essentially re-invent that functionality yourself.

Saturday, February 02, 2008

What Python gets right

In my last post I wrote:

I find that the language itself actually has an awful lot to recommend it, and that there is a lot that the Lisp world could learn from Python.

A couple of people asked about that so here goes:

I like the uniformity of the type system, and particularly the fact that types are first-class data structures, and that I can extend primitive types. I used this to build an ORM for a web development system that I used to build the first revision of what eventually became Virgin Charter. The ORM was based on statically-typed extensions to the list and dict data types called listof and dictof. listof and dictof are meta-types which can be instantiated to produce statically typed lists and dicts, e.g.:

>>> listof(int)
<type 'list_of<int>'>
>>> l=listof(int)()
>>> l.append(1)
>>> l.append(1.2)
Warning: coerced 1.2 to <type 'int'>
>>> l.append("foo")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "container.py", line 126, in append
value = check_type(value, self.__element_type__)
File "container.py", line 78, in check_type
raise TypeError('%s cannot be coerced to %s' % (value, _type))
TypeError: foo cannot be coerced to <type 'int'>
>>>

I also extended the built-in int type to make range-bounded integers:

>>> l=listof(rng(1,10))()
>>> l.append(5)
Warning: coerced 5 to <type int between 1 and 10>
>>> l.append(20)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "container.py", line 126, in append
value = check_type(value, self.__element_type__)
File "container.py", line 78, in check_type
raise TypeError('%s cannot be coerced to %s' % (value, _type))
TypeError: 20 cannot be coerced to <type int between 1 and 10>

I've also got a range-bounded float type, and a length-limited string type.

All of these types have automatic translations into SQL, so to make them persistent you don't have to do any work at all. They automatically generate the right SQL tables and insert/update commands to store and retrieve themselves from a SQL database.

You can't do this in Common Lisp because built-in types are not fully integrated into the CLOS hierarchy (or, to be more specific, the classes of built-in types are instances of built-in-class rather than standard-class, so you can't subclass them or add slots to them).

You can actually do a surprising range of macro-like things within Python syntax. For example, my ORM has a defstruct function for defining structure types with statically-typed slots. Its syntax uses Python's keyword syntax to define slots, e.g.:

defstruct('mystruct', x=1, y=2.3, z=float)

This defines a structure type with three slots. X is constrained to be an integer with default value of 1. Y is constrained to be a float with default value of 2.3. Z is constrained to be a float, but because it has no default value it can also be None.

If you look at SQLAlchemy and Elixir they take this sort of technique to some really scary extremes.

I like Python's slice notation and negative indexing. Being able to write x[1:-1] is a lot nicer than (subseq x 1 (- (length x) 1)).

I like the fact that hash tables have a built-in syntax. I also like the fact that their being hash tables is mostly hidden behind an abstraction.

I like list comprehensions and iterators.

A lot of these things can be added to Common Lisp without too much trouble. (I actually have CL libraries for iterators and abstract associative maps.) But some of them can't, at least not easily.
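
For what it's worth, the slicing part at least is easy enough to sketch in Common Lisp (SLICE is a made-up name, not a library function):

;; Python-style slicing with negative indices for any CL sequence.
(defun slice (seq start &optional end)
  (let* ((len (length seq))
         (s (if (minusp start) (+ len start) start))
         (e (cond ((null end) len)
                  ((minusp end) (+ len end))
                  (t end))))
    (subseq seq s e)))

;; (slice "hello" 1 -1)  =>  "ell"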

Just for the record, there are a lot of things I don't like about Python, starting with the fact that it isn't Lisp. I think syntactically-significant whitespace is a *terrible* idea. (I use a programming style that inserts PASS statements wherever they are needed so that emacs auto-indents everything correctly.) I don't like the fact that importing from modules imports values rather than bindings. And I don't like the fact that I can't optionally declare types in order to make my code run faster.

What are programming languages for?

Somewhat to my dismay, a usenet post that I wrote six years ago has suddenly been getting a lot more attention than it deserves. But since it seems to have generated so much interest I decided to write a followup to clarify some of the confusion that the original post generated.

I was struggling with how to organize that followup when I serendipitously saw that Paul Graham had posted a refutation of my criticisms of Arc. I must say that, given the volume of discussion about Arc, that Paul chose my particular criticism as worthy of refutation really made my day. And reading Paul's posting, as so often happens, helped some of my own ideas gel.

The title of this post is taken from a 1989 paper by Phil Agre and David Chapman entitled What are Plans For? Unfortunately, that paper seems to have fallen into obscurity, but it had an enormous influence on me at the time. The paper questioned the then-canonical view that plans are essentially programs to be executed more or less open-loop by a not-very-interesting (from an academic point of view) execution engine. It proposed an alternate point of view that plans should be considered as generalized information resources for a complex (and therefore academically interesting) execution engine. That led to a number of researchers (myself included) more or less contemporaneously putting this idea into practice, which was widely regarded at the time as considerable progress.

The point of this story is that sometimes the best way to make progress is not to just dig in and work, but to step back and question fundamental assumptions. In this case, I think it's worthwhile asking the question: what are programming languages for? Because there are a lot of tacit (and some explicit) answers to that question that I think are actually constraining progress.

The question is not often asked because the answer at first blush seems to be obvious: programming languages are for writing programs. (Duh!) And so the obvious metric for evaluating the quality of a programming language is: how easy does it make the job of writing programs? On this view, Paul's foundational assertion seems entirely reasonable:


I used what might seem a rather mundane test: I worked on things that would make programs shorter. Why would I do that? Because making programs short is what high level languages are for. It may not be 100% accurate to say the power of a programming language is in inverse proportion to the length of programs written in it, but it's damned close.


If you accept this premise, then Paul's approach of building a language by starting with Lisp and essentially Huffman-coding it makes perfect sense. If shorter-is-better, then getting rid of the odd extraneous paren can be a big win.

The reason Paul and I disagree is not that I question his reasoning, but that I question his premise. Shorter is certainly better all else being equal, but all else is not equal. I submit that there are (or at least ought to be) far more important considerations than brevity for a programming language, especially a programming language designed, as Arc is, for exploratory programming.

One indication that Paul's premise is wrong is to push it to its logical conclusion. For concise programs it's really hard to beat APL. But the drawback to APL is so immediately evident that the mere mention of the language is usually enough to refute the extreme version of the short-is-better argument: APL programs are completely inscrutable, and hence unmaintainable. And so conciseness has to be balanced at least to some extent with legibility. Paul subscribes to Abelson and Sussman's admonition that "Programs should be written for people to read, and only incidentally for machines to execute." In fact, Paul believes this so strongly that he thinks that the code should serve as the program's specification. (Can't find the citation at the moment.)

So there's this very delicate balance to be struck between brevity and legibility, and no possible principle for how to strike it because these are incommensurate quantities. Is it really better to shrink SETF down to one character (=) and DEFINE and DEFMACRO down to 3 each (DEF and MAC) than the other way around? For that matter, why have DEF at all? Why not just use = to define everything?

There's an even more fundamental problem here, and that is that legibility is a function of a person's knowledge state. Most people find APL code inscrutable, but not APL programmers. Text from *any* language, programming or otherwise, is inscrutable until you know the language. So even if you could somehow weigh the legibility benefits against the costs of making a program longer, where the optimum lies would depend on who was doing the reading. Not only that, but it would change over time as the reader got more proficient. (Or less. There was a time in my life when I knew how to solve partial differential equations, but I look back at my old homework and it looks like gobbledygook. And yet it's in my handwriting. Gives you some idea of how old I am. We actually wrote things with our hands when I was in school.)

There's another problem with the shorter-is-better premise, which is that the brevity of a program is much more dependent on the available libraries than on the structure of the language. If what you want to do is part of an available library then the code you have to write can be very short indeed, even if you're writing in Cobol (which is notoriously wordy). Contrariwise, a web server in APL would probably be an awful lot of work, notwithstanding that the language is the very caricature of concision.

I submit that what you want from a programming language is not one that makes programs shorter, but one that makes programs easier to create. Note that I did not say easier to write, because writing is only one part of creating a program. In fact, it is far from clear that writing is invariably the best way to create a program. (In fact, it is not entirely clear that the whole concept of program is even a useful one but that is a topic for another day.) The other day I built a program that does some fairly sophisticated image processing, and I did it without writing even a single line of code. I did it using Quartz Composer, and if you haven't ever tried it you really should. It is quite the eye-opening experience. In ten minutes I was able to build a program that would have taken me weeks or months (possibly years) to do any other way.

Now, I am not saying that Quartz Composer is the Right Thing. I am actually not much of a fan of visual programming languages. (In fact, I am in certain circles a notorious critic of UML, which I consider one of the biggest steps backward in the history of software engineering.) I only want to suggest that the Right Thing for creating programs, whatever it turns out to be, may involve an interaction of some form other than typing text. But if you adopt shorter-is-better as your premise you completely close the door in even considering that as a possibility, because your metric is only applicable to text.

There is another fundamental reason for questioning shorter-is-better, especially for exploratory programming. Exploratory programming by definition is programming where you expect to have to change things after you have written them. Doesn't it make sense then to take that into account when choosing a quality metric for a language designed to support exploratory programming? And yet, Paul writes:


The real test of Arc—and any other general-purpose high level language—is not whether it contains feature x or solves problem y, but how long programs are in it.


Built in to this is the tacit assumption that a shorter program is inherently easier to change, I suppose because there's simply less typing involved. But this is clearly not true. Haskell is also a very concise language, but making changes to Haskell code is notoriously difficult. (For that matter, writing Haskell code to begin with is notoriously difficult.)

Cataloging all the language features that potentially make change easier would take me far afield here. My purpose here is just to point out that the source of the disagreement between me and Paul is simply the premise that shorter-is-better. Paul accepts that premise. I don't.

So what are programming languages for? They are (or IMO should be) for making the creation of programs easier. Sometimes that means making them shorter so you can do less typing, but I submit that that is a very superficial criterion, and not one that is likely by itself to serve you well in the long run. Sometimes investing a little more typing can pay dividends down the road, like making you do less typing when you change your mind and decide to use a hash table instead of an association list.

One thing that many people found unsatisfying about my how-I-lost-my-faith posting is that I never really got around to explaining why I lost my faith other than saying that I saw people being productive in other languages. Sorry to disappoint, but that was basically it. What I think needs clarification is exactly what faith I lost. I did not lose faith in Lisp in the sense that it stopped being my favorite programming language. It didn't (notwithstanding that I switched to Python for certain things -- more on that in a moment). What I lost faith in was that Lisp was the best programming language for everyone (and everything), and that the only reason that people didn't use Lisp is that they were basically ignorant. My faith was that once people discovered Lisp then they would flock to it. Some people (hi Kenny!) still believe that. I don't.

The reason I switched to Python was that, for me, given the totality of the circumstances at the time, it was (and still is, though that may be changing) easier for me to build web sites in Python than it was in Lisp. And one of the big reasons for that had nothing to do with the language per se. It had to do with this. 90% of the time when I need to do something in Python all I have to do is go to that page and in two minutes I can find that someone has already done it for me.

Now, Lispers invariably counter that Lisp has all these libraries too, and they may be right. But the overall experience of trying to access library functionality in Python versus Lisp is night and day because of Python's "batteries included" philosophy. To access library functionality in Lisp I first have to find it, which is no small task. Then I often have to choose between several competing implementations. Then I have to download it, install it, find out that it's dependent on half a dozen other libraries and find and install those, then figure out why it doesn't work with my particular implementation... it's a freakin' nightmare. With Python I just type "import ..." and it Just Works. And yes, I know that Python can do this only because it's a single-implementation language, but that's beside the point. As a user, I don't care why Python can do something, I just care that it can.

(BTW, having adopted Python, I find that the language itself actually has an awful lot to recommend it, and that there is a lot that the Lisp world could learn from Python. But that's a topic for another day.)

Let me close by reiterating that I have the highest respect for Paul. I admire what he's doing and I wish him much success (and I really mean that -- I'm not just saying it because I'm angling for an invitation to a YC lunch). But I really do think that he's squandering a tremendous opportunity to make the world a better place by basing his work on a false premise.

UPDATE: A correction and a clarification:

A lot of people have commented that making changes to Haskell code is not hard. I concede the point. I was writing in a hurry, and I should have chosen a better example (Perl regexps perhaps).

Others have pointed out that Paul's program-length metric is node count, not character count, and so APL is not a fair comparison. I have two responses to that. First, APL code is quite short even in terms of node count. Second, Paul may *say* he's only interested in node count, but the names he's chosen for things so far indicate that he's interested in parsimony at the character level as well (otherwise why not e.g. spell out the word "optional" instead of simply using the letter "o"?)

In any case, even node count is a red herring because it begs the question of where you draw the line between "language" and "library" and "program" (and, for that matter, what you consider a node). I can trivially win the Arc challenge by defining a new language (let's call it RG) which is written in Arc in the same way that Arc is written in Scheme. RG consists entirely of one macro: (mac arc-challenge-in-one-node () '([insert code for Arc challenge here])) Now the RG code for the Arc challenge consists of one node, so RG wins over Arc.

And to those who howl in protest that that is cheating I say: yes, that is precisely my point. RG is to Arc exactly what Arc is to Scheme. There's a lot of stuff behind the scenes that allows the Arc challenge code in Arc to be as short as it is, and (and this is the important point) it's all specific to the particular kind of task that the Arc challenge is. Here's a different kind of challenge to illustrate the point: write a program that takes a stream of images from a video camera, does edge-detection on those images at frame rates, and displays the results. Using Quartz Composer I was able to do that in about ten minutes with zero lines of code. By Paul's metric, that makes Quartz Composer infinitely more powerful than Arc (or any other programming language for that matter).

So the Arc challenge proves nothing, except that Arc has a wizzy library for writing certain kinds of web applications. But it's that *library* that's cool, not the language that it's written in. A proper Arc challenge would be to reproduce that library in, say, Common Lisp or Python, and compare how much effort that took.

Friday, February 01, 2008

What Arc gets right

I thought I'd balance out my earlier criticism of Arc with some quick thoughts on what I like about Arc.

The main thing I like about Arc is that it's a Lisp, which is to say, it uses S-expression syntax to express programs as lists. I agree with Paul that this is one of the most colossally good ideas ever to emerge from a human brain. S-expressions are the Right Thing.

The next thing I think I like about Arc (I say "I think" because this capability is only hinted at in the current documentation) is the way it tries to integrate into the web, and in particular, the way it seems to fold control flow for web pages into the control flow of the language (like this for example). This seems to have grown out of Paul's Viaweb days, when he first used the idea of storing web continuations as closures. Integrating that idea into the language and having the compiler automagically generate and manage those closures behind the scenes could be a really big win.

I like the general idea of getting rid of unnecessary parens, though I think this is not as big a deal as Paul seems to think it is.

I like the general idea of making programs concise, though I think this is fraught with peril (APL anyone?) and Arc could do a better job in this regard than it does. (More on that in a later post.)

I think the kerfuffle about Arc's lack of support for unicode is much ado about nothing. That's obviously a first-draft issue, and easy enough to fix.

Finally, I like that Arc is generating a lot of buzz. Getting people interested in Lisp -- any Lisp -- is a Good Thing.