Wednesday, February 27, 2008
I suppose they think setting the building on fire for an evacuation drill would be a good idea too
An armed man who burst into a classroom at Elizabeth City State University was role-playing in an emergency response drill, but neither the students nor assistant professor Jingbin Wang knew that.
The Friday drill, in which a mock gunman threatened panicked students in the American foreign policy class with death, prompted university officials to apologize this week to Wang and offer counseling to faculty and students.
I suspect they're going to be offering a lot more than counseling before this is through.
Complete story [here].
Friday, February 22, 2008
The meaning of innovation
Oh, and this is pretty damn clever too. Things like this give me real hope for the future.
Thursday, February 14, 2008
The litigant that would not die
I don't normally wish my fellow investors ill, but in this case I'll make an exception and say that whoever put up the cash to bring SCO back from the dead deserves to lose every penny. SCO is a parasite, and it should be expunged from the world along with tapeworms, mosquitoes, and malaria protozoa.
CLS
CLS derives its name from the scene in the classic movie The Wizard of Oz where we are introduced to the Cowardly Lion. Not having the courage to stand up to Dorothy, Scarecrow and the Tin Man, the Cowardly Lion decides to prove his manliness (his Lionliness?) by chasing after Dorothy's little dog Toto instead.
CLS is rampant, particularly in American politics. The most recent example is Congress which, not having the courage to stand up to the Bush Administration or the Telcos, decides to chase after Roger Clemens instead. Before that it was Rush Limbaugh and Terri Schiavo and gay people and cancer patients.
Unfortunately, there is not yet a drug approved by the FDA to treat Cowardly Lion Syndrome. I wonder if Congress could work up the courage to fund a study.
Saturday, February 09, 2008
As ye sow...
On that theory, we should be having a Joseph Welch moment one of these days. Maybe this is it.
Thursday, February 07, 2008
What's the right quality metric?
"The reason I'm focusing on the region between axioms and libraries is that, from the programmer's point of view, these operators are the language. These are what your programs are made of. If Lisp were a house, these operators would be the front door, the living room sofa, the kitchen table. Small variations in their design can greatly affect how well the language works.
I've taken a lot of heat for focusing on this"
For the record, I think focusing on this is absolutely the right thing to do, and I applaud Paul for standing up to criticism and doing it. The only thing I disagree with him on is this:
By definition, there is some optimal path from axioms up to a complete language.
This is true only with respect to a particular quality metric, and I think that the quality metric Paul has chosen is brkn.
What do I suggest instead? Glad you asked.
First, let's go back and revisit briefly the question of what programming languages are for. Paul takes as an axiom that programming languages are for making programs shorter but I say they are for making programs easier to create and maintain, which may or may not be the same thing as making them shorter. Certainly, all else being equal, shorter is better, but all else is rarely equal. Sometimes adding length buys you something, and you have to make the tough call about whether the benefit is worth the cost.
It is ironic that Paul would choose such a simplistic metric after going to such great lengths to make the point that programming is like painting. I think this is a great insight, but one of the conclusions is that assessing the quality of a program, just like assessing the quality of a painting, is not such an easy thing to do. Paul's quality metric for programs is analogous to assessing the quality of a painting by counting the number of brushstrokes, and just as absurd. It is a genuine puzzle that Paul of all people needs to be convinced of this.
Programming is all about translating thoughts in people's brains into bits. There is a huge impedance mismatch between brains and bits, and programming languages help to bridge the gap. But there is no more a one-size-fits-all quality metric for programming languages than there is for paintings, or anything else that involves thoughts. There will always be different strokes for different folks.
That is not to say that we need to descend into the black hole of postmodern relativism and say that it's all just a matter of opinion and interpretation. I think there is a criterion that can be used to effectively compare one language against another, but it is very tricky to state. Here's the intuition:
If language A can do everything that language B can do, and language A can do something that language B cannot do, then language A is superior to language B.
(Note that this criterion can produce only a partial-ordering of language quality, but I think that is both inevitable and OK.)
The problem with this as stated is that all languages are Turing-equivalent, so from a strictly mathematical point of view all programming languages are the same. What I mean by "cannot do" is what cannot be done without transcending the framework of the language. Here are some examples:
1. C cannot throw exceptions.
2. Common Lisp cannot support Arc's composition operator (both because the interpretation of an unescaped colon within a symbol name is hard-coded in the standard to have a particular meaning, and because the semantics of function calls are defined within the standard in such a way that calling a composed function would be a syntax error).
3. Arc cannot (as it currently stands) dereference an array in constant time.
4. You can't add new control constructs to Python.
The obvious drawback to my criterion as stated is that it prefers large, unwieldy, kitchen-sink languages to small, elegant languages. So I'll adopt a version of Paul's minimalist metric as a secondary criterion:
If two languages support essentially the same feature set, but language A is in some sense "smaller" than language B, then language A is superior to language B.
In other words, parsimony is desirable, but only secondarily to features (or, more precisely, a lack of constraints). On this view, it is clear why macros are such a big win: macros are a sort of "meta-feature" that let you add new features, and so any language with macros has a huge leg up (in terms of my quality metric) over any language that doesn't.
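To make the point concrete, here's a tiny Common Lisp sketch of adding a new control construct (an UNTIL loop; the name and example are just for illustration) as an ordinary user-level macro, which is exactly the kind of thing example 4 says you can't do in Python:

(defmacro until (test &body body)
  ;; run BODY repeatedly until TEST becomes true
  `(loop (when ,test (return))
         ,@body))

;; (let ((n 0)) (until (>= n 3) (print n) (incf n)))   ; prints 0, 1, 2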
So why am I so reticent about Arc? After all, Arc is pretty small, but it has macros, so it should be able to subsume other languages. And because it will subsume other languages in a minimalist way it will be the best language. Right?
Well, maybe. There are a couple of problems.
First, not all macro systems are created equal, and Arc's macro system has some known problems. How serious those problems are in practice is, perhaps, an open issue, but my quality metric doesn't take such things into account. A language that solves a problem is in my view better than a language that leaves it up to the user to solve that problem, even if that problem doesn't arise very often. (I dislike Python's syntactically-significant whitespace for the same reason, and despite the fact that it mostly works in practice.)
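To make the hygiene problem concrete, here is the classic capture example, sketched in Common Lisp (Arc's uniq plays the role of gensym here; SWAP and the variable names are made up for illustration):

(defmacro swap (a b)
  ;; TMP is deliberately NOT a gensym/uniq...
  `(let ((tmp ,a))
     (setf ,a ,b ,b tmp)))

;; (let ((tmp 1) (x 2)) (swap tmp x))
;; ...so the macro's TMP captures the user's TMP and the swap silently does nothing.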
Second, there is more than one kind of macro. Common Lisp, for example, has symbol macros, which turn out to be tremendously useful for implementing things like module systems, and reader macros which let you change the lexical surface syntax of the language if you want to, and compiler macros which let you give the compiler hints about how to make your code more efficient without having to change the code itself.
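If you haven't run into these before, here are tiny illustrative sketches of each; the names *SESSION*, CURRENT-USER and SQUARE are invented for the example:

;; Symbol macro: CURRENT-USER looks like a variable but expands to a place.
(defvar *session* (make-hash-table))
(define-symbol-macro current-user (gethash 'user *session*))
;; (setf current-user "ron") now works, and so does plain CURRENT-USER.

;; Reader macro: change the surface syntax so !x reads as (not x).
;; (Don't actually install this globally; it's just to show the mechanism.)
(set-macro-character #\! (lambda (stream char)
                           (declare (ignore char))
                           `(not ,(read stream t nil t))))

;; Compiler macro: a hint that lets constant calls be folded at compile time.
(defun square (x) (* x x))
(define-compiler-macro square (&whole form x)
  (if (numberp x) (* x x) form))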
Finally, there are language features like type systems which, I am told by people who claim to understand them (I'm not one of them), are quite spiffy and let you do all manner of cool things.
That's the real challenge of designing the 100-year language. In order to make sure that you're not designing yet another Blub you may need to be able to support features that you don't actually understand, or maybe have never even heard of (or maybe haven't even been invented yet). To think otherwise is, IMO, to deny the vast richness of programming, indeed of mathematics itself.
This is totally unacceptable

The New York Times reports:
"An article about the Prophet Muhammad in the English-language Wikipedia has become the subject of an online protest in the last few weeks because of its representations of Muhammad, taken from medieval manuscripts.
In addition to numerous e-mail messages sent to Wikipedia.org, an online petition cites a prohibition in Islam on images of people.
The petition has more than 80,000 “signatures,” though many who submitted them to ThePetitionSite.com, remained anonymous.
“We have been noticing a lot more similar sounding, similar looking e-mails beginning mid-January,” said Jay Walsh, a spokesman for the Wikimedia Foundation in San Francisco, which administers the various online encyclopedias in more than 250 languages.
A Frequently Asked Questions page explains the site’s polite but firm refusal to remove the images: “Since Wikipedia is an encyclopedia with the goal of representing all topics from a neutral point of view, Wikipedia is not censored for the benefit of any particular group.”
The notes left on the petition site come from all over the world. “It’s totally unacceptable to print the Prophet’s picture,” Saadia Bukhari from Pakistan wrote in a message. “It shows insensitivity towards Muslim feelings and should be removed immediately.”"
I applaud Wikipedia for taking a firm stand on this. In solidarity with them I am posting one of the images in question hosted on my own server. It is totally unacceptable to ask for this image to be removed. It shows insensitivity towards the feelings of tolerant and freedom-loving people throughout the world (to say nothing of bad theology in light of the fact that the image in question was painted by a Muslim!) The demand that this image be removed should be withdrawn immediately.
Z SHRTR BTTR?
Central to this flurry of activity is Paul Graham's quality metric, that shorter (in terms of node count) is better. In fact, Paul explicitly says "making programs short is what high level languages are for."
I have already posted an argument against this position, citing APL as an example of a very concise language that is nonetheless not widely considered to be the be-all-and-end-all of high-level languages (except, perhaps, among its adherents). Here I want to examine Paul's assumption on its own terms, and see what happens if you really take his quality metric seriously.
Paul defines his quality metric at the end of this essay:
The most meaningful test of the length of a program is not lines or characters but the size of the codetree-- the tree you'd need to represent the source. The Arc example has a codetree of 23 nodes: 15 leaves/tokens + 8 interior nodes.
He is careful to use nodes rather than characters or lines of code in order to exclude obvious absurdities like single-character variable and function names (despite the fact that he has a clear tendency towards parsimony even in this regard). So let's take a look at Arc and see if we can improve it.
One of the distinguishing features of Arc compared to other Lisps is that it has two different binding forms that do the same thing, one for the single-variable case (LET) and another for the multiple-variable case (WITH). This was done because Paul examined a large corpus of Lisp source code and found that the traditional Lisp LET was used most of the time to bind only a single variable, and so it was worthwhile making this a special case.
But it turns out that we can achieve the exact same savings WITHOUT making the single-variable case special. Let's take a traditional Lisp LET form:
(let ((var1 expr1) (var2 expr2) ...) form1 form2 ... formN)
and just eliminate the parentheses:
(let var1 expr1 var2 expr2 ... form1 form2 ... formN)
At first glance this doesn't work because we have no way of knowing where the var-expr pairs stop and the body forms start. But actually this is not true. We CAN parse this if we observe that, if there is more than one body form, then the first body form MUST be a combination (i.e. not an atom) in order to be semantically meaningful. So we can tell where the var/expr pairs stop and the body forms begin by looking for the first form in a varN position that is either 1) not an atom or 2) not followed by another form. If you format it properly it doesn't even look half bad:
(let var1 expr1
     var2 expr2
     ...
  (form1 ...)
  (form2 ...)
  )

OR

(let var1 expr1
     var2 expr2
     ...
  result)
In order to avoid confusion, I'm going to rename this construct BINDING-BLOCK (BB for short). BB is, by Paul's metric, an unambiguous improvement in the design of Arc because it's the same length (in terms of nodes) when used in the single-variable case, and it's a node shorter when used in the multiple-variable case.
In fact, we can do even better by observing that any symbol in the body of a binding block that is not at the end can be unambiguously considered to be a binding because otherwise it would not be semantically meaningful. So we can simplify the following case:
(bb var1 expr1
    (do-some-computation)
    (bb var2 expr2
        (do-some-more-computation)
        ....

to:

(bb var1 expr1
    (do-some-computation)
    var2 expr2
    (do-some-more-computation)
    ....
Again, this is an unambiguous win because every time we avoid a nested binding block we save two nodes.
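For concreteness, here's a minimal Common Lisp sketch of the parsing rule described above, with no keywords or destructuring (the full version I actually use appears in the update at the end of this post):

(defmacro bb (&rest body)
  (cond ((null (cdr body)) (car body))              ; last form: the result
        ((consp (car body))                         ; a combination: a body form
         `(progn ,(car body) (bb ,@(cdr body))))
        (t `(let ((,(car body) ,(cadr body)))       ; a symbol: bind it to the next form
              (bb ,@(cddr body))))))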
But notice that something is happening that ought to make us feel just a teensy bit queasy about all these supposed "improvements": our code is starting to look not so much like Lisp. It's getting harder to parse, both for machines and humans. Logical sub-blocks are no longer delimited by parentheses, so it's harder to pick out where they start and end.
Still, it's not completely untenable to argue that binding blocks really are an improvement. (For one thing, they will probably appeal to parenthophobes.) I personally have mixed feelings about them. I kind of like how they can keep the code from creeping off to the right side of the screen, but I also like being able to e.g. select a sub-expression by double-clicking on a close-paren.
Can we do even better? It would seem that with binding-block we have squeezed every possible extraneous node out of the code. There simply are no more parentheses we can get rid of without making the code ambiguous. Well, it turns out that's not true. There is something else we can get rid of to make the code even shorter. You might want to see if you can figure out what it is before reading on.
Here's a hint: Arc already includes a construct [...] which is short for (fn (_) ...).
Why stop at one implied variable? Why not have [...] be short for (fn ((o _) (o _1) (o _2) ...) ...)? Again, this is an unambiguous improvement by Paul's quality metric because it allows us to replace ANY anonymous function with a shorter version using implied variable names, not just anonymous functions with one variable.
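Here's a rough sketch of that extended bracket notation as a Common Lisp reader macro, just to show it's doable (this is not how Arc implements its bracket syntax, and I've arbitrarily stopped at three implied variables):

(set-macro-character #\[
  (lambda (stream char)
    (declare (ignore char))
    (let ((body (read-delimited-list #\] stream t)))
      `(lambda (&optional _ _1 _2)
         (declare (ignorable _ _1 _2))
         ,body))))
(set-macro-character #\] (get-macro-character #\)))

;; (funcall [+ _ _1] 3 4) => 7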
And now we should be feeling very queasy indeed. On Paul's quality metric, all those variable names are expensive (every one requires a node) and yet I hope I don't have to convince you that getting rid of them is not an unalloyed good. If you doubt this, consider that we can use the same argument convention for non-anonymous functions as well, and replace the lambda-list with a single number indicating the number of arguments:
(def foo 3 (do-some-computation-on _1 _2 _3))
Again, an unambiguous improvement by Paul's metric.
We can extend this to binding blocks as well, where we replace variable names with numbers as well. In fact, we don't even need to be explicit about the numbers! We can just implicitly bind the result of every form in a binding block to an implicitly named variable! Again, on Paul's quality metric this would be an unambiguous improvement in the language design.
All this is not as absurd as it might appear. If we push this line of reasoning to its logical conclusion we would end up with a language that very closely resembled Forth. Forth is a fine language. It has its fans. It's particularly good at producing very small code for use on very small embedded processors, so if you need to program an 8-bit microcontroller with 4k of RAM it's not a bad choice at all. But God help you if you need to debug a Forth program. If you think Perl is write-only, you ain't seen nothin' yet.
The flaw in the reasoning is obvious: adding nodes to program source has a cost to be sure, but it can also provide benefits in terms of readability, flexibility, maintainability, editability, reusability... and there's no way to make those tradeoffs unambiguously because they are incommensurate quantities. That is why language design is hard.
Now, I don't really mean to rag on Arc. I only want to point out that if Arc succeeds (and I think it could, and I hope it does) it won't be because it made programs as short as they can possibly be. It will be because it struck some balance between brevity and other factors that appeals to a big enough audience to reach critical mass. And I think Paul is smart enough and charismatic enough to gather that audience despite the fact that there might be a discrepancy between Paul's rhetoric and Arc's reality. But IMO the world would be a better place (and Arc would be a better language) if the complex tradeoffs of language design were taken more seriously.
[UPDATE:]
Sami Samhuri asks:
How do you account for the following case?
(let i 0
(x y) (list 1 2)
(prn "i is " i)
(prn "x is " x)
(prn "y is " y))
Well, the easiest way is (bb i 0 x 1 y 2 ...
;-)
Or, less glibly: (bb i 0 L (something-that-returns-a-list) x (1st L) y (2nd L) ...
Of course, if you want to support destructuring-bind directly you need some kind of marker to distinguish trees of variables from forms. In this case, Arc's WITH syntax, which essentially uses a set of parens to be that marker, becomes tenable again (though it doesn't actually support destructuring even though the syntax could easily be extended to support it). Another possibility is to overload the square-bracket syntax, since that also would be a no-op in a binding position. Now things start to get even more complicated to parse, but hey, shorter is better, right?
But what happens if some day you decide that multiple values would be a spiffy feature to add to the language? Now you're back to needing a marker.
Personally, in my BB macro (in Common Lisp) I use the :mv and :db keywords as markers for multiple-value-bind and destructuring-bind. So your example would be:
(bb i 0
:db (x y) (list 1 2)
; and just for good measure:
:mv (q s) (round x y)
...
I also use a prefix "$" character to designate dynamic bindings, which tends to send CL purists into conniptions. :-)
And to top it off, I use the :with keyword to fold in (with-X ...) forms as well. The result is some very un-Lispy-looking Lisp, but which shorter-is-better advocates ought to find appealing.
Here's the code for BB in case you're interested:
;;; Binding Block
;;; Relies on helper functions (1st, 2nd, 3rd, fst, rst, rrst, rrrst,
;;; find-specials, concatenate-symbol) defined in my utility library.
(defmacro bb (&rest body)
  (cond
    ((null (rst body)) (fst body))
    ((consp (1st body))
     `(progn ,(1st body) (bb ,@(rst body))))
    ((not (symbolp (1st body)))
     (error "~S is not a valid variable name" (1st body)))
    ((eq (1st body) ':mv)
     (if (symbolp (2nd body))
         `(let ((,(2nd body) (multiple-value-list ,(3rd body))))
            (bb ,@(rrrst body)))
         `(multiple-value-bind ,(2nd body) ,(3rd body)
            (bb ,@(rrrst body)))))
    ((eq (1st body) :db)
     `(destructuring-bind ,(2nd body) ,(3rd body)
        (declare (special ,@(find-specials (2nd body))))
        (bb ,@(rrrst body))))
    ((eq (1st body) :with)
     `(,(concatenate-symbol 'with- (2nd body)) ,(3rd body) (bb ,@(rrrst body))))
    ((keywordp (1st body))
     (error "~S is not a valid binding keyword" (1st body)))
    (t `(let ((,(1st body) ,(2nd body)))
          (declare (special ,@(find-specials (1st body))))
          (bb ,@(rrst body))))))
Note that it would be straightforward to extend this to allow users to define their own binding keywords, which would really help things to spin wildly out of control :-)
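Something like this, for instance (a sketch; DEF-BB-KEYWORD and *BB-KEYWORDS* are invented for the example, and BB's keywordp clause would consult the table before signaling an error):

(defvar *bb-keywords* (make-hash-table))

(defmacro def-bb-keyword (name (vars form rest) &body body)
  `(setf (gethash ,name *bb-keywords*)
         (lambda (,vars ,form ,rest) ,@body)))

;; e.g. a :slots keyword, so (bb :slots (x y) pt ...) becomes (with-slots (x y) pt ...)
(def-bb-keyword :slots (vars form rest)
  `(with-slots ,vars ,form ,rest))

;; and in BB, just before the keywordp error clause:
;; ((gethash (1st body) *bb-keywords*)
;;  (funcall (gethash (1st body) *bb-keywords*)
;;           (2nd body) (3rd body) `(bb ,@(rrrst body))))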
Sunday, February 03, 2008
The joy of iterators
(defun print-elements (thing)
  (etypecase thing
    (list (loop for element in thing do (print element)))
    (vector (loop for element across thing do (print element)))))
This is not particularly elegant. The amount of repetition between the two clauses in the etypecase is pretty annoying. We need to copy almost the entire LOOP clause just so we can change "in" to "across". How can we make this prettier?
The usual fix for repetitive code is to use a macro, e.g.:
(defmacro my-loop1 (var keyword thing &body body)
`(loop for ,var ,keyword ,thing do ,@body))
But this hardly helps at all (and arguably makes the situation worse):
(defun print-elements (thing)
  (etypecase thing
    (list (my-loop1 element in thing (print element)))
    (vector (my-loop1 element across thing (print element)))))
We could do this:
(defmacro my-loop2 (keyword)
`(loop for element ,keyword thing do (print element)))
(defun print-elements (thing)
  (etypecase thing
    (list (my-loop2 in))
    (vector (my-loop2 across))))
But that's not a very good solution either. We're not likely to get much re-use out of my-loop2, the code overall is not much shorter, and it feels horribly contrived.
Here's another attempt, taking advantage of the fact that Common Lisp has a few functions built-in that are generic to sequences (which include lists and vectors):
(defun print-elements (thing)
(loop for i from 0 below (length thing) do (print (elt thing i))))
That looks better, but it's O(n^2) when applied to a list, which is not so good. And in any case it completely falls apart when we suddenly get a new requirement:
Extend print-elements so that it can accept a hash table, and print out the hash table's key-value pairs.
Now the first solution is suddenly looking pretty good, because it's the easiest to extend to deal with this new requirement. We just have to add another line to the etypecase:
(defun print-elements (thing)
  (etypecase thing
    (list (loop for element in thing do (print element)))
    (vector (loop for element across thing do (print element)))
    (hash-table (loop for key being the hash-keys of thing do (print (list key (gethash key thing)))))))
Now we get yet another requirement:
Extend print-elements so that it can accept a file input stream, and print out the lines of the file.
I'll leave that one as an exercise.
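For the impatient, here's one way it might look (a sketch that reads lines with READ-LINE and a NIL end-of-file value):

(defun print-elements (thing)
  (etypecase thing
    (list (loop for element in thing do (print element)))
    (vector (loop for element across thing do (print element)))
    (hash-table (loop for key being the hash-keys of thing
                      do (print (list key (gethash key thing)))))
    (stream (loop for line = (read-line thing nil nil)
                  while line do (print line)))))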
Now, the situation seems more or less under control here, with one minor problem: every time we get a new requirement of this sort we need to change the print-elements function. That's not necessarily a problem, except that if we have more than one programmer working on implementing these requirements we have to be sure that they don't stomp on each other as they edit print-elements (but modern revision-control systems should be able to handle that).
But now we get this requirement:
Extend print-elements so that it takes as an optional argument an integer N and prints the elements N at a time.
Now all our hard work completely falls apart, because the assumption that the elements are to be printed one at a time is woven deeply into the fabric of our code.
Wouldn't it be nice if the LOOP macro just automatically did the Right Thing for us, so that we could just write, e.g.:
(defun print-elements (thing &optional (n 1))
(loop for elements in (n-at-a-time thing n) do (print elements)))
and be done with it? Can we make that happen? Yes, we can. Here's how:
(defconstant +iterend+ (make-symbol "ITERATION_END"))

(defmacro for (var in thing &body body)
  (with-gensym itervar
    `(let ( (,itervar (iterator ,thing)) )
       (loop for ,var = (funcall ,itervar)
             until (eq ,var +iterend+)
             ,@body))))

(defmethod iterator ((l list))
  (fn () (if l (pop l) +iterend+)))

(defmethod iterator ((v vector))
  (let ( (len (length v)) (cnt 0) )
    (fn () (if (< cnt len)
               (prog1 (elt v cnt) (incf cnt))
               +iterend+))))

(defun print-elements (thing)
  (for element in thing do (print element)))
The overall code is much longer than our original solution, but I claim that it is nonetheless a huge win. Why? Because it is so much easier to extend. In order to make print-elements work on new data types all we have to do is define an iterator method on that data type which returns a closure that conforms to the iteration protocol that we have (implicitly for now) defined. So, for example, to handle streams all we have to do is:
(define-method (iterator (s stream)) (fn () (read-char s nil +iterend+)))
We don't have to change any existing code. And in particular, we don't need to redefine print-elements.
Furthermore, we can now do this neat trick:
(defmacro n-of (form n)
  `(loop for #.(gensym "I") from 1 to ,n collect ,form))

(defun n-at-a-time (n thing)
  (let ( (iter (iterator thing)) )
    (fn () (let ((l (n-of (funcall iter) n)))
             (if (eq (car l) +iterend+) +iterend+ l)))))
It actually works:
? (for l in (n-at-a-time 2 '(1 2 3 4)) do (print l))
(1 2)
(3 4)
NIL
? (for l in (n-at-a-time 2 #(1 2 3 4)) do (print l))
(1 2)
(3 4)
NIL
Furthermore, n-at-a-time works on ANY data type that has an iterator method defined for it.
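For instance, the stream iterator defined above means you can pull characters out of a string stream three at a time. The transcript below is what I would expect, not copied from a live session, and it assumes (as the transcripts above suggest) that ITERATOR applied to a closure just returns the closure:

? (with-input-from-string (s "abcdef")
    (for l in (n-at-a-time 3 s) do (print l)))
(#\a #\b #\c)
(#\d #\e #\f)
NIL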
There's a fly in the ointment though. Consider this requirement:
Write a function that takes either a list or a vector of integers and destructively increments all of its elements by 1.
We can't do that using the iteration protocol as it currently stands, because it only gives us access to values and not locations. Can we extend the protocol so that we get both? Of course we can, but it's a tricky design issue because there are a couple of different ways to do it. The way I chose was to take advantage of Common Lisp's ability to let functions return multiple values. I extended the FOR macro to accept multiple variables corresponding to those multiple values. By convention, iterators return two values (except for the N-AT-A-TIME meta-iterator): the value, and a key that refers back to the container being iterated over (assuming that makes sense -- some data types, like streams, don't have such keys). This makes the code for the FOR macro and the associated ITERATOR methods quite a bit more complicated:
(defmacro for (var in thing &body body)
  (unless (sym= in :in) (warn "expected keyword 'in', got ~A instead" in))
  (with-gensym itervar
    `(let ( (,itervar (iterator ,thing)) )
       ,(if (consp var)
            `(loop for ,var = (multiple-value-list (funcall ,itervar))
                   until (eq ,(fst var) +iterend+)
                   ,@body)
            `(loop for ,var = (funcall ,itervar)
                   until (eq ,var +iterend+)
                   ,@body)))))

(define-method (iterator (v vector))
  (let ( (len (length v)) (cnt 0) )
    (fn () (if (< cnt len)
               (multiple-value-prog1 (values (elt v cnt) cnt) (incf cnt))
               +iterend+))))

(define-method (iterator (h hash-table))
  (let ( (keys (loop for x being the hash-keys of h collect x)) )
    (fn ()
      (if keys (let ( (k (pop keys)) ) (values k (gethash k h))) +iterend+))))

(defun n-at-a-time (n thing)
  (let ( (iter (iterator thing)) )
    (fn () (apply 'values (n-of (funcall iter) n)))))
Note that N-AT-A-TIME has actually gotten simpler since we no longer need to check for +iterend+. When the underlying iterator returns +iterend+ it just gets passed through automatically.
All this works in combination with another generic function called REF which is like ELT except that, because it's a generic function, it can be extended to work on data types other than lists and vectors. The solution to the last problem is now:
(for (elt key) in thing do (setf (ref thing key) (+ elt 1)))
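REF itself isn't shown here; a minimal sketch of what such a generic function might look like (not necessarily my actual implementation) is:

(defgeneric ref (container key))
(defgeneric (setf ref) (value container key))

(defmethod ref ((c sequence) key) (elt c key))
(defmethod (setf ref) (value (c sequence) key) (setf (elt c key) value))

(defmethod ref ((c hash-table) key) (gethash key c))
(defmethod (setf ref) (value (c hash-table) key) (setf (gethash key c) value))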
So we've had to do a lot of typing, but the net result is a huge win in terms of maintainability and flexibility. We can apply this code to any data type for which we can define an iterator and reference method, which means we can swap those types in and out without having to change any code. We can define meta-iterators (like N-AT-A-TIME) to modify how our iterations happen. (By default, the iterator I use for streams iterates over characters, but I have a meta-iterator called LINES that iterates over lines instead. It's easy to define additional meta-iterators that would iterate over paragraphs, or forms, or other semantically meaningful chunks.)
All this code, including the support macros (like with-gensym), is available at http://www.flownet.com/ron/lisp in the files utilities.lisp and dictionary.lisp. (There's lots of other cool stuff in there too, but that will have to wait for another day.)
UPDATE: I mentioned that iterators are not my idea, but didn't provide any references. There are plenty of them, but here are two to get you started if you want to read what people who actually know what they are talking about have to say about this:
http://okmij.org/ftp/Scheme/enumerators-callcc.html
http://okmij.org/ftp/papers/LL3-collections-enumerators.txt
Thanks to sleepingsquirrel for pointing these out.
I also realized after reading these that I probably should have used catch/throw instead of +iterend+ to end iterations.
Finally, it's worth noting that Python has iterators built into the language. But if it didn't, you couldn't add them yourself like you can in Common Lisp. It can be done in Arc, but it's harder (and IMO less elegant) because Arc does not have generic functions, so you have to essentially re-invent that functionality yourself.
Saturday, February 02, 2008
What Python gets right
I find that the language itself actually has an awful lot to recommend it, and that there is a lot that the Lisp world could learn from Python.
A couple of people asked about that so here goes:
I like the uniformity of the type system, and particularly the fact that types are first-class data structures, and that I can extend primitive types. I used this to build an ORM for a web development system that I used to build the first revision of what eventually became Virgin Charter. The ORM was based on statically-typed extensions to the list and dict data types called listof and dictof. listof and dictof are meta-types which can be instantiated to produce statically typed lists and dicts, e.g.:
>>> listof(int)
<type 'list_of<int>'>
>>> l=listof(int)()
>>> l.append(1)
>>> l.append(1.2)
Warning: coerced 1.2 to <type 'int'>
>>> l.append("foo")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "container.py", line 126, in append
value = check_type(value, self.__element_type__)
File "container.py", line 78, in check_type
raise TypeError('%s cannot be coerced to %s' % (value, _type))
TypeError: foo cannot be coerced to <type 'int'>
>>>
I also extended the built-in int type to make range-bounded integers:
>>> l=listof(rng(1,10))()
>>> l.append(5)
Warning: coerced 5 to <type int between 1 and 10>
>>> l.append(20)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "container.py", line 126, in append
value = check_type(value, self.__element_type__)
File "container.py", line 78, in check_type
raise TypeError('%s cannot be coerced to %s' % (value, _type))
TypeError: 20 cannot be coerced to <type int between 1 and 10>
I've also got a range-bounded float type, and a length-limited string type.
All of these types have automatic translations into SQL, so to make them persistent you don't have to do any work at all. They automatically generate the right SQL tables and insert/update commands to store and retrieve themselves from a SQL database.
You can't do this in Common Lisp because built-in types are not part of the CLOS hierarchy (or, to be more specific, built-in types do not have standard-object as their meta-type).
You can actually do a surprising range of macro-like things within Python syntax. For example, my ORM has a defstruct function for defining structure types with statically-typed slots. It uses Python's keyword-argument syntax to define slots, e.g.:
defstruct('mystruct', x=1, y=2.3, z=float)
This defines a structure type with three slots. X is constrained to be an integer with default value of 1. Y is constrained to be a float with default value of 2.3. Z is constrained to be a float, but because it has no default value it can also be None.
If you look at SQLAlchemy and Elixir, you'll see they take this sort of technique to some really scary extremes.
I like Python's slice notation and negative indexing. Being able to write x[1:-1] is a lot nicer than (subseq x 1 (- (length x) 1)).
I like the fact that hash tables have a built-in syntax. I also like the fact that their being hash tables is mostly hidden behind an abstraction.
I like list comprehensions and iterators.
A lot of these things can be added to Common Lisp without too much trouble. (I actually have CL libraries for iterators and abstract associative maps.) But some of them can't, at least not easily.
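For example, a Python-style slice with negative indices is a five-minute exercise (a sketch, with the name SLICE chosen arbitrarily; it's not from any of my libraries):

(defun slice (seq start &optional end)
  ;; negative indices count from the end, Python-style
  (let* ((len (length seq))
         (s (if (minusp start) (+ len start) start))
         (e (cond ((null end) len)
                  ((minusp end) (+ len end))
                  (t end))))
    (subseq seq s e)))

;; (slice "hello" 1 -1) => "ell"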
Just for the record, there are a lot of things I don't like about Python, starting with the fact that it isn't Lisp. I think syntactically-significant whitespace is a *terrible* idea. (I use a programming style that inserts PASS statements wherever they are needed so that emacs auto-indents everything correctly.) I don't like the fact that importing from modules imports values rather than bindings. And I don't like the fact that I can't optionally declare types in order to make my code run faster.
What are programming languages for?
I was struggling with how to organize that followup when I serendipitously saw that Paul Graham had posted a refutation of my criticisms of Arc. I must say that, given the volume of discussion about Arc, that Paul chose my particular criticism as worthy of refutation really made my day. And reading Paul's posting, as so often happens, helped some of my own ideas gel.
The title of this post is taken from a 1989 paper by Phil Agre and David Chapman entitled What are Plans For? Unfortunately, that paper seems to have fallen into obscurity, but it had an enormous influence on me at the time. The paper questioned the then-canonical view that plans are essentially programs to be executed more or less open-loop by a not-very-interesting (from an academic point of view) execution engine. It proposed an alternate point of view that plans should be considered as generalized information resources for a complex (and therefore academically interesting) execution engine. That led to a number of researchers (myself included) more or less contemporaneously putting this idea into practice, which was widely regarded at the time as considerable progress.
The point of this story is that sometimes the best way to make progress is not to just dig in and work, but to step back and question fundamental assumptions. In this case, I think it's worthwhile asking the question: what are programming languages for? Because there are a lot of tacit (and some explicit) answers to that question that I think are actually constraining progress.
The question is not often asked because the answer at first blush seems to be obvious: programming languages are for writing programs. (Duh!) And so the obvious metric for evaluating the quality of a programming language is: how easy does it make the job of writing programs? On this view, Paul's foundational assertion seems entirely reasonable:
I used what might seem a rather mundane test: I worked on things that would make programs shorter. Why would I do that? Because making programs short is what high level languages are for. It may not be 100% accurate to say the power of a programming language is in inverse proportion to the length of programs written in it, but it's damned close.
If you accept this premise, then Paul's approach of building a language by starting with Lisp and essentially Huffman-coding it makes perfect sense. If shorter-is-better, then getting rid of the odd extraneous paren can be a big win.
The reason Paul and I disagree is not that I question his reasoning, but that I question his premise. Shorter is certainly better all else being equal, but all else is not equal. I submit that there are (or at least ought to be) far more important considerations than brevity for a programming language, especially a programming language designed, as Arc is, for exploratory programming.
One indication that Paul's premise is wrong is to push it to its logical conclusion. For concise programs it's really hard to beat APL. But the drawback to APL is so immediately evident that the mere mention of the language is usually enough to refute the extreme version of the short-is-better argument: APL programs are completely inscrutable, and hence unmaintainable. And so conciseness has to be balanced at least to some extent with legibility. Paul subscribes to Abelson and Sussman's admonition that "Programs should be written for people to read, and only incidentally for machines to execute." In fact, Paul believes this so strongly that he thinks that the code should serve as the program's specification. (Can't find the citation at the moment.)
So there's this very delicate balance to be struck between brevity and legibility, and no possible principle for how to strike it because these are incommensurate quantities. Is it really better to shrink SETF down to one character (=) and DEFINE and DEFMACRO down to 3 each (DEF and MAC) than the other way around? For that matter, why have DEF at all? Why not just use = to define everything?
There's an even more fundamental problem here, and that is that legibility is a function of a person's knowledge state. Most people find APL code inscrutable, but not APL programmers. Text from *any* language, programming or otherwise, is inscrutable until you know the language. So even if you could somehow come up with a way to measure the benefits in legibility of the costs of making a program longer, where the local maximum was would depend on who was doing the reading. Not only that, but it would change over time as the reader got more proficient. (Or less. There was a time in my life when I knew how to solve partial differential equations, but I look back at my old homework and it looks like gobbledygook. And yet it's in my handwriting. Gives you some idea of how old I am. We actually wrote things with our hands when I was in school.)
There's another problem with the shorter-is-better premise, which is that the brevity of a program is much more dependent on the available libraries than on the structure of the language. If what you want to do is part of an available library then the code you have to write can be very short indeed, even if you're writing in Cobol (which is notoriously wordy). Contrariwise, a web server in APL would probably be an awful lot of work, notwithstanding that the language is the very caricature of concision.
I submit that what you want from a programming language is not one that makes programs shorter, but one that makes programs easier to create. Note that I did not say easier to write, because writing is only one part of creating a program. In fact, it is far from clear that writing is invariably the best way to create a program. (In fact, it is not entirely clear that the whole concept of program is even a useful one but that is a topic for another day.) The other day I built a program that does some fairly sophisticated image processing, and I did it without writing even a single line of code. I did it using Quartz Composer, and if you haven't ever tried it you really should. It is quite the eye-opening experience. In ten minutes I was able to build a program that would have taken me weeks or months (possibly years) to do any other way.
Now, I am not saying that Quartz Composer is the Right Thing. I am actually not much of a fan of visual programming languages. (In fact, I am in certain circles a notorious critic of UML, which I consider one of the biggest steps backward in the history of software engineering.) I only want to suggest that the Right Thing for creating programs, whatever it turns out to be, may involve an interaction of some form other than typing text. But if you adopt shorter-is-better as your premise you completely close the door in even considering that as a possibility, because your metric is only applicable to text.
There is another fundamental reason for questioning shorter-is-better, especially for exploratory programming. Exploratory programming by definition is programming where you expect to have to change things after you have written them. Doesn't it make sense then to take that into account when choosing a quality metric for a language designed to support exploratory programming? And yet, Paul writes:
The real test of Arc—and any other general-purpose high level language—is not whether it contains feature x or solves problem y, but how long programs are in it.
Built in to this is the tacit assumption that a shorter program is inherently easier to change, I suppose because there's simply less typing involved. But this is clearly not true. Haskell is also a very concise language, but making changes to Haskell code is notoriously difficult. (For that matter, writing Haskell code to begin with is notoriously difficult.)
Cataloging all the language features that potentially make change easier would take me far afield here. My purpose here is just to point out that the source of the disagreement between me and Paul is simply the premise that shorter-is-better. Paul accepts that premise. I don't.
So what are programming languages for? They are (or IMO should be) for making the creation of programs easier. Sometimes that means making them shorter so you can do less typing, but I submit that that is a very superficial criterion, and not one that is likely by itself to serve you well in the long run. Sometimes investing a little more typing can pay dividends down the road, like making you do less typing when you change your mind and decide to use a hash table instead of an association list.
One thing that many people found unsatisfying about my how-I-lost-my-faith posting is that I never really got around to explaining why I lost my faith other than saying that I saw people being productive in other languages. Sorry to disappoint, but that was basically it. What I think needs clarification is exactly what faith I lost. I did not lose faith in Lisp in the sense that it stopped being my favorite programming language. It didn't (notwithstanding that I switched to Python for certain things -- more on that in a moment). What I lost faith in was that Lisp was the best programming language for everyone (and everything), and that the only reason that people didn't use Lisp is that they were basically ignorant. My faith was that once people discovered Lisp then they would flock to it. Some people (hi Kenny!) still believe that. I don't.
The reason I switched to Python was that, for me, given the totality of the circumstances at the time, it was (and still is, though that may be changing) easier for me to build web sites in Python than it was in Lisp. And one of the big reasons for that had nothing to do with the language per se. It had to do with this. 90% of the time when I need to do something in Python all I have to do is go to that page and in two minutes I can find that someone has already done it for me.
Now, Lispers invariably counter that Lisp has all these libraries too, and they may be right. But the overall experience of trying to access library functionality in Python versus Lisp is night and day because of Python's "batteries included" philosophy. To access library functionality in Lisp I first have to find it, which is no small task. Then I often have to choose between several competing implementations. Then I have to download it, install it, find out that it's dependent on half a dozen other libraries and find and install those, then figure out why it doesn't work with my particular implementation... it's a freakin' nightmare. With Python I just type "import ..." and it Just Works. And yes, I know that Python can do this only because it's a single-implementation language, but that's beside the point. As a user, I don't care why Python can do something, I just care that it can.
(BTW, having adopted Python, I find that the language itself actually has an awful lot to recommend it, and that there is a lot that the Lisp world could learn from Python. But that's a topic for another day.)
Let me close by reiterating that I have the highest respect for Paul. I admire what he's doing and I wish him much success (and I really mean that -- I'm not just saying it because I'm angling for an invitation to a YC lunch). But I really do think that he's squandering a tremendous opportunity to make the world a better place by basing his work on a false premise.
UPDATE: A correction and a clarification:
A lot of people have commented that making changes to Haskell code is not hard. I concede the point. I was writing in a hurry, and I should have chosen a better example (Perl regexps perhaps).
Others have pointed out that Paul's program-length metric is node count, not character count, and so APL is not a fair comparison. I have two responses to that. First, APL code is quite short even in terms of node count. Second, Paul may *say* he's only interested in node count, but the names he's chosen for things so far indicate that he's interested in parsimony at the character level as well (otherwise why not e.g. spell out the word "optional" instead of simply using the letter "o"?)
In any case, even node count is a red herring because it begs the question of where you draw the line between "language" and "library" and "program" (and, for that matter, what you consider a node). I can trivially win the Arc challenge by defining a new language (let's call it RG) which is written in Arc in the same way that Arc is written in Scheme. RG consists entirely of one macro: (mac arc-challenge-in-one-node () '([insert code for Arc challenge here])) Now the RG code for the Arc challenge consists of one node, so RG wins over Arc.
And to those who howl in protest that that is cheating I say: yes, that is precisely my point. RG is to Arc exactly what Arc is to Scheme. There's a lot of stuff behind the scenes that allows the Arc challenge code in Arc to be as short as it is, and (and this is the important point) it's all specific to the particular kind of task that the Arc challenge is. Here's a different kind of challenge to illustrate the point: write a program that takes a stream of images from a video camera, does edge-detection on those images at frame rates, and displays the results. Using Quartz Composer I was able to do that in about ten minutes with zero lines of code. By Paul's metric, that makes Quartz Composer infinitely more powerful than Arc (or any other programming language for that matter).
So the Arc challenge proves nothing, except that Arc has a wizzy library for writing certain kinds of web applications. But it's that *library* that's cool, not the language that it's written in. A proper Arc challenge would be to reproduce that library in, say, Common Lisp or Python, and compare how much effort that took.
Friday, February 01, 2008
What Arc gets right
The main thing I like about Arc is that it's a Lisp, which is to say, it uses S-expression syntax to express programs as lists. I agree with Paul that this is one of the most colossally good ideas ever to emerge from a human brain. S-expressions are the Right Thing.
The next thing I think I like about Arc (I say "I think" because this capability is only hinted at in the current documentation) is the way it tries to integrate into the web, and in particular, the way it seems to fold control flow for web pages into the control flow for the language (like this for example). This seems to have grown out of Paul's Viaweb days, when he first used the idea of storing web continuations as closures. Integrating that idea into the language and having the compiler automagically generate and manage those closures behind the scenes could be a really big win.
I like the general idea of getting rid of unnecessary parens, though I think this is not as big a deal as Paul seems to think it is.
I like the general idea of making programs concise, though I think this is fraught with peril (APL anyone?) and Arc could do a better job in this regard than it does. (More on that in a later post.)
I think the kerfuffle about Arc's lack of support for unicode is much ado about nothing. That's obviously a first-draft issue, and easy enough to fix.
Finally, I like that Arc is generating a lot of buzz. Getting people interested in Lisp -- any Lisp -- is a Good Thing.
Thursday, January 31, 2008
My take on Arc
Some quick background: I am as big a fan of Lisp as you could ever hope to find. I've been using Lisp since 1979 (my first Lisp was P-Lisp on an Apple ][ ) and I used it almost exclusively for over twenty years until I lost my faith and switched, reluctantly, to Python (and I was not alone). Recently I have taken up Lisp again since the release of Clozure Common Lisp. I am proud of the fact that my login on Reddit, YC News and Arclanguage.org is Lisper.
Furthermore, I am a huge Paul Graham fan. I think his essays are brilliant. I think Y Combinator is brilliant (to the point where I'm seriously considering moving from LA to the Silicon Valley just so I can go hang out there). Paul is the kind of guy I wish I could be but can't.
And to round out the preliminaries and disclaimers, I am mindful of the fact that the current release of Arc is a first draft, and it's never possible to live up to the hype.
I think all the enthusiasm and buzz about Arc is wonderful, but I am concerned what will happen if people start to think that there's no there there. If Arc doesn't save Lisp it's hard to imagine what would. And unfortunately, I think Arc has some quite serious problems.
The biggest problem with Arc is that it is at the moment not much more than a (very) thin layer on top of Scheme. Now, that would be OK if it were the right thin layer, but I don't think it is. Arc's design path is well-trod, and the pitfalls that lie upon it are mostly well known.
For example, Arc is 1) a Lisp-1 that 2) uses unhygienic macros and 3) does not have a module system. This is bound to lead to problems when programs get big, and not because people forget to put gensyms (which Arc dubs "uniqs") in the right place (although I predict that will be a problem too). The problem is that in a Lisp-1, local variable bindings can shadow global function names, and so if you use a macro M that references a global function F in a context where F is shadowed then M will fail. If you're lucky you'll get an error. If you're not lucky your program will just do some random weird thing. Hygienic macros were not invented just because some intellectuals in an ivory tower wanted to engage in some mathematical masturbation. This is a real problem, and the larger your code base the more real it becomes.
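Here's the shape of the problem, sketched in Common Lisp. Being a Lisp-2, CL needs FLET to exhibit it, but in a Lisp-1 an ordinary LET variable named AVG would do the same damage. AVG and REPORT are made-up names:

(defun avg (xs) (/ (reduce #'+ xs) (length xs)))

(defmacro report (xs)
  `(format t "average: ~A~%" (avg ,xs)))

;; (report '(1 2 3)) prints "average: 2" at top level, but...
(flet ((avg (xs) (first xs)))
  (report '(1 2 3)))   ; ...here it silently prints "average: 1"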
I cite this problem first because macros are, according to Paul Graham, the raison d'etre for Lisp. Macros are the reason for putting up with all those irritating parentheses. And macros in Arc are broken.
Unfortunately, it gets worse.
Arc is supposed to be a language for "exploratory programming" so it's supposed to save you from premature commitments. From the Arc tutorial:
Lists are useful in exploratory programming because they're so flexible. You don't have to commit in advance to exactly what a list represents. For example, you can use a list of two numbers to represent a point on a plane. Some would think it more proper to define a point object with two fields, x and y. But if you use lists to represent points, then when you expand your program to deal with n dimensions, all you have to do is make the new code default to zero for missing coordinates, and any remaining planar code will continue to work.
There are a number of problems with this. First, the kind of flexibility that Paul describes here is not unique to lists. You could accomplish the exact same thing with, for example, a Python object with named slots (which is just a thin wrapper for an abstract associative map -- note that I did not say hash table here. More on this later.) You could have 2-D points with X and Y slots, and 3-D points with X, Y and Z slots. You could even do the 2-D points-return-zero-for-their-nonexistent-Z-slot trick by redefining the __getattr__ method for the 2D point class. Python objects are every bit as flexible as lists except that it's a lot easier to figure out that a <2-D point instance> is a two-dimensional point than a list with two elements. (You can't even assume that a 2-D point will be a list of two numbers, because Paul goes on to suggest:
Or if you decide to expand in another direction and allow partially evaluated points, you can start using symbols representing variables as components of points, and once again, all the existing code will continue to work.
"All of your existing code will continue to work" only if your existing code isn't built with the tacit assumption that the coordinates of a point are numbers. If you've tried to do math on those coordinates then you're out of luck.
Which brings me to my next point...
Lisp lists are quite flexible, but they are not infinitely malleable. They are a "leaky abstraction." The fact that Lisp lists are linked lists (and not, for example, vectors, as Python lists are) is famously exposed by the fact that CDR is a primitive (and non-consing) operation. Make no mistake, linked lists are monstrously useful, but there are some things for which they are not well suited. In particular, Nth is an O(n) operation on linked lists, which means that if you want to do anything that involves random access to the elements of a list then your code will be slow. Paul recognizes this, and provides hash tables as a primitive data structure in Arc (the lack of which has been a notable shortfall of Scheme). But then he backpedals and advocates association lists as well:
This is called an association list, or alist for short. I once thought alists were just a hack, but there are many things you can do with them that you can't do with hash tables, including sort them, build them up incrementally in recursive functions, have several that share the same tail, and preserve old values.
First, hash tables can be sorted, at least in the sense that association lists can be sorted. Just get a list of the keys and sort them. Or create a sorted-hash-table that maintains an adjunct sorted list of keys. This is not rocket science. But that is not the problem.
The problem is that the functions for accessing association lists are different from those used to access hash tables. That means that if you write code using one you cannot pass in the other, which completely undermines the whole idea of using Arc as an exploratory language. Arc forces you into a premature optimization here.
The Right Thing if you want to support exploratory programming (which to me means not force programmers to make premature commitments and optimizations) is to provide an abstract associative map whose underlying implementation can be changed. To make this work you have to commit to a protocol for associative maps that an implementation must adhere to. The trouble is that designing such a protocol is not such an easy thing to do. It involves compromises. For example, what should the implementation do if an attempt is made to dereference a non-existent key? Throw an exception? Return some unique canonical value? Return a user-supplied default? Each possibility has advantages and disadvantages. The heavy lifting in language design is making these kinds of choices despite the fact that there is no One Right Answer (or even if there is, that it may not be readily apparent).
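To make that concrete, here's a sketch of just one possible choice (a user-supplied default for missing keys) in Common Lisp; LOOKUP and its methods are invented for the example:

(defgeneric lookup (map key &optional default)
  (:documentation "Return the value associated with KEY in MAP, or DEFAULT if it is absent."))

(defmethod lookup ((map hash-table) key &optional default)
  (gethash key map default))

(defmethod lookup ((map list) key &optional default)   ; alist backend
  (let ((pair (assoc key map)))
    (if pair (cdr pair) default)))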
And that is my main gripe about Arc: it has been so long in the making and set such lofty goals and then it seems to pretty much punt on all the hard problems of language design.
Now, as I said, I am mindful of the fact that this is just a first draft. But some Big Decisions do seem to have been made. In particular, it seems a safe bet that Arc will not have an OO layer, which means no generic functions, no abstract data types, and hence no way to build reliable protocols of the sort that would be needed to eliminate the kinds of forced premature optimizations that Arc currently embraces. It also seems a safe bet that it will remain a Lisp-1 without hygienic macros (because Paul seems to regard both hygiene and multiple name spaces as Horrible Hacks). Whether it gets a module system remains to be seen, but it seems doubtful. If you think designing a protocol for abstract associative maps is hard, you ain't seen nothin' yet.
So there it is. Paul, if you're reading this, I'm sorry to harsh on your baby. I really hope Arc succeeds. But I gotta call 'em as I see 'em.
Thursday, January 24, 2008
What the fuck is wrong with this country?
A mother whose two teenage daughters were placed in an orphanage when she fell ill during a post-Christmas shopping trip to New York has been told she is under investigation because her children were taken into care.
Yvonne Bray took her daughters Gemma, 15, and Katie, 13, to New York shortly after Christmas for a shopping trip, but was taken into hospital when she fell ill with pneumonia during their visit.
The girls were then told they could not wait at the hospital and as minors would have to be taken into care.
Social workers took them to a municipal orphanage in downtown Manhattan, where they were separated, strip-searched and questioned before being kept under lock and key for the next 30 hours.
The two sisters were made to shower in front of security staff and told to fill out a two-page form with questions including: "Have you ever been the victim of rape?" and "Do you have homicidal tendencies?"
One question asked "are you in a street gang?" to which Gemma replied: "I'm a member of Appledore library."
Their clothes, money and belongings were taken and they were issued with regulation white T-shirt and jeans. Katie said: "It was like being in a little cage. I tried to go to sleep, but every time I opened my eyes, someone was looking right at me."
Eventually Bray discharged herself, and - still dressed in hospital pyjamas - tracked down the girls.
She said: "It is absolutely horrendous that two young girls were put through an ordeal like that. They were made to answer traumatic questions about things they don't really understand and spend over 24 hours under surveillance."
Since returning home, Bray has received a letter from the US Administration for Children and Families, notifying her that, because the children were admitted to the orphanage, she is now "under investigation."
I've copied the entire article in blatant disregard of fair-use doctrine because this is one story that really ought not be lost to a stale link.
As far as I can tell, not a single US media outlet has picked up the story. Not that I'm surprised by that.
Monday, January 14, 2008
Ford's new strategy: treat your customers like shit
...a law firm representing Ford contacted [the Black Mustang Club] saying that our calendar pics (and our club's event logos - anything with one of our cars in it) infringes on Ford's trademarks which include the use of images of THEIR vehicles. Also, Ford claims that all the images, logos and designs OUR graphics team made for the BMC events using Danni are theirs as well.
<irony>
Seems like an effective business strategy to me. Sure makes me want to run right out and buy a Mustang.
</irony>
John McCain was right. Some of those jobs aren't coming back to Detroit.
A Really Bad Idea (tm)
"In Sony's vision of the future, any two consumer devices will be able to exchange data wirelessly with one another simply by holding them close together. The system is designed for maximum ease of use, which means limited options for controlling the transfers; devices will transfer their contents automatically to another device within range."
Someone at Sony didn't think this through.
Friday, January 11, 2008
Nice work if you can get it
Of course, it's really the Countrywide board of directors that is to blame here. I could have bankrupted Countrywide just as well as Mozilo did (if not better), and I would have happily done it for a mere $50 million.
Let me see your papers redux
Monday, January 07, 2008
Good news, bad news...
Bad news is it's marijuana.
Wednesday, January 02, 2008
If the swastika fits...
John Deady, the co-chair of New Hampshire Veterans for Rudy [Giuliani], is standing by the comments he made in the controversial interview with The Guardian we posted on below, in which he said that "the Muslims" need to be chased "back to their caves."
Here's the scary bit:
When I asked Deady to elaborate on his suggestion that we need to "get rid" of Muslims, Deady said:
"When I say get rid of them, I wasn't necessarily referring to genocide....
Wasn't necessarily referring to genocide? So he might have been referring to genocide?
Even the German Nazis (it's a sad commentary on the state of the world that I need to qualify the term now) were more discreet about their plans to murder all the Jews than this bozo.
Of course, the really scary thing is not John Deady, it's that he can say things like this and not get run out of town on a rail. Does no one remember that we fought a war not too long ago against people like this?
Friday, December 28, 2007
A letter to my house guest
I actually thought for a while that we were going to get along. When you said you believed in freedom and small government I was right there with you. But then you told me that you supported George Bush. How could that be? I asked. You say you support small government, but George Bush has expanded both the size and the power of the government more than any other president in history, and yet you support him? Yes, you said, because George Bush is a man of God, and because he is a man of God, whatever he does must be good. He doesn't lie, because lying is not Godly, and George Bush is a man of God.
OK, well, I may not agree with what you say, but I will fight to the death for your right to say it.
Then you told me about the cancer, and I sympathized. And you told me about how hard it was to get laetrile, and I sympathized, because even though I don't think it's going to do you any good, I think you have the right to put whatever you want into your own body free from government interference. I thought you would agree. But when I asked you if you thought marijuana ought to be illegal you said yes, and when I asked you why, you said because marijuana is a mind-altering substance.
OK, well, I may not agree with what you say and all that...
But then I asked you if you thought alcohol ought to be illegal. You took another sip of your 2003 Old Vines Zinfandel and said that alcohol should be legal. But alcohol is a mind-altering substance too, I said. And with a smirk on your face you replied, "I'm not consistent."
Well, I'm sorry, but that is not OK with me. Because what that means is that you, sir, are not a man of principle. You wrap yourself in the Bible and the flag and speak of duty and honor and natural law, when the fact of the matter is that the only thing that guides you is your own desires. Laetrille and alcohol ought to be legal in your mind not because of any principle, but simply because you want to consume laetrille and alcohol. You complain about liberals being self-centered, when the fact of the matter is that in your mind it's really all about *you*. You support George Bush because he's promised to make *you* safe. By whatever means necessary.
That is not OK with me.
I told you that my grandparents' generation fled Germany for Palestine in the early 1930's. You thought I was a Jew. Hitler would have agreed with you. But I do not consider myself a Jew, and neither did many of my ancestors. I didn't tell you this, but when the Gestapo came knocking on my maternal grandfather's door it came as a shock to him. He actually had no idea that he was Jewish. He thought of himself as German through-and-through. (I know this because I interviewed him a few years before he died and he told me so. I taped the interview. I'd put an audio clip up so you can hear him say it in his own words, but he says it in German so you wouldn't understand it anyway. And besides, you don't read blogs.)
Hitler came to power in much the same way that George Bush did. He was democratically elected. And he then proceeded to do much of what George Bush has done: dismantle the rule of law in favor of a cult of personality based on a promise of security, except that the bad guys back then weren't Al Qaeda and illegal immigrants, they were the Gypsies and the mentally retarded and the homosexuals. Oh, and the Jews. Let's not forget the Jews.
I actually pointed this out to you (more gently, because as I said, I didn't want to make a scene) but I think it went straight over your head: Hitler enjoyed enormous popular support in Germany, much more than George Bush is having (because, frankly, Hitler was a hell of a lot smarter than Dubya). The point is that although Hitler is nowadays considered the very paragon of evil, he was not regarded that way by his contemporaries. In fact, even some of George Bush's ancestors were supporters. Hitler was regarded by most Germans at the time as a great leader, a great patriot, a courageous man who restored Germany's strength and restored her rightful place in the world after the humiliating defeat of World War I. (And if he'd been just a little less reckless he might still be regarded that way today.)
Evil often comes wrapped in a flag and carrying a Bible.
And no, I'm not talking about George Bush. I'm talking about you. Because you wrap yourself in the mantle of principle, but when it comes to the real test you don't live your life according to principle, you live it according to your own desires. You want your Zinfandel and your laetrille and to impose your narrow-minded and bigoted view of the world on everybody else with the force of arms.
And you would deny your own son the right to marry the person he loves just because it doesn't fit your notion of "natural law". (You didn't see the irony when I asked if you thought contraception should be illegal and you said, "Of course not." I didn't really expect you to. But I wasn't surprised. Of course contraception should be legal because, after all, *you* want to be able to use it! And it's all about you, isn't it?) You cause unnecessary suffering to further your own selfish desires and don't bat an eye. In fact, you're proud of it. That, to me, is the definition of evil.
Maybe you get some credit for marching for civil rights back in the 50's, but today, sir, you are a bigot and a hypocrite.
And that is not OK with me.
I welcomed you into my home at the request of your son, who, as I told you, is one of the finest human beings I have ever had the privilege to know. I would share a foxhole with him any day of the week before I would share another drink with you. I welcomed you into my home and you spent the evening spewing your vile right-wing fascist bile while never once asking my opinion about anything. I wonder, if I had behaved that way in your home, would you have extended me the same courtesy? II'd give long odds against it.
I welcomed you into my home and I was civil to you. but make no mistake, I didn't do it for you. I did it for your son.
You don't deserve him.
Peace on earth
"BETHLEHEM, West Bank (AP) -- Greek Orthodox and Armenian priests attacked each other with brooms and stones inside the Church of the Nativity..."
On Christmas. Gotta love the irony.
Tuesday, December 11, 2007
Monday, December 10, 2007
Whack! (Whinny!) Whack! (Whinny!) Whack! Whack! Whack!
Flynn points out that scores in some of the categories—those measuring general knowledge, say, or vocabulary or the ability to do basic arithmetic—have risen only modestly over time. The big gains on the WISC are largely in the category known as “similarities,” where you get questions such as “In what way are ‘dogs’ and ‘rabbits’ alike?” Today, we tend to give what, for the purposes of I.Q. tests, is the right answer: dogs and rabbits are both mammals. A nineteenth-century American would have said that “you use dogs to hunt rabbits.”
“If the everyday world is your cognitive home, it is not natural to detach abstractions and logic and the hypothetical from their concrete referents,” Flynn writes. Our great-grandparents may have been perfectly intelligent. But they would have done poorly on I.Q. tests because they did not participate in the twentieth century’s great cognitive revolution, in which we learned to sort experience according to a new set of abstract categories. In Flynn’s phrase, we have now had to put on “scientific spectacles,” which enable us to make sense of the WISC questions about similarities. To say that Dutch I.Q. scores rose substantially between 1952 and 1982 was another way of saying that the Netherlands in 1982 was, in at least certain respects, much more cognitively demanding than the Netherlands in 1952. An I.Q., in other words, measures not so much how smart we are as how modern we are.
...
[Flynn] looked first at [Richard] Lynn’s data, and realized that the comparison was skewed. Lynn was comparing American I.Q. estimates based on a representative sample of schoolchildren with Japanese estimates based on an upper-income, heavily urban sample. Recalculated, the Japanese average came in not at 106.6 but at 99.2. Then Flynn turned his attention to the Chinese-American estimates. They turned out to be based on a 1975 study in San Francisco’s Chinatown using something called the Lorge-Thorndike Intelligence Test. But the Lorge-Thorndike test was normed in the nineteen-fifties. For children in the nineteen-seventies, it would have been a piece of cake. When the Chinese-American scores were reassessed using up-to-date intelligence metrics, Flynn found, they came in at 97 verbal and 100 nonverbal. Chinese-Americans had slightly lower I.Q.s than white Americans.
...
Flynn took a different approach. The black-white gap, he pointed out, differs dramatically by age. He noted that the tests we have for measuring the cognitive functioning of infants, though admittedly crude, show the races to be almost the same. By age four, the average black I.Q. is 95.4—only four and a half points behind the average white I.Q. Then the real gap emerges: from age four through twenty-four, blacks lose six-tenths of a point a year, until their scores settle at 83.4.
That steady decline, Flynn said, did not resemble the usual pattern of genetic influence. Instead, it was exactly what you would expect, given the disparate cognitive environments that whites and blacks encounter as they grow older.
...
Flynn then talked about what we’ve learned from studies of adoption and mixed-race children—and that evidence didn’t fit a genetic model, either. If I.Q. is innate, it shouldn’t make a difference whether it’s a mixed-race child’s mother or father who is black. But it does: children with a white mother and a black father have an eight-point I.Q. advantage over those with a black mother and a white father. And it shouldn’t make much of a difference where a mixed-race child is born. But, again, it does: the children fathered by black American G.I.s in postwar Germany and brought up by their German mothers have the same I.Q.s as the children of white American G.I.s and German mothers.
Saturday, December 08, 2007
The Democrats Just Don't Get It
"Angry congressional Democrats demanded Friday that the Justice Department investigate why the CIA destroyed videotapes of the interrogation of two terrorism suspects."
Forgive me, but I'm just not very optimistic that having the Justice Department investigate this will do any good. The JD, even (perhaps especially) with Michael Mukasey at the helm, is just as much a lap dog for the administration as the Republicans in Congress. No one gets into this administration without being vetted for loyalty to the Party and the Dear Leader (or perhaps I should spell that deer leader?) This administration is a pseudo-christian cult, and an investigation will do about as much good as having the Church of Scientology investigate itself.
Where are the Congressional subpoenas? Where are the contempt-of-congress indictments for the White House's refusal to comply? The Dems' attitude reminds me of Marvin the Martian.
Wednesday, December 05, 2007
"Proof" that blacks are less intelligent than whites
"Black Americans are 10 times more likely to be imprisoned for illegal drug offenses than whites, even though both groups use and sell drugs at the same rate, according to a study released on Tuesday."
This must be because blacks are genetically less intelligent than whites. What other explanation could there possibly be? I guess Dennis Bider was right all along. I'm so sorry, Dennis. Please come back. I miss you so much.
Saturday, December 01, 2007
So much for the free market
Too bad Denis isn't around any more. I'd love to hear what he had to say about this.
Friday, November 30, 2007
Today's immigrant bashers are the children of illegal immigrants
Let me tell you what I think of you pathetic immigrant bashers. You and your families have no right to be here. You are the descendents of liars, thieves, and genocidal murderers. Your ancestors have no honor. We gave you help, food and shelter when you needed it, and guided Lewis and Clark across the continent. In return, you broke every promise you ever made, shot us in back whenever you could, cut down the forests, killed the wildlife, and stole everything that was not nailed down.
Laws, treaties, boundaries, borders, and promises meant nothing to you if you thought that, as a white man, you deserved to have it. Gold on our sovereign land, here comes the white man! Shortcut to the West, we don't need to pay any damn Indians no damn toll fees. There is very little moral difference between your ancestors' actions and some gang member who is helping himself to your grandma's wallet. A squatter is a squatter. So I tell you squatters to get off your high horse.
I think it is worth noting that Kelly admonishes the immigrant bashers to get off their high horse, but not to go back where they came from.
Barbarians
"Thousands of protesters, many brandishing clubs and swords, took to the streets of Sudan’s capital Friday, demanding the execution of a British teacher who let her students name a teddy bear Muhammad."
You need to bury your head pretty deep in the sand to say today that Islam is a religion of peace.
It may be a good thing that no one reads my blog, or this post might actually be putting my life at risk.
Too little too late
"The President has no authority to unilaterally attack Iran and if he does, as foreign relations committee chairman, I will move to impeach."
Terrific. You're going to wait until after we've gotten sucked into yet another quagmire in the middle east to impeach this bastard? This isn't closing the barn door after the horses have left, this is waiting until they've ridden over the horizon to start walking towards the barn.
Sheesh.
The abstinence-only folks are going to love this
"While past research has linked early sexual activity to health problems, a new study suggests that waiting too long to start having sex carries risks of its own. Those who lose their virginity at a later age -- around 21 to 23 years of age -- tend to be more likely to experience sexual dysfunction problems "
Wednesday, November 28, 2007
Can you say "entrapment"?
A small victory for civil rights
U.S. prosecutors have withdrawn a subpoena seeking the identities of thousands of people who bought used books through online retailer Amazon.com Inc., newly unsealed court records show.
The withdrawal came after a judge ruled the customers have a right to keep their reading habits from the government.
In one of the most poetic metaphors I have ever read in a legal ruling, Judge Stephen Crocker wrote:
"The (subpoena's) chilling effect on expressive e-commerce would frost keyboards across America."
Tuesday, November 27, 2007
Here's one we can all agree on
Sunday, November 25, 2007
Neigh!!!
Psychogenetic fallacy:
Flynn is one of those people who helped identify something new and fundamental, and then go on living their lives denying the best explanation because they would like it to be different.
Ad hominem:
I've long since accepted that you won't be converted to the rational outlook. You are a believer; your spiritual well-being rests on whether a certain hypothesis is right.
Straw man:
Stop trying to persuade everyone how unbiased you are
(I have never tried to persuade anyone that I am unbiased. To the contrary, I have been very up-front about my biases.)
I'm not quite sure how to categorize this one:
People form a hypothesis based on what they would like reality to be; not based on what the naked facts tell them; and so they spend decades trying to find out that group selection for lower populations would favor individual restraint in breeding, only to eventually find out that group selection for lower populations leads to cannibalism victimizing especially young females.
How on earth did we get from the genetics of IQ to cannibalism? Argument by gibberish? Non-sequitur? Maybe I'll just call that one the razzle dazzle.
Here's another one that's challenging to categorize:
The Flynn effect is best explained by heterosis
Even if it were true, just because heterosis is the best currently available explanation doesn't mean it is correct. (Ironically, even Bider himself concedes in the post he links to that heterosis is a hypothesis that has not been adequately tested.) The history of science is rife with examples of "best explanations" that turned out to be wrong.
Bider's second comment is the most colossal straw man I have ever encountered (and that's saying something). I think it says something about Bider's insecurity in his position that he feels the need to not only put words in my mouth but thoughts in my brain and feelings in my heart. This one is particularly ironic:
You are a person who has been cursed with an intolerance for other people's suffering.
because people who actually know me consistently tell me that one of my problems is that I don't empathize enough with those around me.
Finally,
Did desegregation of schools significantly narrow the black-white educational gap? [No, therefore] so much for environment.
This is another straw man, and a truly offensive one. I have never argued (and would never argue) that exposure to people with white skin is the aspect of the environment most responsible for anyone's academic success or lack thereof.
Don Geddis writes:
First of all, the article you link to doesn't support your summary.
My summary was that Flynn believes that "the evidence supporting the hypothesis that intelligence is primarily genetic is weak." I suppose I should have been more careful to distinguish, as Flynn does, between direct and indirect genetic effects. For example, there is no question that skin color is genetic, so if skin color interacts with some environmental factor (like societal bias) to produce some effect one could say that's a genetic influence. Technically it would be true, but I think that would be a perversion of what people generally mean when they say a trait is genetic. It's certainly not what I mean when I say that the evidence that IQ is genetic is weak, and I'm pretty sure Flynn would agree.
Once you get to average, US, major metropolitan levels of education, nutrition, etc., then the IQ variation due to genetics approaches 100%. It is not the case that the smarter kids were read to more, or went to the museum more often, or watched TV less. It is the case that the smarter kids typically had smarter parents.
If this were really true then that would convince me. But I doubt it's really true. In particular, as I have pointed out before, it is impossible to do a properly controlled study to test the effect of race on IQ because it is impossible to control for the societal bias to skin color. The only way to test the hypothesis is to use two racial groups whose IQ's supposedly differ but whose outward appearance is the same, like jewish and non-Jewish caucasians. (You'd need to have both groups raised in similar cultural circumstances, i.e. either all as Jews or all as non-Jews -- and preferably randomized across both cases.) Until you've done a study like that all you have is more pink flamingos.
Here's another Flynn example that is worth pondering: If on the basis of their genetic inheritance, separated-twin pairs are tall, quick, and athletically inclined, both members are likely to be interested in basketball, practice assiduously, play better, and eventually attract the attention of basketball coaches capable of transforming them into world-class competitors. Other twin pairs, in contrast, endowed with shared genes that predispose them to be shorter and stodgier than average will display little aptitude or enthusiasm for playing basketball and will end up as spectators rather than as players.
The trouble with basketball as an analogy to IQ is that height is an easily observable physical trait that obviously has a causal relationship with potential success as a basketball player. Furthermore, height has both genetic and environmental factors. All this is known and uncontroversial. The trouble is, whether or not externally observable and heritable traits are similarly correlated with intelligence is precisely what is at issue here. So there are no lessons to be drawn from the basketball analogy for the matter at hand (except that it is easy to oversimplify).
Just for the record let me make it clear where I stand. I do not dispute that IQ could be genetic. In fact, I think it almost certainly has some genetic component. The question is how much is genetic and how much is environmental, and, importantly, how much is due to complex interplays between genes and environmental and societal factors, and as long as we're being exhaustive, how much is due to the imprecision and multi-dimensionality of intelligence, and how much is just plain random. My position is merely that the currently available data do not justify the conclusion that direct genetic factors (i.e. the direct transcription of DNA into proteins that build bigger or more effective brains) are the dominant factor. Note that I do not say that this is not the case, merely that the data we have don't justify the conclusion. I will also say, as I have said before, that I do think it would be unfortunate if we did have conclusive data to support this position, and that the human condition would on the whole be the worse for it. And people's eagerness to adopt the conclusion that intelligence is genetic, and to vilify anyone who doesn't join them in their prejudice, even in the absence of conclusive evidence does nothing to dissuade me from this belief.