First, the idea that "code is data" is a popular aphorism, but even the most casual reflection on what this slogan actually means will reveal that it can't possibly be the right answer. Yes, it is true that, in Lisp, code is data. But this is true of all programming languages! If programs weren't data, how could a compiler possibly work?
What makes Lisp cool is not that programs are data (because all programs are data), but that they are a particular kind of data. In most programming languages, programs are strings. Strings are in fact data. In Lisp, programs are not strings, they are linked lists (that happen to have a string representation). And this turns out to make all the difference.
I want to be very clear about what I mean when I say that Lisp programs are linked lists, because this is really a very subtle point. It's hard to explain, which is one of the reasons that it is very rarely explained well. Ironically, part of the problem is that once you understand it, it seems trivial and obvious. (Everything is easy once you know how.) But if you don't already understand it, it can be hard to get over the hump. So depending on which side of this divide you fall on, what I am about to say might sound like I'm belaboring the obvious, in which case I would ask you to try to remember back to the time before you understood all this (I know there was such a time because no one is born understanding linked lists).
The fundamental problem with trying to explain this is that the only tool I have at my disposal to communicate with you is text. Your eyes scan this page, parse the black markings on the white background, and interpret those markings as letters and punctuation marks. Your mind then further groups those letters into words, words into sentences, and sentences into concepts. You do all this effortlessly now, but back in the day, before you knew how to read, it was hard work. It takes a similar kind of "hard work" to read Lisp. But like learning to read natural language, it pays dividends.
So here's a little snippet of Lisp code:
(defun rsq (x y) (sqrt (plus (times x x) (times y y))))
This code defines a function called RSQ which computes the square root of the sum of the squares of two numbers, but that is beside the point. What matters is that there are two very different ways to interpret the combination of letters and punctuation marks on the line above:
1. As a string of characters. 55 of them to be precise (if you count spaces).
2. As a thing with structure that is defined by the parentheses.
This is a little easier to see if we write something that has the same structure but without the evocative words:
(a b (c d) (e (f (g c c) (g d d))))
This makes the structure a little easier to see. What you are looking at can be interpreted either as a string of characters (35 of them in this case) or as a list of more abstract elements. This particular list has four elements. The first element is the letter "a". The second element is the letter "b". But the third element is not a letter, it is another list. That list has two elements, each of which is a letter ("c" and "d"). The fourth element is also a list. It has two elements: the letter "e" and yet another list, which in turn has three elements, two of which are themselves lists.
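To make the two interpretations concrete, here is a sketch in Python (the same language as the interpreter in footnote [1]), with nested Python lists standing in for Lisp's linked lists:

```python
# The same expression, viewed two ways.
as_string = "(a b (c d) (e (f (g c c) (g d d))))"
as_list = ["a", "b", ["c", "d"], ["e", ["f", ["g", "c", "c"], ["g", "d", "d"]]]]

print(len(as_string))  # 35 -- as a string: just a count of characters
print(len(as_list))    # 4  -- as a list: four elements
print(as_list[2])      # ['c', 'd'] -- the third element is itself a list
```

Same marks on the page, two completely different kinds of data.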
There is an extra wrinkle in the original example, which is that sequences of adjacent letters like "defun" and "sqrt" are also considered "one thing", or a single element of the list. So the original example, like the second, is also a list of four elements, but the first element is not a single letter, but a "group" of letters. In Lisp these groups are called "symbols", and like lists, they are first-class data types.
The reason this is hard to explain is that strings and lists are fundamentally different things even though they look the same when you write them out this way. What I've written above are really strings, but your brain interprets those strings as lists once you've been trained to interpret the parentheses and the letter groupings and spaces in the right way. But what a linked list really is is something completely different. It's a pattern of bits in memory. You can talk about that by dumping the contents of memory and talking about how some bit patterns can be interpreted as pointers that refer to other parts of memory, or by drawing boxes and arrows.
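For the curious, the boxes-and-arrows picture itself can be sketched in a few lines of Python, with each cons cell modeled as a (head, tail) pair and nil as None. This is only an illustration of the pointer structure, not how any real Lisp lays out memory:

```python
# Each cell holds a value and a reference to the next cell; None plays the role of nil.
def cons(head, tail):
    return (head, tail)

# The list (c d) is two cells chained together:
cd = cons("c", cons("d", None))

print(cd[0])     # 'c' -- the head of the first cell
print(cd[1])     # ('d', None) -- the rest of the list
print(cd[1][0])  # 'd' -- the head of the second cell
```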
But all of those details are a distraction too. What really matters is that by thinking of code as a linked list instead of as a string of characters you can manipulate that code easily in terms of components that are semantically meaningful.
Here's an example of what I mean by this. Consider the following snippet of C code:
int main(int argc, char* argv[]) { ...
Now suppose you want to analyze this code. We want to extract, say, the name of the function being defined ("main") and its arguments ("argc" and "argv"). In C this is an advanced exercise; you have to actually parse the code. But in Lisp it is utterly trivial. If I consider this code:
(defun rsq (x y) (sqrt (plus (times x x) (times y y))))
as a list, then to get its name all I have to do is extract the second element of the list. And all I have to do to get the arguments is take the third element. And the functions to do that are built into Lisp, so I literally don't have to write any code!
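(In Common Lisp those built-ins are SECOND and THIRD, a.k.a. CADR and CADDR.) A Python sketch of the same idea, using the nested-list representation with symbols shown as strings purely for illustration:

```python
# The RSQ definition as a nested list.
form = ["defun", "rsq", ["x", "y"],
        ["sqrt", ["plus", ["times", "x", "x"], ["times", "y", "y"]]]]

name = form[1]  # the second element is the function's name
args = form[2]  # the third element is its argument list

print(name)  # rsq
print(args)  # ['x', 'y']
```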
Not only that, but the parsing process that converts the string representation of the list (called an S-expression) to the internal representation of the actual linked list data structure is also trivial. Parsing S-expressions is super easy. You don't need a grammar or a parser generator, all you need is -- and this is no exaggeration -- a few dozen lines of code in just about any programming language [1]. And going the other way -- printing them back out -- is even easier.
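To back up that claim, here is a sketch of such a parser in Python, in the spirit of the interpreter in [1]. It is roughly two dozen lines, with error handling omitted:

```python
def tokenize(text):
    """Split an S-expression string into tokens."""
    return text.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    """Read one expression from a list of tokens (consuming them)."""
    token = tokens.pop(0)
    if token == "(":
        lst = []
        while tokens[0] != ")":
            lst.append(parse(tokens))
        tokens.pop(0)  # discard the closing ")"
        return lst
    return token  # an atom; numbers are left as strings for simplicity

def read(text):
    return parse(tokenize(text))

def unparse(form):
    """Going the other way: print a parsed form back out as an S-expression."""
    if isinstance(form, list):
        return "(" + " ".join(unparse(x) for x in form) + ")"
    return form

source = "(defun rsq (x y) (sqrt (plus (times x x) (times y y))))"
form = read(source)
print(form[1])                   # rsq
print(unparse(form) == source)   # True -- the round trip is lossless
```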
This, then, is the magic of Lisp. It's a local minimum in the amount of effort that it takes to parse and manipulate code in semantically meaningful chunks at the "cost" of having to write code that looks a little bit weird when you first encounter it. But this feeling quickly goes away when you realize that this weirdness is not arbitrary. Those parens are where they are for a reason, namely, to make the syntax easy, even trivial, to parse. Lisp was originally proposed with a more traditional syntax in addition to S-expressions, and nearly every Lisp programmer has proposed and implemented their own (it's almost a rite of passage). None of them have ever caught on because S-expressions are a huge win once you get even just a little bit used to them. They let you do things easily that are really really hard in other languages. In particular, they make writing compilers so easy that doing so becomes a regular part of doing business in Lisp rather than an abstruse specialty that only a select few engage in.
And now I have to go fix some code so that it automatically generates a backtrace whenever it encounters an error, logs it, and then continues its computation as if the error had not occurred (because it's running inside an event loop where actually throwing an error would be catastrophic). I expect this will take me about fifteen minutes because I have this in my toolbox.
---
[1] Yes, that's more than a dozen lines of code, but that's because what you see there is a complete Lisp interpreter, not just an S-expression parser. The parser is at the bottom. It's 30 LOC.
Prolog is slightly better because its macro language is the same as the programming language; Lisp macros are a bit different from the programming language. Syntactically, Prolog is what you'd get if LISP wrote foo(a, b) instead of (foo a b) (yes, I know that they're not exactly the same; but they're close enough). Lisp had to invent a syntactic form for macros, whereas Prolog has logic variables, pattern-matching and backtracking built in.
(BTW, you forgot to mention special forms, which are LISP's other trick)
> Prolog is slightly better because its macro language is the same as the programming language; Lisp macros are a bit different from the programming language.
No, that's not true. The macro language for Lisp is Lisp. What makes you think it's any different?
Prolog is just Lisp with unification and backtracking search integrated into the language.
There's a concept in Compiler Theory called an Abstract Syntax Tree wherein a program's text is broken down into a fundamental tree-like representation (very similar to Lisp's S-Expression structure).
When you're claiming that a program in Lisp is actually a List (and that iterating / parsing it is easy), what you're actually claiming is that it's easy to go from the textual representation in Lisp to the abstract syntax tree representation (and that the AST is actually nothing more than a nested List). Points to that - you're right.
On the flip side, most code is meant to be read by humans (not machines / programs). Code readability matters. What Lisp gains in representation flexibility and transformability, it loses in ease of expression. Again, you're right - Lisp was made to be easily parsed (and it does that job well).
Modern programming languages, however, structure code for humans. They introduce syntax rules to increase the density of expression of lines of code at the cost of AST generation / expressivity / parseability. Here, I agree with the sentiment of your post: every language should provide an AST parser, macro system, and code generator as first-class citizens of the language or standard library (which would achieve the same goal as for Lisp), and we can all move on from discussing syntax differences to more interesting things like category theory. The problem is that they still don't, and that should be considered unacceptable by the programming community.
Hopefully, these things will change as the years go by. Python is already shipping with the AST module, there are self-hosted compilers already available for some languages (eg. Javascript). I'm looking forward to the future where such libraries are the norm and not the exception. (end-rant; opinions my own, not my employer's)
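For what it's worth, with that ast module the extraction the post describes for C is already just a few lines. A sketch, with a simplified (untyped) Python analog of the C signature and the function body elided:

```python
import ast

# Parse a function definition and pull out its name and argument names.
tree = ast.parse("def main(argc, argv): ...")
fn = tree.body[0]

print(fn.name)                        # main
print([a.arg for a in fn.args.args])  # ['argc', 'argv']
```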
>> Code readability matters
This is usually thrown around by non-Lispers who have hardly written any substantial amount of Lisp code (@Divye Kapoor, while I say this, I am not accusing you of that, in case it comes across like that).
And if code readability matters, why is it still hard to read code!!! Millions of lines of monstrous code bases, all in the name of readability. If we assume Lisp has no code readability (which is completely untrue once you get used to it after a few hours of writing Lisp code with an editor that manages parens for you), then on the flip side these languages with supposed code readability power have lost out on the expressiveness and power that Lisp provides.
Can you please cite an equivalent problem solved in Lisp and X language (non Lisp family) and show at least anecdotally that Lisp code for the same is not readable?
>> On the flip side, most code is meant to be read by humans (not machines / programs)
At the cost of what? Less power, and hours spent trying to understand mindless syntax because the language designer lacks taste? Most code is also meant, first, to be written productively with the right abstractions, which are easy to implement; only then does it become more readable. That syntax makes code readable is a very weak argument indeed.
>> On the flip side, most code is meant to be read by humans (not machines / programs) ...
Again, that API will most likely be hard to use and not used much at all, and if you try fixing it you will end up with another language in the Lisp family!!!
@Divye
> Modern programming languages, however, structure code for humans. They introduce syntax rules to increase the density of expression of lines of code at the cost of AST generation
Higher density != increased readability. If that were so, APL would be the ultimate programming language. (Also you should read this.)
One of the many cool things about Lisp is that you can change the surface syntax, not just via macros, but also through reader macros. In this way you can embed any syntax you like within Lisp. For example, take a look at this code. Scroll down to the function xpt-add. You will see infix syntax seamlessly integrated into Lisp code. (If you want to see how this trick is done, the secret is here.)
I'm actually pretty obsessed with writing readable code. I find it easier to write readable code in Lisp than any other language precisely because the syntax is so malleable. Take a look at this for example. I think it's pretty readable. (If you don't agree, I'd welcome constructive suggestions on how to improve it.)
> You don't need a grammar
This is a false statement. The only language for which there is no grammar is the trivial case: {}. That is the language which contains no strings; the null set.
> all I have to do is extract the second element of the list
No, you have to parse the string using a parser which is built from a grammar.
It may be easy, or relatively minimal, but it's wrong to say that no parsing occurs.
If homoiconic means that Lisp code is linked lists, then Lisp is not homoiconic, since Lisp programs are strings just like every other programming language.
> > You don't need a grammar
> This is a false statement.
No, it's a true statement. Take a look at the ANSI Common Lisp spec and try to find the grammar that defines the syntax. There isn't one.
> it's wrong to say that no parsing occurs.
And I didn't say that no parsing occurs. Obviously it does. But it doesn't use a grammar, at least not in Common Lisp.
> If homoiconic means that Lisp code is linked lists
That is not what it means.
> Lisp programs are strings
No, they really aren't. Again, look at the ANSI Common Lisp spec, specifically the section on evaluation, which defines the semantics of the language. You will see that it is defined in terms of *forms*, which are lists, numbers, vectors and symbols, not strings. (Well, OK, strings are forms too, but they just evaluate to themselves.)
> > Lisp programs are strings
> No, they really aren't.
Yes, they are. When you write a Lisp program, you encode bytes to a file, same as any other programming language.
If you're going to claim that "In most programming languages, programs are strings.", then you can't say that Lisp programs aren't also strings.
> > it's wrong to say that no parsing occurs.
> And I didn't say that no parsing occurs.
Is that not what you meant here?:
> In C this is an advanced exercise; you have to actually parse the code. But in Lisp it is utterly trivial.
This implies that in Lisp, you don't have to "actually parse the code".
> But it doesn't use a grammar, at least not in Common Lisp.
You can't parse without a grammar. How would you know what the rules of parsing are?
The only way to know if a string is in a language without parsing is if you have an exhaustive list of every string in the language, and you check if a given string is equal to any string in that list.
Clearly this isn't happening in that py file, so the parser is using rules (i.e. a grammar) to decide if the string that is currently being parsed is part of the language.
Just because the grammar isn't written out explicitly in EBNF doesn't mean that there isn't a grammar.
> When you write a Lisp program, you encode bytes to a file
That's one way to write a Lisp program. It's not the only way.
> Is that not what you meant here?:
No. I meant what I said. Go back and re-read my words carefully.
> You can't parse without a grammar
Of course you can. That's manifestly true because Lisp code is parsed, and yet Lisp does not have a grammar (except in a way that renders the claim vacuous).
> How would you know what the rules of parsing are?
What makes you think that you know what the rules are? The Common Lisp parser is *programmable*. (Go look up "reader macros".) You *can't* know what the rules are because the rules can change at run time. To know the rules of Lisp parsing you would have to solve the halting problem.
> The only way to know if a string is in a language without parsing...
Who said anything about "without parsing"? We've already agreed that Lisp is parsed.
> Just because the grammar isn't written out explicitly in EBNF doesn't mean that there's isn't a grammar.
That's true, but it's a vacuous claim because every Turing machine has an equivalent context-sensitive grammar. When people say that a language has a grammar, they mean that the grammar is explicitly written out, almost always in BNF, and that it doesn't change because the grammar is part of the language definition. That is not the case in Lisp. The only invariant in the definition of Lisp is its evaluation rules, and those are defined on forms, not strings. *That* is the key difference between Lisp and all other languages.
> You *can't* know what the rules are because the rules can change at run time.
Are you saying that the following can be a valid Lisp program?
)
@Dominic:
> Are you saying that the following can be a valid Lisp program?
Yes.
Welcome to Clozure Common Lisp Version 1.11-r16812M (DarwinX8664)!
? (set-macro-character #\) (lambda ... ; Filling this in is left as an exercise
...
? )
") is now a valid Lisp program"
Not only does this make close-paren a valid program, it doesn't even interfere with regular Lisp syntax. It just "fills in" a part of the syntax that was previously an error.
? (list 1 2 3) )
(1 2 3)
?
") is now a valid Lisp program"
?
I didn't ask if
? (set-macro-character #\) (lambda ... ; Filling this in is left as an exercise
...
? )
") is now a valid Lisp program"
? (list 1 2 3) )
can be a valid Lisp program.
I asked if
)
can be a valid Lisp program.
@Dominic:
> I didn't ask if...
Then you are being a pedant. Life is too short for that.
@Ron:
> I'm actually pretty obsessed with writing readable code. I find it easier to write readable code in Lisp than any other language precisely because the syntax is so malleable. Take a look at this for example. I think it's pretty readable. (If you don't agree, I'd welcome constructive suggestions on how to improve it.)
What's the start-up cost for tracing macros when you have 1, 2, or 3 levels of macros/rewriting? For other work on this you could see OMeta, developed by Ian Piumarta in 2007 under Alan Kay's Viewpoints Research Institute. If you look at VPRI's 2007 STEPS Toward The Reinvention of Programming (pdf), you can see a ridiculously small TCP stack which keys off the RFC using multiple levels of rewrite; start at page 44, Appendix E.
A while ago, an old boss of mine was working on Kalman filters for quadcopters. Actually, a Kalman filter mixed with geometric algebra. Anyhow, he had TeXed the equations and tried writing something which would convert the TeX to OCaml and C. The OCaml for running on computers, the C for running on the quadcopter. It ended up being too hard to debug the parsing pipeline (time was short), so he had to scrap that project and do it the normal, error-prone way where you have to use human brains to maintain a mapping between representations. Having better tools for this would be very nice. One of the problems, he tells me, is that representing mathematical expressions is not actually the easiest of tasks. (I forget the details, but it would be cool to have you two meet sometime—he's in Pasadena.)
Anyhow, in terms of maintainability, being able to track what I'm calling "rewrites" is a Big Deal. Are you depending on a significant amount of LISP experience or a very specific psychological makeup in order to make this work? Where I'm really going with all this is that better tooling might help with this, and maybe such tooling already exists? We're starting to see this sort of thing whereby you can write TypeScript and the Chrome debugger will let you step through the Typescript even when it's actually interpreting Javascript (or executing bytecode).
My own interest here is in the ease of making DSLs (Domain-Specific Languages), as well as the more general idea of easily switching between representations (this is related to category theory—see e.g. Category Theory as a Unifying Database Formalism). In mathematics it can be extremely valuable to perform a "change of basis"; I think that generalizes to programming and beyond.