Use Common Lisp for SICP, or is Scheme the only option?

common-lisp, lisp, scheme, sicp

Also, even if I can use Common Lisp, should I? Is Scheme better?

Best Answer

You have several answers here, but none is really comprehensive (and I'm not talking about having enough details or being long enough). First of all, the bottom line: you should not use Common Lisp if you want to have a good experience with SICP.

If you don't know much Common Lisp, then just take that at face value. (Obviously you can disregard this advice like any other; some people only learn the hard way.)

If you already know Common Lisp, then you might pull it off, but only at considerable effort, and with considerable damage to your overall learning experience. There are some fundamental issues that separate Common Lisp and Scheme, which make trying to use the former with SICP a pretty bad idea. In fact, if you have the knowledge level to make it work, then you're likely above the level of SICP anyway. I'm not saying that it's not possible -- it is of course possible to implement the whole book in Common Lisp (for example, see Bendersky's pages) just as you can do so in C or Perl or whatever. It's just going to be harder with languages that are further removed from Scheme. (For example, ML is likely to be easier to use than Common Lisp, even though its syntax is very different.)

Here are some of these major issues, in increasing order of importance. (This list is not exhaustive in any way; I'm sure there are a whole bunch of additional issues that I'm omitting here.)

  1. NIL and related issues, and different names.

  2. Dynamic scope.

  3. Tail call optimization.

  4. Separate namespace for functions and values.

I'll expand now on each of these points:

The first point is the most technical. In Common Lisp, NIL is used both as the empty list and as the false value. In itself, this is not a big issue, and in fact the first edition of SICP made a similar assumption -- the empty list and false were the same value. However, Common Lisp's NIL is still different: it is also a symbol. So in Scheme you have a clear separation: something is either a list, or one of the primitive types of values -- but in Common Lisp, NIL is not only false and the empty list: it is also a symbol. On top of this, you get a host of slightly different behaviors -- for example, in Common Lisp the head and the tail (the car and cdr) of the empty list are themselves the empty list, while in Scheme you'll get a runtime error if you try that. To top it off, you have different names and naming conventions, for example -- predicates in Common Lisp end by convention with P (eg, listp) while predicates in Scheme end in a question mark (eg, list?); mutators in Common Lisp have no specific convention (some have an N prefix), while in Scheme they almost always have a ! suffix. Also, plain assignment in Common Lisp is usually setf, and it can operate on combinations too (eg, (setf (car foo) 1)), while in Scheme it is set! and is limited to setting bound variables only. (Note that Common Lisp has the limited version too, called setq. Almost nobody uses it though.)
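To make the NIL differences concrete, here is a minimal sketch of a Common Lisp listener session (the printed results assume any conforming implementation, eg SBCL):

    (eq '() nil)        ; => T    -- the empty list is the very same object as the symbol NIL
    (symbolp nil)       ; => T    -- and it is a symbol
    (if '() 'yes 'no)   ; => NO   -- and it also counts as false
    (car nil)           ; => NIL  -- car/cdr of the empty list are just the empty list
    (cdr nil)           ; => NIL

In Scheme, by contrast, #f, '() and symbols are three disjoint kinds of values, '() is not false, and (car '()) signals an error.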

The second point is a much deeper one, and possibly one that will lead to completely incomprehensible behavior of your code. The thing is that in Common Lisp, function arguments are lexically scoped, but variables that are declared with defvar are dynamically scoped. There is a whole range of solutions that rely on lexically scoped bindings -- and in Common Lisp they just won't work. Of course, the fact that Common Lisp has lexical scope means that you can get around this by being very careful about new bindings, and possibly by using macros to get around the default dynamic scope -- but again, this requires much more extensive knowledge than a typical newbie has. Things get even worse than that: if you declare a specific name with a defvar, then that name will be bound dynamically even where it appears as a function argument. This can lead to extremely difficult-to-track bugs which manifest themselves in an extremely confusing way (you basically get the wrong value, and you'll have no clue why that happens). Experienced Common Lispers know about it (especially those who have been burnt by it), and will always follow the convention of putting stars around dynamically scoped names (eg, *foo*). (And by the way, in Common Lisp jargon, these dynamically scoped variables are simply called "special variables" -- which is another source of confusion for newbies.)
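As a small illustration of the kind of surprise this causes -- a sketch only, with purely illustrative names x and *add5*:

    (defvar x 10)             ; X is now globally proclaimed SPECIAL

    (defun make-adder (x)     ; so this parameter X is dynamically bound,
      (lambda (y) (+ x y)))   ; not a lexical variable captured by the closure

    (defparameter *add5* (make-adder 5))
    (funcall *add5* 1)        ; => 11, not 6: the lambda sees the current dynamic
                              ;    value of X (the global 10), not the argument 5

Renaming the defvar'd variable to *x*, per the starred convention, keeps the parameter x lexical, and the closure then behaves as expected.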

The third point was also discussed in some of the previous comments. In fact, Rainer had a pretty good summary of the different options that you have, but he didn't explain just how hard it can make things. The thing is that proper tail-call optimization (TCO) is one of the fundamental concepts in Scheme. It is important enough that it is a language feature rather than merely an optimization. A typical loop in Scheme is expressed as a tail-calling function (for example, (define (loop) (loop))), and proper Scheme implementations are required to implement TCO, which guarantees that this is, in fact, an infinite loop rather than something that runs for a short while until you blow up the stack space. This is the essence of Rainer's first non-solution, and the reason he labeled it as "BAD". His third option -- rewriting functional loops (expressed as recursive functions) as Common Lisp loops (dotimes, dolist, and the infamous loop) -- can work for a few simple cases, but at a very high cost: the fact that Scheme is a language that does proper TCO is not only fundamental to the language -- it is also one of the major themes in the book, so by doing so, you will have lost that point completely.

In addition, there are some cases where you just cannot translate Scheme code into a Common Lisp loop construct. For example, as you work your way through the book, you'll get to implement a meta-circular interpreter, which is an implementation of a mini-Scheme language. It takes a certain click to realize that this meta-evaluator implements a language that itself does TCO only if the language you implement the evaluator in does TCO. (Note that I'm talking about the "simple" interpreters -- later in the book you implement this evaluator as something close to a register machine, where you kind of explicitly make it do TCO.) The bottom line of all of this is that this evaluator -- when implemented in Common Lisp -- will result in a language that itself does not do TCO. People who are familiar with all of this should not be surprised: after all, the "circularity" of the evaluator means that you're implementing a language with semantics that are very close to the host language -- so in this case you "inherit" the Common Lisp semantics rather than the Scheme TCO semantics. However, this means that your mini-evaluator is now crippled: it has no TCO, so it has no way of doing loops! To get loops in, you will need to implement new constructs in your interpreter, which will usually use the iteration constructs in Common Lisp. But now you're going further away from what's in the book, and you're investing considerable effort in approximately porting the ideas in SICP to a different language. Note also that all of this is related to the previous point I raised: if you follow the book, then the language that you implement will be lexically scoped, taking it further away from the Common Lisp host language. So overall, you completely lose the "circular" property in what the book calls a "metacircular evaluator". (Again, this is something that might not bother you, but it will damage the overall learning experience.) All in all, very few languages come close to Scheme in being able to implement the semantics of the language inside the language as a non-trivial (eg, not using eval) evaluator that easily.

In fact, if you do go with Common Lisp, then in my opinion Rainer's second suggestion -- use a Common Lisp implementation that supports TCO -- is the best way to go. However, in Common Lisp this is fundamentally a compiler optimization, so you will likely need to (a) know which knobs to turn in your implementation to make TCO happen, (b) make sure that the implementation actually does proper TCO, and not just optimization of self-calls (which is the much simpler case, and not nearly as important), and (c) hope that the implementation can do TCO without damaging debugging options (again, since this is considered an optimization in Common Lisp, turning this knob on might also be taken by the compiler as saying "I don't care much about debuggability").
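To see what is at stake, here is a sketch of the same loop in both dialects. The Scheme version (shown in comments) must run in constant stack space in any conforming implementation; whether the literal Common Lisp translation does is entirely up to the implementation and its settings:

    ;; Scheme -- required by the standard to run in bounded space:
    ;;   (define (count-down n)
    ;;     (if (= n 0) 'done (count-down (- n 1))))
    ;;   (count-down 100000000)
    ;;
    ;; The literal Common Lisp translation:
    (defun count-down (n)
      (if (= n 0)
          'done
          (count-down (- n 1))))
    ;; (count-down 100000000) may exhaust the control stack unless the compiler
    ;; performs tail-call elimination -- SBCL, for instance, typically does so
    ;; under its default optimization settings, but high DEBUG settings disable
    ;; it, and the Common Lisp standard never requires it.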

Finally, my last point is not too hard to overcome, but it is conceptually the most important one. In Scheme, you have a uniform rule: identifiers have a value, which is determined lexically -- and that's it. It's a very simple language. In Common Lisp, in addition to the historical baggage of sometimes using dynamic scope and sometimes using lexical scope, you have symbols with two different values -- there's the function value that is used whenever a name appears at the head of an expression, and there is a different value that is used otherwise. For example, in (foo foo), each of the two instances of foo is interpreted differently -- the first is the function value of foo and the second is its variable value. Again, this is not hard to overcome -- there are a number of constructs that you need to know about to deal with all of this. For example, instead of writing (lambda (x) (x x)) you need to write (lambda (x) (funcall x x)), which makes the function that is being called appear in a variable position, so the same value will be used there; another example is (map car something), which you will need to translate to (map #'car something) (or, more accurately, you will need to use mapcar, which is Common Lisp's equivalent of the map function); yet another thing you'll need to know is that let binds the value slot of a name, and labels binds the function slot (and has a very different syntax, just as defun and defvar do). But the conceptual result of all of this is that Common Lispers tend to use higher-order code much less than Schemers, and that goes all the way from the idioms that are common in each language to what implementations will do with them. (For example, many Common Lisp compilers will never optimize the call (funcall foo bar), while Scheme compilers will optimize (foo bar) like any other function call expression, because there is no other way to call functions.)
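Here is a short sketch of the namespace difference; twice is a hypothetical helper used only for illustration:

    ;; Common Lisp: functions and ordinary values live in separate namespaces,
    ;; so calling a function held in a variable requires FUNCALL, and naming a
    ;; function as a value requires #' (ie, the FUNCTION special form).
    (defun twice (f x)
      (funcall f (funcall f x)))
    (twice #'1+ 3)                     ; => 5
    (mapcar #'car '((1 2) (3 4)))      ; => (1 3)

    ;; The corresponding Scheme, shown as a comment, needs neither FUNCALL nor #':
    ;;   (define (twice f x) (f (f x)))
    ;;   (twice (lambda (n) (+ n 1)) 3)   ; => 5
    ;;   (map car '((1 2) (3 4)))         ; => (1 3)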

Finally, I'll note that much of the above is very good flamewar material: throw any of these issues into a public Lisp or Scheme forum (in particular comp.lang.lisp and comp.lang.scheme), and you'll most likely see a long thread where people explain why their choice is far better than the other, or why some "so-called feature" is actually an idiotic decision made by language designers who were clearly very drunk at the time, etc. But the thing is that these are just differences between the two languages, and eventually people can get their job done in either one. It just happens that if the job is "doing SICP", then Scheme will be much easier, since the book approaches each of these issues from the Scheme perspective. If you want to learn Common Lisp instead, then going with a Common Lisp textbook will leave you much less frustrated.
