
Re: T



   Date: Tue, 11 Dec 2001 17:49:02 -0400
   From: shivers@cc.gatech.edu

   I was involved in the T project. I can give you its history, in a rambling,
   first-person way.

Wow, great story.  I only knew the first 1/3 of it.  Thanks.

			   it was a widely held opinion at the time that "lexical
   scope is interesting, *theoretically*, but it's inefficient to implement;
   dynamic scope is the fast choice." I'm not kidding. 

Having been there then, I can assure you that what they were thinking
was as follows.  In dynamic scoping, you can implement "get the value
of symbol A" by "read the contents of A's 'value cell'".  But in
lexical scoping, you have to implement it as "do a linear search up
the A-list, e.g. using the assoc function".  This was all based on the
general body of knowledge at the time about how to write a Lisp
interpreter.
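To make the contrast concrete, here is a minimal Common Lisp sketch of
the two lookup strategies described above (the variable names are mine,
purely for illustration):

```lisp
;; Dynamic scope: each symbol carries a "value cell", so variable
;; lookup is a single read of that cell.
(setf (symbol-value 'a) 42)
(symbol-value 'a)                  ; => 42

;; Naive lexically-scoped interpreter: the environment is an a-list of
;; (symbol . value) pairs, and lookup is a linear search with ASSOC.
(defparameter *env* '((a . 42) (b . 7)))
(cdr (assoc 'a *env*))             ; => 42
```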

When you say "There was immense experience in the lisp community on
optimising compiled implementations of dynamically-scoped languages" I
think you are giving them too much credit, at least when interpreted
by today's standards...

   MIT responded to the Vax by kicking off the NIL project. NIL stood for "New
   Implementation of Lisp." Jonathan was part of this project during his year
   away from Yale. It was a really, really good effort, but in the end, was
   crippled by premature optimisation -- it was very large, very aggressive, very
   complex. Example: they were allocating people to write carefully hand-tuned
   assembly code for the bignum package before the general compiler was written.

But you have to take into account who their user community was.  The
Macsyma people were a very large constituency.  It was clear that
high-speed bignums were way up there on the priority list -- NIL with
slow bignums would be deemed a failure.

   The NIL project was carried out by top people (err... I recall Jonl White &
   George Carrette being principals). 

And Glenn Burke.

				      But it never got delivered. It was finished
   years later than projected, 

It would have helped them a lot if many of the people knowledgeable
about Lisp implementation had not been off working on Lisp machines
instead of being part of the NIL team.  They had very extensive
requirements, including "do everything that Maclisp does at least as
well as Maclisp does it", which was a tall order.  The "quick and
dirty prototype" path that Jonathan took with T was not considered
suitable.  (At least that's how I remember it, not that I was really
involved directly with NIL...)

   Larger, deeper things: they designed a beautiful object system that was
   integrated into the assignment machinery -- just as Common Lisp's SETF lets
   you assign using accessors, e.g., in Common Lisp

By the way, while we're talking history, let me mention that it took
an *amazingly* long time for SETF to be invented.  In retrospect, it
seems so simple and obvious.  But the MIT Lisp community went through
many generations of structure-macro-packages before SETF came along,
at which point it was instantly obvious to everybody that SETF was the
answer.  (I'm not sure who invented it, but I'm pretty sure it was
either Dave Moon or Alan Bawden.)
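For readers who haven't seen it, the idea is simply that the form you
use to *read* a place also names the place you *write*; a small sketch:

```lisp
;; SETF assigns through the same accessor you would use to read.
(defparameter *p* (list 1 2 3))
(setf (car *p*) 10)                ; write through CAR
*p*                                ; => (10 2 3)

(defparameter *h* (make-hash-table))
(setf (gethash :k *h*) "v")        ; write through GETHASH
(gethash :k *h*)                   ; => "v"
```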

And nested backquotes flummoxed all of us for quite some time.  Moon
and I and Bawden had noticed that nesting backquotes just didn't work,
but we didn't understand why.  Finally Bawden grokked it in all its
fullness and discovered that actually it *did* work; our problem was
that we had not yet discovered the ",'," construct.
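A small sketch of the distinction (my own example, in Common Lisp
syntax): inside a doubly-backquoted template, `,,x` is unquoted on both
evaluation passes, while `,',x` is unquoted on the first pass and then
protected by the quote, so the second pass sees a constant.

```lisp
(defparameter x 'y)   ; X holds the symbol Y
(defparameter y 3)    ; Y holds the number 3

;; ,,X unquotes twice: X -> Y on the first pass, then Y -> 3.
;; ,',X unquotes once and quotes the result, so the second pass
;; leaves the symbol Y itself in place.
(eval ``(foo ,,x ,',x))            ; => (FOO 3 Y)
```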

So I hope this is encouraging for newcomers: although other (very
smart) people have tramped all over the garden, there may yet be
still-undiscovered gems hiding under that next stone.