Re: Continuations in Ruby v. Scheme
I've been reading Ruby too. The details seem a bit more complicated.
It looks like a block, which looks like a closure to me, can only appear
at the end of a method call, and the block gets invoked by a yield
statement inside the method, which is how iteration is performed. So
maybe this is just turning a generator into an iterator.
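To check my own understanding, here's a minimal sketch of the picture I
have in mind (the method name is just something I made up):

  def each_twice(item)
    yield item      # the block supplied at the call site runs here
    yield item
  end

  each_twice("hi") { |x| puts x }   # block appears at the end of the call
  # prints "hi" twice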
To make what seems like a full closure, one that can be passed as an
argument, you make a Proc object out of the block (with Proc.new, or by
capturing it as an &block parameter). I presume this makes the arguments
of the block lexical, though I'm not sure what the real difference is.
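Something like the following is what I imagine, if I have it right (the
names here are my own, not from the book):

  def capture(&blk)   # &blk packages the trailing block as a Proc object
    blk
  end

  greeting = "hi"
  shout = capture { |s| "#{greeting}, #{s}!".upcase }  # closes over greeting
  puts shout.call("world")   # => "HI, WORLD!"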
Q1: Is this the right model?
Q2: Where did this model come from? The book suggests it comes from CLU.
Q3: I think the Lisp Machine had an &args parameter that would let you
allocate additional fields in a function call frame. Is that the same
basic idea?
Q4: The book also said it was an optimization for loops, i.e. the
variable is allocated in the surrounding contour, I guess.
Q5: I think Guy Steele felt it was somehow easier to "fall into" (not
quite the right words) implementing dynamic scoping than lexical scoping.
The Ruby approach seems like a strange variant where you can be dynamic
within a narrow lexical context. Is that the intent, or is it just
another approximate hack to get closures into a language?
I've been interested in Gregory Chaitin's work on programming Turing
machines in Lisp, to understand computational complexity. Of the two
interpreters I know he's written, one in C and one in Java, both are
dynamically scoped. When I've mentioned that they should be lexically
scoped, he shrugged me off. When I looked at his interpreter, it is very
simple. So while the difference in implementation might not be much,
maybe people are more easily drawn to dynamic scoping than lexical
scoping. I guess you could also simulate lexical scoping by variable
renaming.
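To make the comparison concrete, here is a toy lookup sketch of my own
(nothing to do with Chaitin's actual interpreters), showing the two
rules side by side:

  # Environments are hashes chained through a :parent key.
  def lookup(env, name)
    return env[name] if env.key?(name)
    env[:parent] ? lookup(env[:parent], name) : raise("unbound #{name}")
  end

  global = { :x => 1 }
  body   = lambda { |env| lookup(env, :x) }   # a function body with x free

  caller_env = { :x => 99, :parent => global }

  # Dynamic scope: the body sees whatever environment is in force at the
  # call site.
  puts body.call(caller_env)   # => 99

  # Lexical scope: the body sees the environment captured where it was
  # defined.
  puts body.call(global)       # => 1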
k