

   Date: Thu, 29 Nov 2001 13:49:51 -0500
   From: Don Blaheta <dpb@cs.brown.edu>

   These blocks definitely seem to be closures, by the way---one of the
   slides online shows an example that either demonstrates closure or
   dynamic scope, so I'm going to assume the former. :)

Hey, of all the cool languages being discussed here, do any provide
"full coroutine power" when it comes to iterators (and "generators" in
general)?  I guess I have to explain that....

Normally with iterators you have two guys trying to do their thing.
One is the code that wants to iterate and do something for every X.
The other is the guy who is doing the iterating and finding the next
X.  In most such arrangements, the stack ain't big enough for the both
of them, and so only one of them gets to maintain "PC" and "stack"
state when control gets handed off to the other one.

When you use something like a Java Iterator object, the iterating code
gets to take advantage of the stack, whereas the iterator is forced to
"return" the "next X" value whenever it is invoked.  If the iterator
wants to maintain state between calls (as it always does) it must keep
such state in fields (instance variables) (or something like that,
depending on the language).
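To make that concrete, here is a little sketch in Python (my choice of language for illustration, not one from the original message) of the Java-Iterator style: the iterator must hand control back after every element, so its position lives in instance variables rather than on the stack.

```python
# Java-Iterator style: the iterator "returns" the next X on each call,
# so any state it needs between calls is kept in fields, not stack frames.
class PairIterator:
    def __init__(self, items):
        self.items = items   # state kept in instance variables
        self.index = 0       # ...because the stack is given up on each call

    def has_next(self):
        return self.index < len(self.items)

    def next(self):
        x = self.items[self.index]
        self.index += 1
        return x

it = PairIterator(["a", "b", "c"])
collected = []
while it.has_next():
    collected.append(it.next())   # the caller keeps its own stack state
print(collected)  # ['a', 'b', 'c']
```

The iterating code, by contrast, sits in an ordinary loop and keeps whatever locals and recursion it likes.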

When you use something like a Lisp "mapping" function, it works the
other way around (that's American for "other way round"): the mapping
function gets to use the stack, and the iterating code is a function
that must "return" to the iterator whenever it wants to work on the
"next X".  If the iterating code wants to store local state, it has to
stow it in fields (instance variables, lexical variables if this is a
Scheme-like upward funarg, or something else, depending on the
language).
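Again a sketch, in Python rather than Lisp (my illustration): here the mapping function owns the stack, and the iterating code is a callback that must return for each element, stowing any cross-call state in an enclosing scope.

```python
# Lisp-mapping style: the mapper's loop state lives on its stack,
# and the iterating code is a function called once per element.
def for_each(items, fn):
    for x in items:
        fn(x)

total = []                   # state the callback wants across calls
def accumulate(x):
    total.append(x * 2)      # ...stowed in a closed-over variable

for_each([1, 2, 3], accumulate)
print(total)  # [2, 4, 6]
```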

The latter style is nice if your iterator is iterating over a tree
structure and you want to write a simple depth-first recursive tree
walker; it's usually a pain in the neck to do that in the former
style.  The former style is used very often when the iterator is not
actually an iterator but a generator such as an input stream: you read
from the input stream, then depending on what you saw you call this
function or that function, passing the input stream as an argument,
and the function (method, whatever) can similarly invoke the input
stream.
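A sketch of that stream-as-generator pattern, in Python with hypothetical names of my own: each function pulls the "next X" from the stream on demand and may pass the stream along, keeping its own position in the grammar on the ordinary stack.

```python
import io

# The caller reads from the stream, dispatches on what it saw, and the
# called function keeps pulling from the same stream.  Because reading
# a number has to look one character ahead, read_number hands the
# lookahead character back to its caller.
def read_number(stream, first):
    digits = first
    c = stream.read(1)
    while c.isdigit():
        digits += c
        c = stream.read(1)
    return int(digits), c    # value, plus the one-char lookahead

def scan(stream):
    tokens = []
    c = stream.read(1)
    while c:
        if c.isdigit():
            n, c = read_number(stream, c)   # pass the stream down
            tokens.append(n)
            continue
        if not c.isspace():
            tokens.append(c)
        c = stream.read(1)
    return tokens

print(scan(io.StringIO("12 + 345")))  # [12, '+', 345]
```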

From time to time you find yourself in a situation where both parties
really want stack state.  In many languages/systems, the only way to
get this is to use something that is really too powerful, and often
too expensive, namely some kind of "thread" or "process" facility.
That's often the only way to get two stacks.  But threads and
processes are trying to solve a much more general problem, involving
asynchronicity and concurrency, whereas all you needed was purely
one-thread, purely synchronous co-routines.
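For what it's worth, one facility along these lines is the generator, a suspendable function whose frame, and if it recurses its whole chain of frames, survives across hand-offs.  A sketch in Python (my illustration, not something from the original message): the producer below walks a tree with genuine recursive stack state, one suspended frame per tree level, while the consumer keeps its own stack state in an ordinary loop.

```python
# Both sides keep real stack state: the recursive generator's frames
# are suspended at each yield, and the consumer's loop resumes with
# its locals and return addresses intact.  No threads involved; the
# hand-off is purely synchronous.
def leaves(tree):
    if isinstance(tree, tuple):
        for child in tree:
            for leaf in leaves(child):  # re-yield from the recursive call
                yield leaf              # suspend the whole frame chain here
    else:
        yield tree

collected = []
for leaf in leaves(((1, 2), (3, (4, 5)))):
    collected.append(leaf)   # consumer resumes here, stack intact
print(collected)  # [1, 2, 3, 4, 5]
```

This is weaker than full coroutines (control can only be handed back to the immediate caller), but it covers the depth-first tree walker case above without paying for threads.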

I know that there are programming paradigms in Scheme where you can
use closures and continuations in all kinds of neat and novel ways,
but I get the impression that a lot of these are heavily based on the
use of tail-calls, where there really isn't any stack state.  I'm
looking for solutions where both sides really have real recursion,
where real "return addresses" (so to speak) have to be maintained.

There was actually a facility for this in Lisp Machine Lisp, but it
wasn't carried over into Common Lisp, and it was sort of more
expensive than you really wanted even though it was cheaper than
threads (threads were built on it, actually), and it didn't interact
all that well with the debugger, and the problem I'm describing really
doesn't come up that much, so it didn't get a lot of use.  Still,
even though the problem isn't very important, I've always been
interested in it for some reason, and I was wondering whether there
was any new progress in the area.

-- Dan