Re: Closures
> > Why is it *rebinding* in enclosing scopes and why would a special
> > syntax be needed? (I don't remember very much about Python, I'm
> > afraid.)
>
> Because Python is quite heretical in the way you introduce locals:
>
> def f(x):      # x is an arg => local
>     print y    # y is global or free: no assignment/binding to it
>     z = 2      # z is local: occurs in a binding/assignment statement
>     print z    # print our local z
What if I have

def f(x):
    print z    # is z local or global?
    z = 2
    print z    # presumably z is local here at least
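For what it's worth, I believe Python's answer is that z is local
throughout f, because the assignment z = 2 appears somewhere in the body;
the first print then fails at run time rather than reading any global z.
A small sketch (the surrounding names are just for illustration):

z = 1              # a module-level z, to show it is not what gets read

def f(x):
    print z        # fails here: z is local but not yet bound
    z = 2
    print z        # would print the local z

f(0)               # raises UnboundLocalError on the first print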
> if we put def f(...) in:
> def g():
>     z = 3
>     def f(x):
>         ....
> then should z = 2 in f introduce a new local variable or rebind the
> z in g?
Ordinarily, I'd expect it to be the same as if you wrote
z = 3
...
z = 2
both in g. That is, both assignments would change the value of
the same variable.
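As it happens, Python as it stands takes the other choice: the assignment
in f introduces a fresh local, and g's z is left alone.  A quick sketch
(names purely illustrative):

def g():
    z = 3
    def f(x):
        z = 2          # introduces a new local z inside f
        print z        # prints 2
    f(0)
    print z            # still prints 3: g's z was never rebound

g()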
> Returning to your question:
>
> > The interesting question is why: why have Dylan and modern Lisps
> > gone this way when virtually no one else has? Why were Lisp and Dylan
> > not satisfied with some kind of "approximate closure" when so many
> > others were? Java came close but didn't take the final step of supporting
> > assignment.
> >
> About Java, I think it's a matter of performance over their already-defined
> execution model (the JVM),
The problem with that answer is that inner classes were defined (or at
least explained) by code rewriting, and "shared binding" assignment
can be implemented with some additional, reasonably straightforward,
rewriting - in such a way that the "extra" run-time performance
penalty when you don't actually assign to the variable is zero.
(It's essentially just the rewriting you have for Python but doing it
only for the variables that need it.)
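To make that concrete: the familiar Python workaround is to push the
assigned-to variable into a mutable cell that both scopes share - a
sketch, not necessarily the exact rewriting meant above:

def make_counter():
    count = [0]                    # the rewritten variable lives in a shared cell
    def increment():
        count[0] = count[0] + 1    # "assignment" becomes mutation of the cell
        return count[0]
    return increment

c = make_counter()
print c()    # 1
print c()    # 2: both calls see the same shared binding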
Moreover, they didn't invent some "weird" different semantics,
just said the variables had to be "final".
So I think there was probably a substantial "don't care" about it:
supporting "shared binging" assignment wasn't seen as important.
> I think, in the tradition of C and C++, Java is
> a bit wary of non-explicit performance costs (Lisp absolutely not [1])
Though I don't know exactly what in Steele and Gabriel's history you
have in mind, I don't think that can quite be true.
> Maybe I could be wrong, but if I remember well, the most modern form
> of closures came to Lisp (and Dylan) from the Scheme dialect: at least one
> of the authors (Guy L. Steele) explicitly states that the inspiration came
> from Algol and from the fact that they were trying to implement an
> object-oriented model (actors) in a functional language
I think Lisp people did rather like the way closures could be used
as objects; but I think there was also (for various reasons) the view
that the shared-assignment semantics was simply the right semantics
in a language that had assignment and nested procedure definitions.
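The classic illustration of that use, transposed into the Python we've
been using (purely a sketch, with made-up names): a closure dispatching
on a "message" and playing the role of an object:

def make_account(balance):
    state = [balance]                    # same cell trick as above
    def dispatch(message, amount=0):
        if message == 'deposit':
            state[0] = state[0] + amount
            return state[0]
        elif message == 'balance':
            return state[0]
        else:
            raise ValueError('unknown message: ' + message)
    return dispatch                      # the closure itself is the "object"

acct = make_account(100)
print acct('deposit', 50)    # 150
print acct('balance')        # 150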
> From this point of view it's also more clear that the space of possibilities
> (what is needed vs. nice, what is not) is a bit larger when you already have
> both objects and first-class functions.
But consider Dylan. It was always going to have both objects and
first-class functions. Yet I doubt the Dylan designers ever thought
"since we have objects, we should have some other semantics for
assignment in our first-class functions."
Or - going in another direction - consider T (and OakLisp) where they
try to stick close to the model of first-class functions as objects.
If a language is going to have both first-class functions and objects,
why not try to unify them?
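In Python terms the analogue would be an instance you can call directly,
so that "object" and "first-class function" stop being different things;
again just a sketch:

class Adder:
    # an object that behaves like a closure over n
    def __init__(self, n):
        self.n = n
    def __call__(self, x):
        return self.n + x

add3 = Adder(3)
print add3(4)    # 7: the instance is used the way a closure would be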
-- Jeff