AOSD'03 keynote talk, and DSF again
Here is a follow-up to the DSF discussion we had a few weeks ago. This
is a rather long post, and the connection to DSF (dynamically scoped
functions) will become clear only in the last paragraph.
In his AOSD 2003 keynote talk [1], Gregor Kiczales gives an account of
what he thinks is the main contribution of the AOSD community so far.
He also gives a perspective on the future of AOSD.
In both respects, I have strong doubts about whether the claims he
makes are sound. Here is why.
Wrt the future of AOSD, he characterizes join point mechanisms as
re-registration mechanisms. He refers to the book "On the Origin of
Objects" by Brian Smith. See the notes for slide 36: "Brian talks
about a process of registration, that involves identifying objects out
of a fog of undifferentiated stuff". On slide 38, however, Gregor
makes a slight change to the wording and talks about a "fog of JPs".
This slight change in wording amounts to a strong change in meaning,
because join points are by definition not undifferentiated. Events in
a computer, at least in computers as we know them today, are always
differentiated, no matter how fine the granularity. Indeed, Mitchell
Wand characterizes
join point models as "shared ontologies" [2], which seems to be closer
to what is proposed in Gregor's talk. On the other hand, Brian Smith
explicitly denies the possibility of reasonable ontologies of this
kind. See [3]. So I think these two views are incompatible. I believe
that the field of AOSD would become too narrow if we required
researchers to believe in the possibility of such ontological bases.
With respect to the contribution of the AOSD community - the issue
that is more important to me - Gregor points out the space of join
points as the main discovery (see the notes for slide 28). A join
point mechanism is said to consist of a model for join points, a
means of identifying join points, and a means of semantic effect at
join points (slide 25).
An important comment wrt join point mechanisms is made in the notes for
slide 27: "we don't yet have a clear model of this space, but we know
that there is a space of JPMs here". I disagree: I don't think that
there is a genuine space of join point mechanisms, and it's not clear
that we will ever have a model of such a space apart from what we
already know in programming language research in general.
In order to back my claim, let's first assume for a moment that such a
genuine space of join point mechanisms really exists. What are the
requirements for a good characterization of that space? I see two: a)
we need to be able to describe all conceivable ways to identify join
points and b) identification of join points should be possible in a
relatively convenient way.
These requirements can be met in a systematic fashion by looking at
what an AOP language must do to reach its goals: it must be
able to analyze and change the behavior of programs. The only data an
AOP language can get hold of to do this are the syntax trees for
concrete programs. Abstract syntax trees are recursively defined data
structures. The most straightforward way to traverse a recursively
defined data structure is via recursively defined
functions/methods/procedures. This ensures that any conceivable
analysis can in fact be carried out. Any approach that does not cover
all branches of such a data structure is necessarily less powerful.
This implies that a general-purpose join point mechanism must be
computationally complete.
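To make the traversal argument concrete, here is a minimal Python
sketch (my own illustration, not from the talk; the names SOURCE and
find_calls are made up): identifying "join points" - here, all call
sites of a given function - via a recursively defined function over
an abstract syntax tree. Because the recursion visits every branch of
the tree, any conceivable analysis of this kind can be expressed.

```python
# Identifying "join points" (here: call sites of a named function)
# by recursively traversing an abstract syntax tree.
import ast

SOURCE = """
def f(x):
    return g(x) + 1

def h(y):
    return g(g(y))
"""

def find_calls(node, name):
    """Recursively collect the line number of every call to `name`."""
    found = []
    if (isinstance(node, ast.Call)
            and isinstance(node.func, ast.Name)
            and node.func.id == name):
        found.append(node.lineno)
    for child in ast.iter_child_nodes(node):  # covers all branches
        found.extend(find_calls(child, name))
    return found

tree = ast.parse(SOURCE)
print(find_calls(tree, "g"))  # -> [3, 6, 6]
```

Note that a traversal that skipped some branches (say, function
bodies) would simply miss the two calls inside h - which is the sense
in which less-than-complete mechanisms are less powerful.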
The idea that a restricted pointcut language will do the job for
"most", or the "important", cases seems attractive. However,
especially when one cares about ilities, the language with which to
identify join points should be a general one, because otherwise one
risks that a base program needs to be refactored in order to fit the
constructs of the aspect language. (Such a refactoring might not be
feasible in all circumstances.) Even if the notion of restricted join
point languages is pursued in order to make them more convenient, the
very space of join point languages will always consist of subsets of
general languages that are able to traverse arbitrary syntax trees,
and therefore the range of possible subsets is defined by the general
ones.
Now, the important point is that we already have approaches for
general-purpose traversal (and subsequent modification) of program
representations. The ones that come to my mind are program
transformation frameworks (like JMangler), Lisp-style macros (which
are also available for other languages such as Dylan and, via the
Java Syntactic Extender, Java), and logic metaprogramming (like
SOUL). This in
turn means that the "space of join point mechanisms" is decidedly not a
genuinely new contribution. So no, the AOSD community has not
discovered the space of join point mechanisms, it has only rediscovered
it.
However, it is still pretty obvious that AOP is not just macro
programming in disguise. In terms of Robert Filman's analysis, AOP is
quantification and obliviousness. Quantification is captured in a
general sense by traversals of syntax trees, as described above. What
AOP adds here are language constructs to make quantification more
convenient. This is like creating new iteration constructs on top of
the general notion of recursion, no more and no less.
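The analogy can be illustrated in Python (again my own sketch, with a
made-up helper name): just as comprehensions package general
recursion into convenient iteration constructs, a quantification
construct can package general tree traversal into a convenient
one-liner over join points.

```python
# A convenience "quantifier" built on top of a general traversal
# (ast.walk), analogous to an iteration construct built on recursion.
import ast

def quantify(tree, predicate):
    """Select every node in the tree satisfying `predicate`."""
    return [node for node in ast.walk(tree) if predicate(node)]

tree = ast.parse("x = f(1)\ny = f(2)\nz = len([])\n")

# "All calls to f", stated declaratively, without writing the recursion:
calls_to_f = quantify(tree, lambda n: isinstance(n, ast.Call)
                      and isinstance(n.func, ast.Name)
                      and n.func.id == "f")
print(len(calls_to_f))  # -> 2
```

The quantifier adds convenience, not power: anything it can select,
the underlying general traversal could already reach.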
On the other hand, obliviousness can be achieved by language
constructs that affect the behavior of a program beyond the scope of
the concrete part of the syntax tree being traversed and transformed.
Especially in the case of macros, the only scope that is influenced
by default is the textual, i.e. lexical, scope enclosed by a macro.
The only way known so far to "break out" of the lexical scope and
influence other parts of a program is via language constructs that
provide dynamically scoped definitions. In this sense, I stand by my
claim that dynamically scoped (re)definitions of functions are the
essence of AOP: they effectively model the one ingredient of AOP that
has not been available before in other language paradigms. [4]
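For readers who did not follow the earlier DSF discussion, the effect
can be emulated in Python with a hypothetical dflet-style context
manager (my own sketch; the DSF paper itself works in Common Lisp,
and the functions greet and oblivious_caller are made up):

```python
# Emulating a dynamically scoped function redefinition: the new
# definition is visible to *any* caller during the dynamic extent of
# the with-block - not just within its lexical scope - and is undone
# on exit, even if an exception is raised.
from contextlib import contextmanager

def greet(name):
    return "Hello, " + name

def oblivious_caller(name):
    # This function knows nothing about the redefinition below.
    return greet(name)

@contextmanager
def dflet(new_impl):
    """Rebind greet for the dynamic extent of the with-block."""
    global greet
    old, greet = greet, new_impl
    try:
        yield
    finally:
        greet = old  # restore the previous definition

with dflet(lambda name: "Howdy, " + name):
    print(oblivious_caller("world"))  # -> Howdy, world

print(oblivious_caller("world"))      # -> Hello, world
```

The point is that oblivious_caller is affected without being lexically
enclosed by the redefining construct - exactly the "breaking out" of
lexical scope described above.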
Pascal
[1] slides available at http://www.cs.ubc.ca/~gregor/
[2] http://doi.acm.org/10.1145/944746.944732 - see also the slides
for his talk at http://www.ccs.neu.edu/home/wand/
[3] "The Foundations of Computing",
http://www.ageofsig.org/people/bcsmith/print/smith-foundtns.pdf
[4] At least, I have not found any examples of language constructs that
intentionally allow for dynamically scoped functions in the literature,
as described in my DSF paper, except in a very limited form as
exception handlers.