[Review] Object-Oriented Programming: A Unified Foundation
Title: Object-Oriented Programming: A Unified Foundation
Author: Giuseppe Castagna
Publisher: Birkhäuser, 1997
ISBN: 3-7643-3905-5
Summary:
This book is an investigation into the type-theoretic underpinnings of
OO languages. What sets it apart from the rest of the herd is the
attention paid to the multiple-dispatch style of OO found in Dylan
(and CLOS and Cecil, of course).
Beyond the utility of learning a formal model of generic functions, of
particular interest to Dylan programmers -- especially compiler writers
and language hackers -- are the chapters investigating how to
integrate second-order parametric polymorphism (a la ML) with the
overloading polymorphism more commonly found in OO languages.
While the book is quite formal and technical, it does not require a
deep background in the lambda calculus, type theory, or any of the
other tools beloved of language theorists. The book is entirely
self-contained, and chapter 1 contains all the background mathematics
needed to follow most of the arguments in the book. I wasn't terribly
familiar with either the lambda calculus or type theory when I
started, and I found the
presentation quite readable. Anyone who has done a little ML or
Haskell programming, and who is comfortable with mathematics, should
be able to read this book.
Detailed Review:
A very large body of research into the type-theoretic underpinnings of
OO languages has accumulated over the past 15 years or so. However,
the bulk of this research is focused on singly-dispatching languages
like Smalltalk and Java. There has been comparatively little attention
paid to the generic-function style popularized by CLOS. (I understand
that there is a substantial body of literature on database theory that
is relevant to multiple dispatch, but I haven't read any of those
papers. The papers I have read mostly descend from the 1985 paper by
Cardelli and Wegner, "On Understanding Types, Data Abstraction, and
Polymorphism".)
Giuseppe Castagna has written a book (apparently an expanded version
of his PhD thesis) to rectify this omission. He was extraordinarily
successful -- I found the ideas in his book to be a very valuable
addition to my mental arsenal of tools for thinking about programming.
He did a good enough job that I feel confident of my ability to fend
off wild Haskell hackers armed with monads and lazy evaluation. :)
His basic approach is to construct a series of variants of the typed
lambda calculus that formalize the ideas he is examining, and then he
proves various theorems about them. The basic lambda calculus he uses
is the simply-typed lambda calculus, augmented with subtyping and
overloaded functions. These overloaded functions select a branch based
on the types of their arguments, which is the operation that simulates
multimethod dispatch.
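To give a concrete feel for this, here is my own rough Dylan analogue
(not the book's lambda-& notation, and not KOOL; the class and
function names are just made up for illustration): a generic function
whose branch is chosen by the runtime classes of all of its arguments.

  define class <shape> (<object>) end;
  define class <circle> (<shape>) end;
  define class <rectangle> (<shape>) end;

  define generic overlaps?
      (a :: <shape>, b :: <shape>) => (well? :: <boolean>);

  // Each method plays the role of one "branch" of an overloaded term;
  // which branch runs depends on the runtime classes of *both*
  // arguments, not just the first.
  define method overlaps? (a :: <circle>, b :: <circle>)
   => (well? :: <boolean>)
    #t  // placeholder body
  end method;

  define method overlaps? (a :: <circle>, b :: <rectangle>)
   => (well? :: <boolean>)
    #f  // placeholder body
  end method;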
In addition, he takes the time to construct a toy OO language called
KOOL, and shows how the ideas in each calculus might appear as
language features in a real language. I found this extremely
helpful, and I suspect it would be for almost all readers, since even
the most theoretically inclined of us probably spend more time
actually programming than writing down proofs.
Naturally, this approach shares many of the strengths and limitations
that characterize the type theories underlying the strongly-typed
FPLs. The primary limitation is that the sorts of type systems that
can be examined are strongly constrained by the requirement that the
type system be statically decidable. This means that Castagna's book
will not shed any light on the deeper issues surrounding CLOS's
metaobject protocol, for example. The same issue does not arise for
Dylan, though; I will defer that point until the end of the review.
The book breaks into roughly three major pieces. First is the
introductory section, where the idea of multimethods, the KOOL
language, and the basic lambda-& calculus are all introduced. In
addition to the setup, there is some explanation of how this affects
the semantics of practical OO languages, along with some simple
results (like how to handle contravariance-versus-covariance).
The middle third is easily the most mathematically demanding, because
here he proves all those nice properties like confluence, strong
normalization, and subject reduction, or at least finds subsystems of
the lambda-& calculus for which these hold. It's obviously essential for
understanding on a deeper level, but it won't be digested in a single
reading. I'm working on a second reading of this right now, and
wearing out a pencil following the proofs.
The final third is the most speculative, because here he uses the
results of the middle third to start integrating second-order
parametric polymorphism with second-order overloading. This was
*fascinating* stuff, because the mix of the two kinds of polymorphism
offers the possibility of building some truly beautiful abstractions.
There's a lot of "further work needs to be done" here, but there's a
ton of interesting and useful ideas too.
In my reading, I found that there were three main points where his
exposition really shone. The choice is somewhat arbitrary; much of the
middle section was rough enough going that I probably don't understand
the results well enough to appreciate them. :) I may change my mind
upon rereading.
First was the idea of runtime type: this section offered a clear
explanation of why it's meaningful to talk about runtime type even in
strongly-typed OO languages, and why this means that lazy evaluation
and object-oriented programming are at best an awkward mix. Thanks to
subtyping, the statically inferred type of an expression in an OO
language is going to be at best a supertype of the type of the fully
reduced expression. Unlike in a language like ML, the choice of which
code body to run can depend on that true type, since the method an
overloaded function selects depends on the runtime type of its argument. This
means that to be certain of selecting the right method in the general
case, all the arguments need to be evaluated before the method
selection is performed.
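Here is a small sketch of that point, using only built-in Dylan
classes (the generic and method names are mine): the static
declaration only bounds the runtime type from above, and dispatch
needs the fully evaluated value before it can choose.

  define generic twice (x :: <number>) => (r :: <number>);

  define method twice (x :: <integer>) => (r :: <integer>)
    x + x    // chosen only when the runtime class is <integer>
  end method;

  define method twice (x :: <double-float>) => (r :: <double-float>)
    x + x
  end method;

  define method caller (x :: <number>) => (r :: <number>)
    // Statically, x is just a <number>, a supertype of whatever it
    // really is.  The dispatch inside twice needs x reduced all the
    // way to a value before it can pick a method, so x cannot be left
    // as an unevaluated thunk the way a lazy language would want.
    twice(x)
  end method;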
Second, he clearly explained the contravariance-versus-covariance
debate, and how multimethods can be used to solve that problem. I got
the feeling that this was a "folk theorem" that is fairly well known,
at least among programmers experienced with multimethods. I had not
seen it before, however, and I found Castagna's explanation very
good. Essentially, the way to get typesafe behavior in all cases is
to use the regular contravariant specialization rule whenever the
generic function is being substituted for another function (for
example, when it is passed as the order relation to a sort function),
but to select methods within the generic function covariantly.
Happily, this is what makes the most
sense intuitively as well.
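A small Dylan illustration of that split (mine, not Castagna's): the
methods inside the generic specialize their arguments covariantly,
narrowing them, while the generic as a whole is passed where a
function on a more specific type is expected, which is exactly the
contravariant direction.

  define generic precedes?
      (a :: <object>, b :: <object>) => (well? :: <boolean>);

  // Covariant method selection: each method narrows the argument types.
  define method precedes? (a :: <integer>, b :: <integer>)
   => (well? :: <boolean>)
    a < b
  end method;

  define method precedes? (a :: <string>, b :: <string>)
   => (well? :: <boolean>)
    a.size < b.size    // order strings by length, say
  end method;

  // Contravariant use of the generic as a whole: sort wants a test on
  // the element type (<string> here), and a function declared on
  // <object> x <object> is a safe substitute for one declared on
  // <string> x <string>.
  define constant $in-order = sort(#["ab", "a", "abc"], test: precedes?);
  // $in-order is #["a", "ab", "abc"]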
The same rule also suggests a typesafe way to add type declarations
to keyword arguments in generic functions. The GF should be allowed to
declare the /minimal/ type of a keyword argument, and the methods it
contains should only be allowed to declare supertypes of this minimal
type for their keyword arguments. It further suggests that adding a
<bottom> or <nil> type to complement <object> is probably a good idea
if the strong typing thing in Dylan is going to be carried much
further.
The final third was for me the most interesting one. Castagna takes a
stab at mixing together second-order parametric and overloading
polymorphism, and this has a lot of features that are strongly
suggestive for Dylan. This is partly because <class> in Dylan is
sealed, which rules out the really weird alterations of the object
protocol, and effectively limits overloading to at most quantification
over primitive types (i.e., classes), I think.
It seems to me that there's room to build a general mechanism that
enables applications like Eiffel's generic classes, but is
customizable through user-specialized higher-order functions on types
(the way make and initialize are today). This is just an intuition,
though -- I have no idea what it would look like, or even if it is
possible. The theory Castagna has built is not powerful enough to
accommodate this. He has written another paper (with Gang Chen) that
follows up on this -- "Dependent Types with Subtyping and Late-bound
Overloading" -- which is available from his webpage. I haven't
finished reading it yet, though. More later.
Obviously Dylan will never be fully typesafe in the ML or Haskell
sense (e.g., it's possible to write the Y combinator in Dylan), and
that's OK, since those languages pay a rather steep price for that
safety. It would be keen, however, if as many of their slick ideas as
possible could be stolen.
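For the curious, here is roughly what I mean: a minimal, untested
sketch of a call-by-value Y combinator in Dylan. None of this is from
Castagna's book, and the names are mine. The self-application x(x) is
exactly the kind of term that ML's and Haskell's type systems are
built to reject, and Dylan takes it without blinking.

  define constant y =
    method (f :: <function>) => (fixed :: <function>)
      local method g (x :: <function>)
              f(method (#rest args) apply(x(x), args) end)
            end;
      g(g)
    end;

  // Factorial without any direct recursion, as a sanity check:
  define constant fact =
    y(method (self :: <function>)
        method (n :: <integer>)
          if (n <= 1) 1 else n * self(n - 1) end
        end
      end);

  // fact(5) => 120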
Neel