
Re: cheerful static typing (was: Any and Every... (was: Eval))



   Date: Thu, 10 Jan 2002 12:59:09 -0500 (EST)
   From: Guy Steele - Sun Microsystems Labs <gls@labean.East.Sun.COM>

   Careful, here, Dan; taking this to a logical extreme
   implies that, if you beef up the contracts, you can
   define away all errors.

I really don't understand what you mean by this.

Perhaps you're saying that you could define away all errors by simply
making the contract be the same as the code.  Well, yes.  But if the
contract is the same as the code, then who is to say that the code has
a "bug"?

To put it another way, I don't see what "correct" can mean other than
that the code does, in fact, implement some specification of what the
code is supposed to do.  If nobody ever lays out the answer to the
question "what is this code *supposed* to do", I don't see how one can
objectively say whether the implementation is correct or incorrect.

   Maybe we could say that a run-time error is a situation
   in which a program exhibits one of a set of behaviors
   that we choose to describe as "undesirable"?  Often
   throwing one of a certain set of exceptions is considered
   undesirable; indeed, the division-by-zero exception may
   be undesirable in some contexts despite the fact that
   it's in the contract of the "/" operator.

I don't see how to firm up the "in some contexts" in any way better
than the concept of having contracts, or specifications, at various
layers of abstraction.  For example, if you define a boolean function
"prime" whose contract is that it takes a positive integer argument
and returns a boolean that's true if the argument is prime, and
someone writes an implementation in which prime(12345) throws
zero-divide, then that is an error entirely by virtue of the fact that
the specification for "prime" says that that's not what "prime" is
defined to do.
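
To make that concrete, here is a small Java sketch (the names and the
particular bug are mine, purely for illustration).  The contract lives
in the comment; the body happens to divide by zero, and that is an
error only because the contract never mentions any such exception:

    public class Primes {
        /**
         * Contract: given a positive integer n, return true if and
         * only if n is prime.  No exceptions are part of the defined
         * behavior.
         */
        public static boolean prime(int n) {
            if (n < 2) {
                return false;
            }
            // Buggy implementation: the trial divisor starts at 0, so
            // the very first n % d throws ArithmeticException ("/ by
            // zero").  That exception is an error purely because the
            // contract above doesn't say that's what prime is defined
            // to do.
            for (int d = 0; d * d <= n; d++) {
                if (n % d == 0) {
                    return false;
                }
            }
            return true;
        }
    }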

The "in some contexts" seems to mean "it depends on the caller", but
specifically it depends on the fact that the caller's contract didn't
specify that throwing zero-divide was part of the defined behavior.

Note that the *implementation* of "prime" might intentionally
sometimes do divides by zero and catch the exception and take
appropriate action.  That's OK since the caller of "prime" never sees
the exception.  An exception does take place, but it's not an error
because neither the caller ("prime") nor the callee (integer divide)
violates its contract.
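
Here's a (deliberately contrived) Java sketch of that situation, again
with details of my own invention: the implementation provokes a
divide-by-zero on purpose and catches it internally, so an exception
does take place, but the caller of prime never sees it and no contract
is violated:

    public class Primes2 {
        /** Contract: return true iff the positive integer n is prime. */
        public static boolean prime(int n) {
            if (n < 2) {
                return false;
            }
            for (int d = 2; d <= n; d++) {
                try {
                    // Deliberately divide by (n % d): when d divides n
                    // exactly, this throws ArithmeticException, which
                    // we catch below and treat as "found a divisor".
                    int ignored = n / (n % d);
                } catch (ArithmeticException divisorFound) {
                    // d divides n.  If d == n, there was no smaller
                    // divisor, so n is prime; otherwise it's composite.
                    return d == n;
                }
            }
            return true;  // unreachable for n >= 2; javac requires it
        }
    }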

Here's another example that lets me make the point a bit more clearly.
Suppose we have a function whose job is to look up a name in a table
associating names with values, and return the corresponding value.  We
decide to implement it as a linear search, and to make the search
faster, we don't put in an end-check but rather rely on the checking
done by the array-reference operator, which throws out-of-bounds if
you go out of bounds.  When we write the contract of the function, we
have to say what it's defined to do when given a name that's not in
the table.  We could:

  (1) define it to throw out-of-bounds,
  (2) define it to throw a new exception called name-not-found, or
  (3) define it to return a special value, e.g. null.

In case (2), the implementation catches out-of-bounds and rethrows; in
case (3) the catch clause returns the special value instead of
rethrowing; in case (1) there's no try/catch and the exception
"propagates".  (I've been trying to be language-neutral here but
obviously I'm making some assumptions that the language is basically
like Common Lisp or Java when it comes to exceptions.)

So in case (1), if you call the function on a hitherto unknown name,
and it throws out-of-bounds, that is not an error, but in cases (2) or
(3) if it throws out-of-bounds, that is an error.  So what you're
calling the "context" is different in case (1) than in cases (2) and
(3), but it's different precisely because of the way that the
contracts of the cases differ from each other.
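
Since I've already admitted to assuming a Java-like exception model,
here is how the three cases might look in Java (class and method names
are invented for illustration).  All three variants share the same
unchecked linear search and rely on the array's own bounds check; only
the stated contract, and therefore what counts as an error, differs:

    import java.util.Arrays;

    public class NameTable {
        // Parallel arrays: names[i] is associated with values[i].
        private final String[] names;
        private final Object[] values;

        public NameTable(String[] names, Object[] values) {
            this.names = Arrays.copyOf(names, names.length);
            this.values = Arrays.copyOf(values, values.length);
        }

        // Case (1): the contract says an unknown name throws the
        // array's own ArrayIndexOutOfBoundsException.  No try/catch;
        // the exception simply propagates, and that is not an error.
        public Object lookup1(String name) {
            int i = 0;
            while (!names[i].equals(name)) {  // no end-check of our own
                i++;
            }
            return values[i];
        }

        // Case (2): the contract says an unknown name throws a new
        // exception, name-not-found.  Catch out-of-bounds and rethrow.
        public Object lookup2(String name) {
            try {
                return lookup1(name);
            } catch (ArrayIndexOutOfBoundsException e) {
                throw new NameNotFoundException(name);
            }
        }

        // Case (3): the contract says an unknown name yields a special
        // value, here null.  The catch clause returns it instead of
        // rethrowing.
        public Object lookup3(String name) {
            try {
                return lookup1(name);
            } catch (ArrayIndexOutOfBoundsException e) {
                return null;
            }
        }

        // Hypothetical exception type for case (2).
        public static class NameNotFoundException extends RuntimeException {
            public NameNotFoundException(String name) {
                super("name not found: " + name);
            }
        }
    }

Given an unknown name, lookup1 throwing out-of-bounds is exactly what
its contract promises, so that's not an error; the same exception
escaping from lookup2 or lookup3 would be an error, because their
contracts promise something else.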