
Re: the forward method [dynamic vs. static typing]

    > From: Pascal Costanza <costanza@iai.uni-bonn.de>

    > Ken Shan wrote:

    > > Static typing can make programs more concise: I can write

    > >     getLine >>= print . (/2) . read

    > > or

    > >     getLine >>= print . not . read

    > > rather than

    > >     getLine >>= print . (/2) . readDouble

    > > or

    > >     getLine >>= print . not . readBool

    > Nice and convincing example. It seems to me that you can only get this 
    > effect in a DT language by implementing your own type inferencer.

Yes, it is, but:

One way I put my original question was "Why isn't Hindley-Milner just
a compiler option?" and to make that a little more precise: I naively
meant "Why isn't it an error-checking/optimization option?"

And, yes, Ken's example answers that question in a convincing way:
"Because the results computed by type-checking are fed back into the
translation of the program as more than just optimizations -- in
particular, they enable a kind of genericity (`static dispatch on
return type') which could not be performed at run-time by looking at
the run-time types of the arguments to a generic function.  Type
checking has become part of how the _meaning_ of the program is
expressed -- not merely how it is optimized or partially verified."

As a result of that feedback between static analysis and the process
of assigning a meaning to source text, I can write programs that 
capture abstractions like:

         print . f . read

and I can't do that in a DT language unless, indeed, I write a type
inferencer through which my abstraction is processed before (logically
before) execution.
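
For concreteness, here is a small runnable sketch of that abstraction
(the name `pipeline' is mine, purely illustrative): the composed
function is generic, and which `read' implementation runs is chosen
statically from the type at which the function is applied.

```haskell
-- 'pipeline f' parses its input, applies f, and shows the result.
-- Which 'read' implementation runs is decided entirely at compile
-- time, from the type at which f is used -- there is no run-time
-- tag on the input string to inspect.
pipeline :: (Read a, Show a) => (a -> a) -> String -> String
pipeline f = show . f . read

main :: IO ()
main = do
  putStrLn (pipeline ((/ 2) :: Double -> Double) "3.0")  -- 1.5
  putStrLn (pipeline (not :: Bool -> Bool) "True")       -- False
```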

But that doesn't dispel the intuition I was trying to express (in
order to try to understand if it's a bogus intuition or pointing at
something real).  Trying to refine that intuition in terms of this new
information (thanks Ken and the various people who set me straight on
what he was saying)...

My intuition persists for two reasons:

One reason is simply that sufficient type information provided by a
programmer to enable "return type dispatching" is in general
less-than-or-equal to the type information the programmer would need
to provide to enable a complete static type-check (is it not?  It's
certainly not _more_ and I think it's pretty obvious it will sometimes
be less).  There is something less-strict than ST (and closer to DT)
which is sufficient for the trick Ken illustrated.
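
As a rough gauge of that "less-than-or-equal" (my sketch, not a
proof): even in Haskell itself, one annotation at the single
ambiguous spot is enough to drive the dispatch; every other type in
the definition is inferred, not written by the programmer.

```haskell
-- Only the single ':: Double' annotation is written by hand; the
-- type of 'halve' itself (String -> String) is inferred.  The
-- information the dispatch needs is strictly less than a fully
-- explicit typing of the same definition.
halve s = show (read s / 2 :: Double)

main :: IO ()
main = putStrLn (halve "3.0")  -- 1.5
```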

So, ST is "more than necessary" for that concise expression of certain
abstractions.

The other reason is that I think ST is "less than is sufficient" for
similarly concise expression of certain abstractions:

It seems to me that there are an unbounded number of kinds of static
analysis that might be performed on a source text to produce results
that:

		1) can identify programmer errors
	        2) can facilitate optimized compilation
        (*new*) 3) can disambiguate the meaning of the program

Current ST systems provide one example of one such analysis, but are
not computationally general, even for guaranteed-terminating
examinations of the source text.  For example, one might imagine a
source text in which the implementation meant by `read' depends on
whether the program being compiled will (according to some
well-specified flow-analysis algorithm) (a) certainly later apply
`privileged_write', (b) certainly will not later apply
`privileged_write', (c) might or might not later apply
`privileged_write'.  Unless that flow-analysis algorithm can be
encoded in some ST type-checking system, this new kind of dispatching
isn't provided by that ST language.

My original intuition could also be described by saying that there is
some "minimum of details" language in which any additional information
by a programmer is at most used for optimization and partial
verification -- and that DT languages, lisps specifically, are closer
to that than anything else.

Ken's example seems to me to not do more than underscore that: (a)
such a minimal language must include some provision for extensible,
phased interpretation of source texts; (b) current approaches to ST
are a nice example of a phase-extension that you'll want in some
cases.

So with that improved understanding of what the question means, I have
the same (syntactically anyway) question: "Why isn't Hindley-Milner
just a compiler option?"