Re: Lightness vs. Largeness



S. Alexander Jacobson wrote to me on Thu [6 December 2001]:
| On Thu, 6 Dec 2001, Dan Weinreb wrote:
| > Me too, but my current opinion is that the next time I write a
| > 100,000-line application, I'm going to use type declarations
| > everywhere I can, if the language makes them available (whether
| > mandatory or optional).  I've really changed my mind on this over the
| > years.  Partly it's because type declarations sometimes catch
| > programming mistakes statically, and partly it's because they make
| > programs easier to read.
| 
| Wouldn't you prefer to use a language that infers types?
| Then you could use some documentation tool to generate a view of the code
| that has type information.

One thing I would worry about in a large program would be that when I
write a function, the inferred type may be too general.  It's the
right type for the *current* program but the type I want to write is
the type over an expected set of *future* programs.  So in my head I'm
writing a function that works on Cats but the type inferencer sees
that I only use Mammal operations, so it assigns Mammal as the type of
the function's argument.  Then at some point someone using my function
passes in a Dog, which is a Mammal, so the system accepts it.  And
then later on I change my function to actually use the Catness of the
argument, and the system breaks.

You could argue that that caller should have read the docs saying my
function wants Cats and should not have passed in a Dog.  But isn't
that exactly what the type system should be checking for me?

	- Amit

-- 
Amit J Patel, Computer Science Department, Stanford University 
        http://www-cs-students.stanford.edu/~amitp/
	``Parkinson's Other Law:  Perfection is achieved only
				  at the point of collapse.''