
Re: Var-free programming (slightly off-topic)

Dan Weinreb wrote:

>    1. I should be able to write expressions without wrapping them.  That
>       is, no publicstaticvoidmainmumbojumbo.  This may be the single most
>       important property.  It distinguishes LLs from the Pascal mindset.
> There are really two things here that you're rejecting:
> -- Mandatory type declarations for methods/functions/whatever.
> -- Mandatory "programming-in-the-large" declarations.

I really wanted to keep types apart from everything else, because they
seem to be fairly contentious.  For this bullet, I really mean the
latter.  Arithmetic should be as simple as writing just arithmetic:

  1 + 2
  (+ 1 2)
  1 + 2;
  print 1 + 2

should all be complete programs.  I like your choice of phrase: no
"programming-in-the-large" declarations.
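To make the contrast concrete (Python is my illustration here, not a
language from the thread): in a lightweight language, the expression
*is* the program.

```python
# A complete program: no class wrapper, no main declaration,
# no programming-in-the-large mumbo jumbo.
print(1 + 2)
```

The Java equivalent needs a class, a public static void main, and a
compile step before that one expression can run.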

>    2. I should not have to declare types most of the time.  This is NOT
>       the same as saying the language does not enforce type boundaries
>       statically: in this sense, ML counts.

This explicitly rejects mandatory type declarations.  I make this
distinction because an ML person might disagree with my #2 but will
definitely agree with my #1.

>    3. I should be allowed to submit, but not run, buggy programs.  That
>       is, for rapid prototyping, I should be able to write a program with 
>       3 functions, A, B, and C; C can be buggy, but so long as I use A
>       and B only, I shouldn't be prevented from running the code.  ML
>       fails at this.  The ML-like type systems for Scheme succeed.
> OK.  I'll just point out that this probably isn't properly considered
> part of the definition of the language.  It seems to me that it's part
> of the programming environment.

No, I think it is part of the language in many cases.  You're right,
it is a matter of granularity, but I just need to make the granularity 
small enough to show up any language.  If I have one type-buggy method, 
Java won't let me compile.
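Python (again, purely my illustration) behaves the way #3 asks for: a
module may contain a broken definition, and you only pay for it if you
actually call it.

```python
def a():
    return 1 + 2

def b():
    return a() * 10

def c():
    # Type-buggy: int + str fails -- but only when c() is called.
    return "oops" + 1

# A and B run fine even though C is broken; Java would refuse to
# compile the whole class.
print(a(), b())
```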

> This may need to be divided into a "language definition" part and a
> "language implementation" part.  

I think only inasmuch as the language spec does or doesn't say "all
implementations must ...".  RnRS Scheme, for instance, is broken (imo)
in this regard.  It defines error conditions, but doesn't require an
implementation to catch them.  Seg faults are cool.  ["RnRS" stands
for the Revised^n Report on the Algorithmic Language Scheme, which is 
the de facto standard for the core of Scheme.]

>				   In C, could you build a C
> implementation that always detects array-out-of-bound errors at
> runtime?  

No!  Address arithmetic inhibits this.  I could do it *conservatively* 
(every time the program takes an address with &, I reject in
anticipation of future violations that may never come), but not
precisely.
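By contrast, a memory-safe language can detect every out-of-bounds
access, precisely because there is no address arithmetic to defeat the
check.  A Python sketch of what "always detected" looks like (my
illustration, not from the thread):

```python
xs = [1, 2, 3]
try:
    xs[5]          # out of bounds: always caught at runtime,
except IndexError as e:  # never a seg fault
    print("caught:", e)
```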

>    5. I'll cross the line from language to implementation (hit me hard on 
>       the head, Simon!): it needs to provide a rapid execution
>       environment.  
> Actually now I'm not sure whether you're talking about whether you
> are referring to the time taken by the program to run, or the
> time between the last edit and the beginning of program execution.
> Anyway, yes, this is implementation for sure.

I'm referring strictly to the latter: the time it takes for the
program to begin execution.  Scsh [Olin's great Scheme shell] takes 30
seconds to start up.  Not an LL.

>    Hence, I'll propose that weight is inversely proportional to the
>    number of syntactically legal expressions that execute correctly.  In
>    that respect, Arc is like Lisp on a diet.
> Gee, that's not what I expected you to say.  If it's bad for just
> about every syntactically legal program to have some meaning, then it
> would seem to be good to add redundancy that gives you extra checking,
> such as mandatory typed variables.  (Unless I'm misunderstanding the
> gist of your point...)

[I'm using "execute correctly" to mean "provide safe executions": they 
 don't violate the type contracts on any primitive operations.  If +
 only takes numbers, and you feed it a non-number, it halts with an
 error.  It doesn't, of course, guarantee that I didn't accidentally
 add a metric number to a British unit, unless that's part of the
 language spec ...]
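In Python terms (my illustration), the distinction between safety and
correctness looks like this:

```python
# "Safe execution": + enforces its type contract, so feeding it a
# non-number halts with an error rather than computing garbage.
try:
    1 + "one"
except TypeError as e:
    print("halted:", e)

# Safety does NOT catch logical unit errors: this runs "correctly".
metres = 5
feet = 3
total = metres + feet   # type-safe, dimensionally nonsense
```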

You have my gist exactly right.  I think the place where LLs win for
small programs is exactly where they fail for large programs.  For
small programs, the greater the overloading (what Graham calls
"polymorphism" in his Arc slides), the more concise the code.
Concision is clearly a win for quick-and-dirty jobs.  But to achieve
this, we must conflate unrelated concepts (the very definition of
overloading).  And that conflation invariably comes back to bite you.

This certainly implies that "it would seem to be good to add
redundancy that gives you extra checking, such as mandatory typed
variables".  But we have to be very careful about how we interpret
this phrase.

- Does it mean explicit type declarations?  Certainly not -- ML proves 
  that they are virtually never necessary.

- Does it mean partitioned types (each value belongs to exactly one
  type domain)?  Not really -- you can design a type system with gray
  areas around this.

- Does it imply a loss of genericity?  Certainly not -- again, see ML.
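On that last point: genericity needs no declarations at all.  Written
dynamically in Python (my sketch), the function below works for any
element type; the point about ML is that it infers the analogous
polymorphic type ('a list -> 'a) statically, so genericity survives
even with mandatory checking.

```python
# Generic without a single type declaration.
def first(xs):
    return xs[0]

print(first([10, 20]))     # 10
print(first(["a", "b"]))   # a
```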

ML really gets a lot right.  Its problem, I think, is that, like OO,
it doesn't scale *down*.  So we have LLs to occupy that niche.  The
challenge is to design languages that do not *prevent* them from
scaling up.  Gratuitous overloading, conflation of types, and so on
are all scaling inhibitors.  Ultimately, I don't believe one language
will scale all the way from quick-and-dirty scripting to large,
multi-coder programming (even with pretty smart multiple coders).

I do, however, think there's room for a *family* of related languages.
We've been pursuing this idea in the context of PLT Scheme, which
permits each module to be written in a different (possibly
domain-specific) language that can both extend and even restrict full
Scheme.  At the high end, I want a language with enough restrictions
to permit full ML-style type inference.  At the low end, I want Arc.
And the current PLT Scheme would sit in the middle.  I can then
implement a script in something like Arc, but import it into a large
program by making a module of it -- and sticking useful checks and
balances at its interface.
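A hypothetical sketch of that last move in Python terms (the names and
checks are mine, not PLT's): write the script fast and loose, then
bolt the checks and balances onto its interface when it joins a larger
program, leaving the body untouched.

```python
# The quick-and-dirty script: no declarations anywhere.
def average(xs):
    return sum(xs) / len(xs)

# The checked interface added when the script becomes a module in a
# larger system.
def checked_average(xs):
    assert isinstance(xs, list) and xs, "non-empty list required"
    assert all(isinstance(x, (int, float)) for x in xs), "numbers only"
    return average(xs)

print(checked_average([1, 2, 3]))   # 2.0
```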