
Re: various recent ll1 threads



Sundar Narasimhan wrote:

> I wish we were discussing how to make tomorrow's computer programs
> easier to write. The kind where the domains weren't just strings and
> numbers but perhaps real life entities like times, places and music. I
> wish programming languages would take a huge step up -- not endlessly
> hash old arguments about how to write 2+1, whether comments are
> meaningful and whether or not you can make more money w/ or w/out
> parens. In short, think not "little languages", but "really big"
> targets -- like secure, reliable programs that the rest of the
> non-computing world is clamoring for.

A fair plea, and one that I think is being satisfied more than you
imagine -- you shouldn't take the LL1 list as representative of what
the programming languages research community normally discusses.

Functional and declarative languages are seeing use in domains such as
robotics, vision, finance, etc. (and, long before Haskell was a mite in
Wadler's eye, Common Lisp was already in many of those domains, and
persists there still).  Check out some of the papers in the Practical
Aspects of Declarative Languages symposia, for instance.

In my department, Pascal Van Hentenryck does some really neat work on
numerical optimization, yet he views his work as falling in
programming languages because it pushes declarative programming to the
extreme in a very recognizable way: specify the lay of the land, I'll
find you a solution.  So while he does the algorithmic heavy lifting
underneath, his users simply see a nice, somewhat restricted,
programming language.
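
(For flavor only, and nothing like his actual systems, which do real
algorithmic work: here is the shape of that interface in a toy
Haskell sketch I made up.  The user only declares domains and
constraints; a deliberately naive solver underneath goes and finds an
assignment.)

import Data.List (find)

-- The "user program" only states the lay of the land: variable
-- domains plus constraints.  The heavy lifting (here, brute-force
-- enumeration) is hidden underneath.
type Assignment = [(String, Int)]

solve :: [(String, [Int])]        -- each variable's domain
      -> [Assignment -> Bool]     -- constraints over an assignment
      -> Maybe Assignment
solve doms cs = find satisfies (assignments doms)
  where
    satisfies a = all ($ a) cs
    assignments []              = [[]]
    assignments ((v, dom):rest) =
      [ (v, x) : a | x <- dom, a <- assignments rest ]

-- "Specify the lay of the land": x, y in 0..9, x + y == 7, x > y.
example :: Maybe Assignment
example = solve [("x", [0..9]), ("y", [0..9])]
                [ \a -> val "x" a + val "y" a == 7
                , \a -> val "x" a > val "y" a ]
  where val v a = maybe 0 id (lookup v a)

main :: IO ()
main = print example    -- Just [("x",4),("y",3)]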

I bring up his example to counter your conclusion:

>	  In short, think not "little languages", but "really big"
> targets -- like secure, reliable programs that the rest of the
> non-computing world is clamoring for.

Thinking in terms of little languages is critical.  I've long felt
that people who discuss little languages miss the most important aspect
they bring to the table: what they subtract, rather than what they
add.  The most useful and interesting little languages are those which
don't try to be general-purpose languages, much less add-ons to
general-purpose languages (a popular Lisper view).

A good little language knows its negative space well up-front.  It's
not trying to solve combinatorial optimization problems *and also*
give you a better FOLD operator.  Indeed, it gets power precisely from 
the assumptions it makes.  If it erects a type system that guarantees
all programs in the language will terminate, then its optimizer can
in turn exploit that knowledge when substituting terms, say.
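
To make that concrete, here is a contrived sketch (in Haskell, with
made-up names; it describes no real system): a toy language of
arithmetic and let-bindings, with no recursion and no loops, so every
program terminates.  Because of that guarantee, the optimizer below
can inline a bound term into its use sites, duplicating or discarding
work as it pleases, without ever changing whether a program halts.

-- A toy total language: arithmetic plus let, no recursion, no loops.
data Expr
  = Num Int
  | Var String
  | Add Expr Expr
  | Mul Expr Expr
  | Let String Expr Expr          -- let x = e1 in e2
  deriving Show

-- Substitute e for x.  (Capture-avoidance is ignored; it's a toy.)
subst :: String -> Expr -> Expr -> Expr
subst x e (Var y) | x == y    = e
                  | otherwise = Var y
subst _ _ (Num n)   = Num n
subst x e (Add a b) = Add (subst x e a) (subst x e b)
subst x e (Mul a b) = Mul (subst x e a) (subst x e b)
subst x e (Let y rhs body)
  | x == y    = Let y (subst x e rhs) body
  | otherwise = Let y (subst x e rhs) (subst x e body)

-- The optimizer inlines every let and folds constants.  Safe only
-- because every term is guaranteed to terminate.
optimize :: Expr -> Expr
optimize (Let x rhs body) = optimize (subst x (optimize rhs) body)
optimize (Add a b)        = fold2 Add (+) (optimize a) (optimize b)
optimize (Mul a b)        = fold2 Mul (*) (optimize a) (optimize b)
optimize e                = e

fold2 :: (Expr -> Expr -> Expr) -> (Int -> Int -> Int)
      -> Expr -> Expr -> Expr
fold2 _ op (Num a) (Num b) = Num (op a b)
fold2 c _  a       b       = c a b

main :: IO ()
main = print (optimize
  (Let "x" (Add (Num 1) (Num 2)) (Mul (Var "x") (Var "x"))))
  -- prints Num 9

When terms can diverge, whether such a substitution preserves meaning
depends delicately on the evaluation strategy; here there is nothing
to be delicate about, because nothing can diverge.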

This seems to contradict another unassailable bit of wisdom, which is
that no little language stays little for too long.  Eventually, they
all inherit variables, loops, functions, higher-order functions (Paul
Prescod may disagree), objects, and so on.  But I don't think these
are irreconcilable.  It only means we need to work harder at defining
the little/big boundary.

Awk, for instance, includes regular expressions, but it's Awk that has
the functions: not the regexps.  Olin Shivers taught me the incredibly
clever idea of writing implicitly backquoted macros.  In some
languages like PLT Scheme, you can now nicely combine modules that are
each written in different languages.  These approaches all dance
around a solution, but I haven't seen one that brings all the
elements together: the little language has callouts, yet the
little-language analyzer/optimizer can still work across those
callouts.  (The PLT Scheme solution, for instance, fails at this.  In
some ways, it resembles MIME more than Unicode.)  I can imagine
several reasons why this hasn't been done and why, in the raw (i.e.,
without better interfaces), it may be impossible.  Building those
better interfaces is a
research challenge.  What's not clear is who's buying (try raising
funds to do this kind of research!).
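
To sketch what I mean by better interfaces (again a toy in Haskell
that I made up, an illustration rather than a proposal): suppose
every callout from the little language into the host carries the
facts the host is willing to promise about it, say that the called
function is pure and total.  The analyzer can then keep working
across a callout that makes those promises, and must stop at one that
doesn't.

-- A little language with callouts to host functions.  Each callout
-- is tagged with an interface describing what the host promises.
data Iface = Opaque | PureTotal

data Expr
  = Num Int
  | Add Expr Expr
  | Callout Iface String (Int -> Int) Expr   -- host fn on an Int arg

-- Constant folding that crosses only well-behaved callouts.
optimize :: Expr -> Expr
optimize (Add a b) =
  case (optimize a, optimize b) of
    (Num x, Num y) -> Num (x + y)
    (a', b')       -> Add a' b'
optimize (Callout iface name f arg) =
  case (iface, optimize arg) of
    (PureTotal, Num n) -> Num (f n)                  -- safe to run now
    (_,         arg')  -> Callout iface name f arg'  -- still a barrier
optimize e = e

render :: Expr -> String
render (Num n)              = show n
render (Add a b)            = "(" ++ render a ++ " + " ++ render b ++ ")"
render (Callout _ name _ a) = name ++ "(" ++ render a ++ ")"

main :: IO ()
main = do
  -- The host vouches for this callout, so it folds away entirely.
  putStrLn (render (optimize
    (Add (Num 1) (Callout PureTotal "double" (* 2) (Num 20)))))
  -- This one is opaque; the analyzer stops at its boundary.
  putStrLn (render (optimize
    (Add (Num 1) (Callout Opaque "mystery" id (Add (Num 2) (Num 3))))))
  -- prints: 41  and  (1 + mystery(5))

The real problem is of course harder than a two-point lattice of
promises; the challenge is getting hosts to state, and keep, such
promises across a genuine language boundary.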

Shriram