
Re: call/cc

The Other Steele writes:
> For a really good test that will provide better separation
> of the languages, try curried addition of first one parameter,
> then the other, plus 1:
> (((lambda (x) (lambda (y) (+ x y 1))) 3) 5)     Scheme, 24 tokens

(\x -> (\y -> x + y + 1)) 3 5     -- Haskell, 17 tokens

(All the savings come from removing Scheme's parens :-)

I'd also like to see how these languages bind the function to a variable
and call that:

let f = \x y -> x + y + 1 in f 3 5     -- Haskell, 16 tokens
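
For comparison (my own sketch, not from the original post), the Scheme
analogue of binding the function and calling it:

```scheme
;; Bind the (uncurried) function to f with let, then apply it.
(let ((f (lambda (x y) (+ x y 1))))
  (f 3 5))  ; => 9
```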

But these are special cases of asking how good a fit there is between
functional concepts, such as the Greek-letter conversions (alpha, beta, eta),
and, in the case of Scheme and Haskell, functional programming languages.
Generalizing:

A full catalogue might put concepts from various programming paradigms along
one dimension, and languages along the other.  Functional languages, such as
Haskell, are unsurprisingly more succinct when expressing functional
notions; an "OO" language such as Java is more natural for polymorphism.
But it's interesting to see how well a language can express paradigms
outside its cell: how well Haskell expresses imperative or OO code as compared
to how well Scheme expresses OO (for some value of OO) or static type
information.  There are several possible measures for each "off-diagonal"
cell: how well the concept can be expressed as a one-off (for example,
representing an object as a closure over its methods in Scheme), how well it
can be expressed using a library, and how hard it is to write the library.
(For example, template wizardry can make C++ more functional --- the library
is incredibly difficult to write, but using it fits into C++ pretty well ---
whereas Java can't be "lifted" to an expressive functional language without
a compiler front-end.)  This kind of table --- and no, I'm not volunteering
to collect it --- would give some idea of how well the language can grow in
different directions.
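
The closure-over-methods trick mentioned above can be sketched as a one-off
like this (a minimal sketch, not a full object system; `make-counter` and the
message-passing style are my own illustrative choices):

```scheme
;; An "object" as a closure: the state (count) lives in the captured
;; environment, and the returned procedure dispatches on a message symbol.
(define (make-counter)
  (let ((count 0))
    (lambda (msg . args)
      (case msg
        ((inc!)  (set! count (+ count 1)) count)
        ((value) count)
        (else    (error "unknown message" msg))))))

(define c (make-counter))
(c 'inc!)   ; => 1
(c 'inc!)   ; => 2
(c 'value)  ; => 2
```

Each call to make-counter yields a fresh, encapsulated instance; only the
dispatcher can touch count, which is about as much data hiding as many OO
languages offer.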