
Re: Icon



On Fri, Dec 21, 2001 at 06:59:23PM -0500, Guy Steele - Sun Microsystems Labs wrote:
>    On Thu, Dec 20, 2001 at 06:09:52PM -0500, Guy Steele wrote:
>    > See?  it's easy!  :-)
>    
>    It's even easier in Haskell:
> 
> Yes, this is very nice!  One way to look at a Monad is that
> it provides a way to map whatever data structure is being
> used as a "result value" into some other kind of data structure
> (plus providing a useful algebra on such data structures).
> Of course, the "data structure" might be a function.
> 
>    \begin{code}
>    module M where
>    
>    -- use some standard monad combinators
>    import Monad
>    
>    -- define the monad type, which is precisely of the form Guy described.
>    -- Here r is a type parameter for the ultimate return value, and a is
>    -- the type of value that this M will generate.
>    newtype M r a = M { runM :: (a -> r -> r) -> r -> r }
> 
> I think you are able to identify "failure continuation"
> with "result value" because Haskell is lazy.  In an eager
> language, wouldn't the signature be more like
> 
> 	runM :: (a -> r -> r) -> (() -> r) -> r
> 
> ?

Probably more like (a -> (() -> r) -> r) -> (() -> r) -> r.

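To make the eager-style signature concrete, here is a minimal sketch (the names `E`, `zeroE`, `retE`, and `plusE` are mine, not from the discussion) that thunks the failure continuation as `() -> r`, the way one would have to in Scheme or ML:

```haskell
module Main where

-- Hypothetical eager-style variant of M: the failure continuation is
-- an explicit thunk () -> r, so it is only evaluated on actual failure.
newtype E r a = E { runE :: (a -> (() -> r) -> r) -> (() -> r) -> r }

-- Failure forces its failure thunk.
zeroE :: E r a
zeroE = E (\_sc fc -> fc ())

-- Success passes the value and the current failure thunk along.
retE :: a -> E r a
retE a = E (\sc fc -> sc a fc)

-- Choice: delay the second computation inside a thunk.
plusE :: E r a -> E r a -> E r a
E m `plusE` E m' = E (\sc fc -> m sc (\() -> m' sc fc))

main :: IO ()
main = print (runE (zeroE `plusE` retE 7) (\a _ -> a) (\() -> 0))
-- prints 7: zeroE fails, so the thunked retE 7 is forced
```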
But basically, yes. Actually, there are two issues here. One is that Haskell
is purely functional and therefore only the result value of the continuation
matters, as it cannot cause any side effects (unless you count
non-termination as a side effect). You can do purely functional monadic
programming even with Scheme or ML.

The other issue is that Haskell is non-strict, meaning that even an error or
an infinite computation can be passed as an argument without propagating the
error further, provided that the argument is not _used_. Laziness (or
call-by-need) is just a technique for implementing non-strictness.
Call-by-name (i.e. laziness without memoization) would have similar semantics
but horrendous performance.

But I don't think that non-strictness is actually essential anywhere in my
example. So it should translate to other languages as well.

In the more general version of the backtracking monad, as presented in
Hinze's paper, the data type is actually a monad transformer, parameterized
by another monadic type that defines the type of computation of the return
value. I left that out since it was immaterial for the current discussion.

>        -- while combining two Ms means giving one's return value as the other's 
>        -- failure argument (. means function composition)
> 
> More precisely (or, in an eager language), the failure continuation of one
> is a computation () -> r that will evaluate the other?
> 
>        M m `mplus` M m' = M (\sc -> m sc . m' sc)
> 
> And the wonderful thing about the monad formulation is that,
> despite the fact that it is not obvious at first (and I half
> thought there was an error here), mzero is indeed a left identity
> as well as a right identity for mplus!

That was a compact definition (verbatim from Hinze), but not very intuitive.
The eta-expanded version might be easier:

M m `mplus` M m' = M (\sc fc -> m sc (m' sc fc))

This ought to make it clearer: "apply m to the success continuation sc, and
if m fails, use the result value gained by applying m' to sc, and if _that_
fails, use the original failure value fc".
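Putting the quoted pieces together, here is a self-contained sketch (combinator names primed to avoid clashing with the Prelude; the test values are mine) that checks the identity laws Guy mentions:

```haskell
module Main where

-- The monad type exactly as quoted above.
newtype M r a = M { runM :: (a -> r -> r) -> r -> r }

-- Failure: return the failure value unchanged.
mzero' :: M r a
mzero' = M (\_sc fc -> fc)

-- Success: pass the value and the failure value to the continuation.
ret :: a -> M r a
ret a = M (\sc fc -> sc a fc)

-- The eta-expanded mplus from the text: run m with sc, using
-- "m' run with sc and the original fc" as m's failure value.
mplus' :: M r a -> M r a -> M r a
M m `mplus'` M m' = M (\sc fc -> m sc (m' sc fc))

main :: IO ()
main = do
  -- mzero' is a left identity: the failure falls through to ret 1
  print (runM (mzero' `mplus'` ret 1) (\a _ -> a) 0)
  -- ...and a right identity: ret 1 succeeds before mzero' is consulted
  print (runM (ret 1 `mplus'` mzero') (\a _ -> a) 0)
```

Both lines print 1, as the identity laws require.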

>    Note that there is no need for any syntax transformations, this is pure
>    Haskell. You can get surprisingly far without macros when just about
>    everything is a first-class value...
> 
> A similar thing might be achieved in Lisp by decreeing that
> every function must return a pair of a value and a failure
> continuation.  But enforcing this decree would be difficult.
> Haskell's polymorphic inferred type system is at once dictatorial
> and unobtrusive.

In addition, each function would still have to take a failure continuation
as a parameter, so it will have something to return, right?

But the real problem is more in combining values of this sort, e.g. applying
one to another. This needs some tricky combinators, and even in Haskell there
is a special "do"-syntax for monadic code. It's just as Matthias Felleisen
noted: new binding constructs still require syntactic extensions.

(Or is there a language with first-class variable names?)


Lauri Alanko
la@iki.fi