
Re: Icon




   Date: Fri, 21 Dec 2001 02:53:02 +0200
   From: Lauri Alanko <la@iki.fi>

   On Thu, Dec 20, 2001 at 06:09:52PM -0500, Guy Steele wrote:
   > See?  it's easy!  :-)
   
   It's even easier in Haskell:

Yes, this is very nice!  One way to look at a Monad is that
it provides a way to map whatever data structure is being
used as a "result value" into some other kind of data structure
(plus providing a useful algebra on such data structures).
Of course, the "data structure" might be a function.
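
To make the "algebra" concrete with the most familiar case (this gloss
and the ad hoc names below are mine): if the result value is an ordinary
list, the algebra is just the singleton, the empty list, and append.
The M type below is, in effect, the same algebra with the data structure
being a function.

	-- the interface specialized to plain lists (ad hoc names)
	returnL :: a -> [a]
	returnL x = [x]

	mzeroL :: [a]
	mzeroL = []

	mplusL :: [a] -> [a] -> [a]
	mplusL = (++)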

   \begin{code}
   module M where
   
   -- use some standard monad combinators
   import Monad
   
   -- define the monad type, which is precisely of the form Guy described.
   -- Here r is a type parameter for the ultimate return value, and a is
   -- the type of value that this M will generate.
   newtype M r a = M { runM :: (a -> r -> r) -> r -> r }

I think you are able to identify "failure continuation"
with "result value" because Haskell is lazy.  In an eager
language, wouldn't the signature be more like

	runM :: (a -> r -> r) -> (() -> r) -> r

?
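
For concreteness, here is a rough sketch (mine, not part of Lauri's
code) of that thunked variant, still written in Haskell but with the
failure branch delayed behind a unit argument as an eager language
would require:

	-- hypothetical eager-style variant of M (my names)
	newtype ME r a = ME { runME :: (a -> r -> r) -> (() -> r) -> r }

	returnE :: a -> ME r a
	returnE x = ME (\sc fc -> sc x (fc ()))

	bindE :: ME r a -> (a -> ME r b) -> ME r b
	bindE (ME m) f = ME (\sc fc -> m (\x r -> runME (f x) sc (\() -> r)) fc)

	mzeroE :: ME r a
	mzeroE = ME (\_sc fc -> fc ())

	mplusE :: ME r a -> ME r a -> ME r a
	mplusE (ME m) (ME m') = ME (\sc fc -> m sc (\() -> m' sc fc))

(In a fully eager language one would presumably also want to delay the
success continuation's second argument, so that a first solution can be
taken without computing the rest.)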

   -- make it a monad, meaning:
   instance Monad (M r) where
       -- a pure value is converted to an M by applying the success 
       -- continuation to it
       return x = M (\sc -> sc x)
       -- when we want to operate on an M, we simply apply it to a success
       -- continuation that applies the function
       M m >>= f = M (\sc -> m (\x -> runM (f x) sc))
   
   instance MonadPlus (M r) where
       -- A failure means simply returning the failure value
       mzero = M (\sc fc -> fc)
       -- while combining two Ms means giving one's return value as the other's 
       -- failure argument (. means function composition)

More precisely (or, in an eager language), the failure continuation of one
is a computation () -> r that will evaluate the other?

       M m `mplus` M m' = M (\sc -> m sc . m' sc)

And the wonderful thing about the monad formulation is that,
despite the fact that it is not obvious at first (and I half
thought there was an error here), mzero is indeed a left identity
as well as a right identity for mplus!
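
Spelling out the little calculation that convinced me:

	runM mzero sc  =  \fc -> fc  =  id

	mzero `mplus` M m'  =  M (\sc -> runM mzero sc . m' sc)
	                    =  M (\sc -> id . m' sc)
	                    =  M m'

	M m `mplus` mzero   =  M (\sc -> m sc . runM mzero sc)
	                    =  M (\sc -> m sc . id)
	                    =  M m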

   -- Some silly cruft to use numerical operations conveniently
   instance (Num n) => Eq (M r n)
   instance (Num n) => Show (M r n)
   instance (Num n) => Num (M r n) where
       (+) = liftM2 (+)
   
   -- A set of values is defined by making each of the pure values an M (by 
   -- return), and then combining them with mplus (msum = foldr mplus mzero)
   
   to lower upper = msum (map return [lower..upper])
   
   \end{code}
   
   This technique is described in detail in Ralf Hinze's ICFP 2000 paper
   <http://citeseer.nj.nec.com/hinze00deriving.html>.
   
   Here's an example:
   
   M> runM (to 1 3 + to 4 6) (:) []
   [5,6,7,6,7,8,7,8,9]
   
   Here we used list construction as the way to combine values. Another
   option:
   
   M> runM (to 1 3 + to 4 6) (\a b -> print a >> b) (return ())
   5
   6
   7
   6
   7
   8
   7
   8
   9
   
   Here we use first-class IO actions as the return value, which is a
   characteristic feature of Haskell.
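
Two more instantiations of r suggest themselves (my examples; I believe
these are right, though I have not run them):

	runM (to 1 3 + to 4 6) (\_ n -> n + 1) 0          -- count the solutions: 9
	runM (to 1 3 + to 4 6) (\a _ -> Just a) Nothing   -- keep only the first: Just 5

The first uses a number as the result value, the second a Maybe.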
   
   Note that there is no need for any syntax transformations; this is pure
   Haskell. You can get surprisingly far without macros when just about
   everything is a first-class value...

A similar thing might be achieved in Lisp by decreeing that
every function must return a pair of a value and a failure
continuation.  But enforcing this decree would be difficult.
Haskell's polymorphic inferred type system is at once dictatorial
and unobtrusive.

--Guy