This does not appear to implement a monadic computation. A monadic computation can examine intermediate steps of the computation and make flow-control decisions, e.g. in Haskell:

    showFlowControl :: Person -> Maybe County
    showFlowControl p = do
        country <- personCountry p
        case country of
            USA -> do
                state <- state country
                county state
            SomethingElse -> do
                province <- province country
                provCounty province
            _ -> Nothing
(The implied data structure is a bit bizarre, but, bear with me.)
Unless I am misreading the code, this is the rather frequent misunderstanding that monads are just fancy chained method calls, which does not suffice to capture the monadic behavior. Note that merely trying to stuff "if" statements into the calls won't be sufficient, because you must actually affect the chain that is called; either you call "state" or you call "providence", but you must decide that "in" the monadic computation, not in mere method chaining.
Mere method chaining is not monadic. There is, I believe, no clever way to implement monadic computations in Javascript without nested functions; every layer of the monadic computation nests, and in most languages this cannot be hidden in any practical way. This is one of the major reasons monadic computations are impractical in most languages.
I think bind is essentially correct too, but the failure to use it properly tends to suggest a level of understanding too small to be writing Yet Another Monad Tutorial. I've seen real JS monads, and I believe they are irretrievably ugly to use, with the need for nested functions.
Helpful aside: "Province", not "providence". The former is an administrative division of a larger polity; the latter is the grace of God, which could lead to some confusion when mentioned in this context.
Oops. No idea where that came from; I know better. Perhaps the backwards state of SomethingElse believes not only in the divine right of Kings, but the divine right of Dukes and perhaps even other nobility as well...
Well, if you assume both a peerage and a divine-right monarchy, then the divine right of peers seems more or less implicit, since they're created as such by the Crown, and a monarch anointed and guided by God would not fail to recognize who among her subjects is worthy of elevation. Subordination doesn't seem hard to handle in such a system; a duke's word, for example, would be God-given law save where it happened to conflict with the dictates of the Queen, an earl's likewise in relation to a duke, and on down the line, and presumably such conflicts could be handled by any noble of higher rank than the one found to be in conflict -- an earl, for example, might censure an unruly baron.
…come to think about it, given a few global string replacements, haven't we just more or less reinvented both the British peerage and the Roman Catholic Church?
So, I don't get all the pain. There are a lot of 'rules' around monads, but conceptually they are widgets in a workflow.
In imperative programming you say what you want to do and do it at the same time.
With monads, you assemble a chain, just like dropping boxes on a workflow UI. Just like a workflow, you have 'defined' what to do, but not done anything with it yet. When you give it input, the chain performs the transforms you have defined for it, and you can pull out the result.
Again, not a 'technical definition', but it paints a picture in people's heads that they can appreciate, and you can follow it up with details.
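To make the analogy concrete, here is a minimal sketch in Haskell's State monad (the names are mine, purely for illustration): the chain is assembled first, and nothing happens until you hand it an input.

    import Control.Monad.State

    -- Assemble the workflow: each step is defined, but nothing runs yet.
    pipeline :: State Int Int
    pipeline = do
        modify (+ 1)   -- box 1: increment
        modify (* 2)   -- box 2: double
        get            -- box 3: read off the result

    -- Only when we supply an input does the chain actually execute.
    main :: IO ()
    main = print (evalState pipeline 10)   -- prints 22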
I made a port of some Scala Monads to Javascript. Mostly, Option and Either. Makes for some really nice code, and you won't find yourself writing (typeof foobar === "undefined").
Suggestion for the author. Reformat the code of your translated example. I truly found it impenetrable until I formatted it more like idiomatic d3.js code. With this reformatting, I'll buy that it is less annoying and tedious than the original.
I'm not sure this is actually using the fact that Maybe is monadic - I think this much can be done with applicative functors. In order to really be monadic, you'd need to run the next bind inside the function passed to the previous bind.
Uh, no. This precludes the possibility of saying "I know this cannot fail (modulo seriously crazy situations like hardware failure or OOM)", which lets you significantly reduce the number of spurious exceptional conditions you need to handle. That reduces code noise if you actually try to treat every impossible exceptional case as if it were possible, and increases safety if you don't.
This criticism makes no sense; I don't know what you think monadic computations are, but it isn't what you think. I'd be more specific, but I honestly don't know what you're thinking.
The fact that the posted explanation is wrong doesn't help, I'm sure.
This misses the simpler explanation of Monads:
They're functions provided by the language core,
that cannot be implemented in the language itself,
because the language rules forbid it.
This should be the first thing any of these introductions explain. Afterwards they can spend however long they like explaining all the rules for which Monads are solutions and how they get around them, safe in the knowledge that the reader will at least know why monads exist and why people are going to all these efforts.
I assume that definition is motivated by the Haskell IO monad, but it's unfortunately inaccurate. Haskell IO is provided by the language core, but that's not what makes it a monad - in older versions of Haskell IO was not monadic, but the language still provided IO operations.
Monad is just an interface that the IO type supports, which allows you to structure your code in a particular way (an imperative-looking sequence of instructions and assignments) that people find more intuitive and convenient. You can still use IO without using the monadic interface (e.g. it's also a Functor).
Conversely, most other Haskell monads are implemented in pure Haskell and are fairly easy to reimplement yourself. The State monad in the standard library is pure Haskell code.
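For instance, here is a minimal sketch of a State monad written from scratch in ordinary Haskell (not the library's exact code, but the same shape):

    newtype State s a = State { runState :: s -> (a, s) }

    instance Functor (State s) where
        fmap f (State g) = State (\s -> let (a, s') = g s in (f a, s'))

    instance Applicative (State s) where
        pure a = State (\s -> (a, s))
        State f <*> State g = State (\s ->
            let (h, s')  = f s
                (a, s'') = g s'
            in (h a, s''))

    instance Monad (State s) where
        State g >>= k = State (\s ->
            let (a, s') = g s
            in runState (k a) s')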
What does a complete Haskell program using non-monadic IO look like?
As for "most other Haskell monads", from what i can tell they're just wrappers around the core monads with additional functionality tacked on and i consider it lazy to also call them monads instead of wrappers.
Not really, they are independent of IO (you may be thinking of monad transformers, which are ways to extend an existing monad.)
For example, this is the entire definition of the Maybe monad:
    data Maybe a = Nothing | Just a
        deriving (Eq, Ord)

    instance Functor Maybe where
        fmap _ Nothing  = Nothing
        fmap f (Just a) = Just (f a)

    instance Monad Maybe where
        (Just x) >>= k = k x
        Nothing  >>= _ = Nothing
        (Just _) >>  k = k
        Nothing  >>  _ = Nothing
        return = Just
        fail _ = Nothing
It's just a way to propagate failure when sequencing computations. (That's why the "Nothings" are always followed with more "Nothings" in the definition.)
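For example (halve is an invented helper), chaining two possibly-failing steps with (>>=) short-circuits at the first Nothing:

    halve :: Int -> Maybe Int
    halve n = if even n then Just (n `div` 2) else Nothing

    quarter :: Int -> Maybe Int
    quarter n = halve n >>= halve
    -- quarter 8 == Just 2
    -- quarter 7 == Nothing (the second halve never runs)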
I'm beginning to see how my definition is wrong. Right now it seems to me like the simple definition of a Monad is: It's a prettied-up workaround.
A bit I think I need to clarify: in my opinion most monad explanations fall entirely flat because they explain WHAT a monad does, but NEVER explain WHY they were created in the first place, making the whole exercise somewhat akin to mathematics teaching in US American schools.
With what all the responses pointed out, it seems like monads get created when a pain point of a language is so often, and so consistently, worked around in the same fashion that someone decided to make an implementation of the workaround that looks like basic syntax, regardless of whether the implementation happens in the core or in the language itself.
Thus for example Moose, a library that bolts object orientation syntax onto Perl, would probably also be a monad.
In Haskell there is a kind of syntactic sugar called "do-notation" that simplifies working with monadic code and is bolted onto the core of the language. But it is not necessary, you can program with monads without it.
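A small sketch of the same Maybe computation with and without the sugar (sumTwo is an invented name):

    -- With do-notation:
    sumTwo :: Maybe Int
    sumTwo = do
        x <- Just 1
        y <- Just 2
        return (x + y)

    -- Desugared into plain binds:
    sumTwo' :: Maybe Int
    sumTwo' = Just 1 >>= \x -> Just 2 >>= \y -> return (x + y)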
"It it seems like monads get created when a pain point of a language is so often, and so consistently worked around in the same fashion"
I'm not sure what to think about this answer. The first paragraph doesn't seem to address anything I said. The linked article seems to agree with my understanding, judging from the preamble, but contains too much Haskell syntax for me to be able to understand what he's explaining, so I can't say for sure. And I know even less about the mathematical aspect.
However, still, thanks for letting me know that I might be moving in the right direction.
Nope, definitely still off track. Typeclasses are a refactoring and abstraction tool used here, but monads are just an abstraction. They exist whether or not you reorganize your code. Even when you're using tuples and the like to implement state you still have a monad, you're just footing the bill on a bunch of complexity to implement it.
This is the same as how people talk about Javascript and C as "having monads in their semicolons". Semicolons do behave much like monads. They're just not first-class nor officially recognized as such.
> Typeclasses are a refactoring and abstraction tool used here, but monads are just an abstraction. They exist whether or not you reorganize your code. Even when you're using tuples and the like to implement state you still have a monad, you're just footing the bill on a bunch of complexity to implement it.
I literally have no idea what any of that means. It does not seem in any way to explain why you disagree with my understanding.
For example:
> They exist whether or not you reorganize your code.
This seems like the definition of a non-sequitur to me. Replacing a few words, this sentence reads to me semantically the same as
"Anonymous subroutines exist whether or not you reorganize your code."
A true statement, but not one that conveys any previously unknown meaning.
Also please note that nothing you said addresses the WHY of monads.
Monads don't have a why. They are simply a common pattern. Typeclasses enable polymorphism in Haskell and polymorphism allows for boilerplate reduction and code reuse. Monads can be factored out using typeclasses and usually are in Haskell. You use monads whether or not you're using Haskell and whether or not your language has built-in state manipulation. Certain types in Haskell are monadic and thus provide examples of functionality which shares that common pattern. Haskell exploits all of this in many ways:
1. By tagging effects with types you have better documentation and safer programs.
2. By organizing many distinct types along their common monadic pattern you reduce boilerplate and provide a simple API.
3. By exploiting that API generically you build up a set of language constructs which operate "over all monads", which makes for a powerful construction kit.
4. By exploiting some algebraic structure that is a natural consequence of the generic pattern of monads you can decompose and recombine them into what are called transformer stacks.
Monads have no relation whatsoever to boilerplate reduction besides the fact that if you (a) notice their existence and (b) operate in a language with powerful enough generics then you can factor that common pattern to reduce boilerplate using that generics support.
If you recognize monads exist and don't have generics support you can still use the concept to better analyze your code and seek out ways of layering in new effects if desirable. You just won't reduce the boilerplate. This is the case with most languages which implement monads as a library: Javascript, Ruby, Clojure, Erlang.
Well, since you refuse to use English on a level that can be understood by a reasonably well-taught non-American, I can only tell you that while I have heard all the words you use before and understand their individual meanings, I have no idea what those sentences are meant to communicate when put together.
There are a few ways you can speak of "the WHY of monads". Most basically, monads are one way of 1) combining things 2) which (may) produce a value, 3) where which things you combine in later may depend on earlier values produced.
Why do we speak of them, as opposed to just combining things in that way when we happen to? Partly, for the reasons people generally speak of design patterns: it's one fewer degree of freedom in your design which makes it easier to reason about and when others know the pattern it is easier to communicate what you are doing to them. In one-off cases, in most languages with (comparatively) limited powers of abstraction, this is most of what's useful about saying "hey, that's a monad!".
In Haskell, "Monad" is also an interface for things that fit that pattern, and we have a lot of functions that operate against that interface to do useful things, so that we don't need to rewrite those things for each new thing that happens to be a monad.
For instance:
    mapM :: Monad m => (a -> m b) -> [a] -> m [b]
which is like map, but takes a function that returns a monadic value instead of any function, and instead of producing a list of those monadic values it stitches together the values and gives you a wrapped list of the values. To drive intuition, we can specialize this for a few individual monads:
    mapM :: (a -> Maybe b) -> [a] -> Maybe [b]
Maybe. Pass it a function that may fail, and a list of values to feed into that function, and it will give you "Just" a list of the results or "Nothing" if any function fails.
    mapM :: (a -> Identity b) -> [a] -> Identity [b]
Identity. As you would pretty much expect, this is isomorphic to map. Not a lot of reason to do this unless you've got some other function you care about which is expecting something of this form and you want to basically pass map.
    mapM :: (a -> [b]) -> [a] -> [[b]]
List. The list monad is for nondeterminism - "take every path". I... don't immediately see a good use for mapM in list - something like concatMap is much more useful - but if you ever have a need for it...
    mapM :: (a -> IO b) -> [a] -> IO [b]
IO. Pass in a function that returns an IO action, and a list of parameters for that function, and give back an IO action that produces a list of the results. A simple example:
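    -- (a sketch of such an example; "ask" is an assumed helper name)
    ask :: String -> IO (String, String)
    ask label = do
        putStrLn ("Enter your " ++ label ++ ":")
        answer <- getLine
        return (label, answer)

    askAll :: IO [(String, String)]
    askAll = mapM ask ["height", "weight"]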
and would, when executed, prompt the user for height and weight in turn, and return a list of tuples containing both the label and the response. You can see how you could trivially add "age".
    mapM :: (a -> State s b) -> [a] -> State s [b]
State. State actions operate on an additional piece of data that's carried around and threaded through the chain of binds. This specialization of mapM takes a function that produces actions which use or modify state, threads them together into one big such action.
    mapM :: (a -> Parser b) -> [a] -> Parser [b]
Parser, from the wonderful attoparsec. Here, we pass in a function that generates parsers, and a list of inputs to that function, and get a parser that parses a list!
There are a ton more similarly abstract, comparably useful functions, some of which are in http://hackage.haskell.org/package/base-4.6.0.1/docs/Control... - and you can always write something new when there's a pattern you want to exploit for which a combinator doesn't exist. All of this - both the implementations of the monads (except the internals of IO) and the functions that operate on them - is written in ordinary Haskell. You can check out the source on Hackage, if you're curious as to how.
mapM is implemented:
    mapM f as = sequence (map f as)
If we inline sequence, that becomes:
    mapM f as = foldr k (return []) (map f as)
      where
        k m m' = do { x <- m; xs <- m'; return (x:xs) }
We can strip out the do notation:
    mapM f as = foldr k (return []) (map f as)
      where
        k m m' = m >>= (\x -> m' >>= (\xs -> return (x:xs)))
where it should be somewhat apparent there's not actually any voodoo.
You might be conflating monads with do-notation? Do-notation is a syntactic tweak to make using monads a little prettier, and is implemented in the language, but you can just as well use monads without it. Consider the IO example:
    import System.Environment (getArgs)

    main = getArgs >>= mapM_ putStrLn

where

    getArgs :: IO [String]
    putStrLn :: String -> IO ()
    mapM_ :: Monad m => (a -> m b) -> [a] -> m ()
A lot of monads in the mtl (State, Writer, Reader, &c) are implemented by applying their transformer to Identity. Your point stands, as that's not the only way to implement them, isn't the way many other monads are implemented, and Identity itself is implemented just in Haskell code, but it's a possible source of some of the confusion.
Most monads for which you can implement a monad transformer are implemented by wrapping the monad transformer around Identity, but that's simply because you would otherwise be implementing basically the same thing twice. My impression has been that they were originally implemented directly and only transformer-ified later. Certainly, there is no reason that couldn't have been done. Identity doesn't involve any particular voodoo:
    newtype Identity a = Identity { runIdentity :: a }

    instance Monad Identity where
        return a = Identity a
        m >>= k  = k (runIdentity m)
Haskell originally didn't include monadic IO - it was introduced in version 1.3, IIRC. IO looked like lazy streams of events, but it was pretty messy and pretty possible to deadlock, from what I hear. Writing modern Haskell, doing IO without any use of the IO type isn't possible, but you can - in principle - avoid ever using the fact that it's a monad by simply calling unsafePerformIO immediately on any IO action. Good luck actually ordering your IO operations, though.
Excepting that your explanation is not at all true.
Even if you have mutable state you might want to use the State monad to implement it. Even if you have continuations you might want to use the Cont monad to implement it. No language has Parser built into the language itself—it's really just an invention available when you start to reify contexts as datatypes.
Monad is about abstracting over a common pattern of sequencing side effects. These can be available in the language or not. Once you reflect that sequencing into a monad then it often becomes reified in such a way that makes side effects easier to track and understand.
> Even if you have mutable state you might want to use the State monad to implement it.
Why? If you can just write whatever you need purely in the language itself, why would you want to do that? Please be detailed and try to keep your dictionary simple. Using words like "reified" doesn't really help anyone understand what you're trying to say, when it's known and accepted that the topic is very complicated.
Because passing around tuples and pattern matching against them becomes tiring. So you wrap that code repetition pattern around in a State. The State monad is implemented in terms of the language itself, it's not adding anything to the language in terms of computability or expressiveness. It's simply making you type less, for something you could already write in the language itself.
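A sketch of the repetition in question, with invented names (fresh hands out consecutive labels):

    import Control.Monad.State

    -- By hand: the counter is threaded through tuples at every step.
    labelTwoBare :: Int -> ((Int, Int), Int)
    labelTwoBare n0 =
        let (a, n1) = (n0, n0 + 1)
            (b, n2) = (n1, n1 + 1)
        in ((a, b), n2)

    -- With State: (>>=) does the threading, the code just says what happens.
    fresh :: State Int Int
    fresh = do { n <- get; put (n + 1); return n }

    labelTwo :: State Int (Int, Int)
    labelTwo = do { a <- fresh; b <- fresh; return (a, b) }
    -- runState labelTwo 0 == ((0,1),2), the same as labelTwoBare 0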
All of these things - Functors, Applicatives, Monads, Lenses - are design patterns in functional programming. In the same way you can use an Observer pattern in OOP, you can use a Monad pattern in functional programming. None of them adds computational or expressive power to the language, but they do avoid code repetition.
(1) Backtracking is easy with the state monad since you can snapshot your entire "world" without trouble and replay it as needed.
(2) Monads cause you to tag your types with effects that they are using. This means that your code is self-documenting about the side-effects it performs. In non-pure languages you still have to trust that the code isn't breaking convention, but it's a very powerful documentation methodology.
(3) Creation of arbitrarily complex effects via stacking. While your language might have state and even non-determinism, it probably doesn't have their combination built-in. Their combination forms a very nice backtracking, non-deterministic parser almost automatically (see a loonnngg comment of mine elsewhere in this thread).
(4) Optimization and compilation. While not directly possible with vanilla State, once you begin to use monads to represent effects it's easy to go back through and optimize your entire "language" by optimizing your monad implementations. You can even optimize them right into other targets like the Accelerate library which cross-compiles certain fragments of your Haskell—which look like mostly garden-variety normal Haskell—into GPU code.
I guess what you're saying is that
1. a core implementation is simply more powerful.
and
2. my question doesn't really have much to do with the nature of monads.
Can you give an example of a language rule which a Monad might implement? I sort of understand what you mean, but an example would help me (and maybe others?) form a more concrete understanding.
And if at all possible, don't use Haskell being unable to mutate state as an example. Maybe an example for Javascript or Python?
"The predefined unary and binary operators and any user-defined operators that exist for value types may also be used by nullable types. These operators produce a null value if the operands are null; otherwise, the operator uses the contained value to calculate the result."
This can be modeled as a Monad with a notion of "failure". In Haskell that would be the Maybe monad. You would have to "lift" the arithmetic functions into it and work with "Maybe Int" values instead of "Int" values.
And if we swapped the Maybe monad for the List (nondeterminism) monad, we could work with lists of values and obtain a list of results for all possible combinations (instead of just one result or failure).
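A sketch of both, lifting (+) with liftM2:

    import Control.Monad (liftM2)

    -- Maybe: a Nothing operand makes the whole result Nothing,
    -- just like C#'s lifted operators on nullable types.
    nullableSum :: Maybe Int
    nullableSum = liftM2 (+) (Just 1) Nothing    -- Nothing

    -- Same lifted (+), List monad: every combination of operands.
    allSums :: [Int]
    allSums = liftM2 (+) [1, 2] [10, 20]         -- [11,21,12,22]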
The point where a JS developer is most likely to run into monads is with asynchronous code. We call it "callback hell," and the reason it exists is because async callbacks, like monads, can't be easily "escaped" - everything done with a result must be used in the same monadic (asynchronous) context. Once you enter a callback-based function, you usually can't simply return from it and go back to standard sequentially-executed code.
It takes a bit more work to formally show that continuations follow the monad laws, but that's as much as it took for me to begin to get the concept.
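For the curious, a minimal sketch of continuations as a monad; (>>=) is literally "run this, then hand the result to the next callback":

    newtype Cont r a = Cont { runCont :: (a -> r) -> r }

    instance Functor (Cont r) where
        fmap f (Cont c) = Cont (\k -> c (k . f))

    instance Applicative (Cont r) where
        pure a = Cont (\k -> k a)
        Cont cf <*> Cont ca = Cont (\k -> cf (\f -> ca (k . f)))

    instance Monad (Cont r) where
        Cont c >>= f = Cont (\k -> c (\a -> runCont (f a) k))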
1. Functional purity must be maintained for all code. This means for example that no side effects (IO) may happen. Thus some monads exist for the simple purpose of doing IO.
2. As you mentioned, state may not be mutated. To circumvent that there's a monad that simply marks a code block as "variables of this type may be changed in here, as long as they don't get out of this scope".
3. I only know about this vaguely, but as I understand it Haskell is strictly typed. This makes some things inconvenient, so Maybe allows some loose typing.
I can't give examples in JS, but I actually have one for Python: decorators. In Python you cannot implement a function with this specification:
It takes exactly these inputs in this order:
1) a function object,
2) a function name,
3) a multiline anonymous function.
It calls the function passed as the first parameter,
with the third parameter as its parameter.
It takes the result, which is expected to be a function,
and installs it in its calling scope as a new function,
with the name passed in the second parameter.
Since this is not possible to implement in pure Python, decorators were implemented in the core. I'm not 100% sure, but I think they actually do qualify as monads.
1. IO doesn't have to be a monad, it simply was designed that way because the monad interface is so nice. In older (very functional) versions of Haskell this wasn't the case.
2. That's honestly not a bad description of the State monad spoken in procedural vocabulary, but it is very far away from an accurate representation. At the very least it's worth noting that this isn't a syntactic thing—all of the elements are first-class.
3. Maybe is still strictly typed. It causes the possibility of null to be represented at the type level so that nobody ever accidentally forgets to check it. I never get null errors in Haskell.
Finally, decorators are Functors perhaps, if you strain to look at them that way, but not at all monads.
> Finally, decorators are Functors perhaps, if you strain to look at them that way, but not at all monads.
You can't say such a thing and then run away without saying why not. How is anyone supposed to learn about this if nobody explains the why? :)
Edit:
> 3. Maybe is still strictly typed. It causes the possibility of null to be represented at the type level so that nobody ever accidentally forgets to check it. I never get null errors in Haskell.
Let me reword point 3. Haskell is strict about types in that it asks you to do a lot of checking. Using Maybe you can make Haskell behave like a loose language that doesn't require any checking from you.
If my syntax makes sufficient sense (g := a translates as "assign a to g"), that means that decoration is nothing much more than function composition, which forms a very particular functor. That same Functor also provides a monad, but continuing to work this example until that's clear is so far from obvious that I think it'd only serve to mislead.
If it isn't a monad, but provides one, then this seems to be the perfect example to explain on, since decorators are understood, and monads are something smaller than decorators, and as such should be explainable by reducing the explanation of decorators.
Also do keep in mind that I didn't say decorators were a monad in any language, just in Python, because they cause a curious side effect that is otherwise only attainable with much self-repetition and hard-to-read syntax.
Yes. Decorators do not provide a monad at all, not even in Python. The very dim relation between the two is academic at best and completely irrelevant at worst.
I'm not trolling, we're just talking around one another. There are a significant number of concepts that aren't easily explained in plain english required to get at the heart of "what is a monad?". Even that question itself is already making an interesting choice of definition and set of questions that can be confusing unless analyzed carefully (in particular "Monad" as a noun can mean something very different from monad as an adjective and both meanings are frequently used in descriptions of "what monads are").
I'm attempting to be strongly consistent in everything I'm saying, but it's not necessarily possible to both do that and keep speaking simply.
The why of monads is that they're a very nice API or Interface that allows you to write easily analyzed and easily refactored programs. Most of the other descriptions are the why of a particular type which happens to instantiate the API.
'There are a significant number of concepts that aren't easily explained in plain english required to get at the heart of "what is a monad?".'
Indeed, it's arguably not even the right question. "When is something monadic?" is a better one; "monad" is an adjective, not a noun. In Java terms, "Monad" is an interface and something is "monadic" if it implements the interface. (Except Java's type system cannot express "Monad", so that doesn't work as a concrete example, but the idea is there.)
I'm sure you've seen me claim this on /r/haskell, where I think people operate in a context where they find this so obvious that the very idea that there could be a problem is confusing... this is the sort of context that leads me to that claim, though. Regardless of whether it's a noun in math, in programming it's an adjective.
Okay, I'll explain it. Apologies for the apparent contradiction—I'm not trying to move the goal posts, just trying to avoid going into the weeds.
As I said, Python decorators are essentially just a syntax for function composition.
    @deco
    def g(args):
        body

translates to

    g := deco(g(args): body)
where (g := a) means a is assigned to the name g. As it turns out, functions form a type which is a functor over the result type. The fmap (arrow map) is just function composition. In Haskell you would write this as
    instance Functor ((->) r) where
        fmap g f = g . f
where the (.) syntax is also function composition, very similar to the @deco syntax in Python. In this sense, I stated that @deco is a Functor.
Now, the function data type also admits a Monad instance (which some people abuse terminology to say "functions are monads" but I highly dislike that terminology when speaking carefully).
    instance Monad ((->) r) where
        return a = \_ -> a
        (f >>= g) x = g (f x) x
So in this way, the function type instantiates both Functor and Monad, but composition (which is (.) in Haskell and @deco is Python) is not directly related to the Monad.
Instead, a function's Monad instance allows us to "defer application" and translate something like
    \x -> (1 + x, 2 + x)
into
    do a <- (1+)
       b <- (2+)
       return (a, b)
In this format it's pretty stupid looking, but we also tend to call this monad instance the "Reader Monad" which indicates computations which occur under a fixed set of configuration information—some static global variables.
For instance, if I had a Key type which represented some encryption key I could change my encrypt and decrypt functions from
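    -- (stand-in type names, just for the sake of the sketch)
    encrypt :: Key -> Plaintext -> Ciphertext
    decrypt :: Key -> Ciphertext -> Plaintext

to

    encrypt :: Plaintext -> Key -> Ciphertext
    decrypt :: Ciphertext -> Key -> Plaintext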
which allows us to defer that extra `Key` argument until later.
So, that's what I meant by saying that @deco isn't the Monad that its Functor represents—it only has the composition aspect, not the deferred application aspect.
For the record, Data.Dynamic has a type called Dynamic which can be used to weaken typing. It's not a monad, though - it only implements Show and Typeable (and recently Exception, apparently).
Here's an interesting one: let's implement backtracking non-deterministic parsing as a Monad. If the code below seems complex, remember that it really is a near-complete implementation of a backtracking non-deterministic parser from scratch, using no libraries.
We'll do it first without any mention of the M-word. A parser is the kind of thing that consumes a string and returns a parsed value along with some "leftover" string data. For instance, a digit parser would look like (in Haskell types)
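    digitParse :: String -> (Int, String)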
We can begin here and incrementally add more features. For instance, what happens when digitParse is applied to "foo"? It ought to fail! To solve this, we'll introduce the Maybe (or Option) type
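    digitParse :: String -> Maybe (Int, String)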
Finally, if we have a parser which is sensitive to both apples and apricots then we might want to augment our return type with a notion of multiplicity of successes. We'll use a list for this and I want to note specially that we can now get rid of the notion of failure as represented by Maybe because we can use the empty list ([]) to mean "parse failure".
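    digitParse :: String -> [(Int, String)]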
Altogether, we can call this type a Parser. A Parser which attempts to parse a polymorphic type can be considered as well, so we'll leave that as a type variable:
    newtype Parser a = Parser (String -> [(a, String)])

    runParser :: Parser a -> String -> [(a, String)]
    runParser (Parser f) = f
Now we can update the previous type `digitParse`
    digitParse :: Parser Int
---
All by itself, Parsers aren't useful. We need to be able to combine them to make sense of a larger string. For instance, let's use the original notion of digitParser to build twoDigitParser
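    -- (a sketch, written against the bare String -> (a, String) form)
    twoDigitParser :: String -> ((Int, Int), String)
    twoDigitParser input =
        let (d1, leftover)  = digitParser input
            (d2, leftover') = digitParser leftover
        in ((d1, d2), leftover')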
Not too bad. We could even use recursion to parse a whole list of digits.
    manyDigitParser :: String -> ([Int], String)
    manyDigitParser input =
        let (i, leftover)   = digitParser input
            (is, leftover') = manyDigitParser leftover
        in ( i : is, leftover' )
To make this truly work we'd want to layer in the notions of failure and non-determinism afforded by the Maybe type and the list type, but this simple example should suggest that writing all of this stuff out would be a LOT of boilerplate.
We'd like to build up to using monads to reduce all of that boilerplate. Ultimately we'll be able to define the fully non-deterministic `manyDigitParser` as
    manyDigitParser :: Parser [Int]
    manyDigitParser = do
        i  <- digitParser
        is <- manyDigitParser
        return (i : is)
Monads depend on Functors, so we'll begin there. Functor just means that if we have a Parser ty1 and a plain function ty1 -> ty2, we can transform it into a Parser ty2. For instance, let's say I have a function that converts a list of Ints to a single Int by gluing digits together.
    glueDigits :: [Int] -> Int
A Functor over Parser would let me turn manyDigitParser into integerParser
    integerParser :: Parser Int
    integerParser = fmap glueDigits manyDigitParser
To write it we need to define fmap polymorphically to just reach in and modify the return type of our Parser. We'll destruct and reconstruct a Parser type to do this.
    instance Functor Parser where
        fmap modify (Parser parse) =
            Parser (\input ->
                let results = parse input
                in map (\(result, leftover) -> (modify result, leftover)) results
                )
Now `integerParser` is perfectly valid code and Parser is an instance of Functor.
Now we get to the meat of things. We'll define a notion of Applicative, another superclass of Monad. Applicative just says two things--
1. We ought to be able to make a constant parser--one which always succeeds, always returns some constant value, and doesn't consume any input
    constantParser :: a -> Parser a
    constantParser a = Parser (\string -> [(a, string)])
2. We ought to be able to take a parser which returns a function and apply it to a parser which returns an input to that function as if they were regular values. We'll do so by running the function parser first and then the value parser and combining their results. In types that looks like
    apParser :: Parser (a -> b) -> Parser a -> Parser b
    apParser (Parser fparse) (Parser aparse) =
        Parser (\input ->
            [ (fun aval, aleftovers) | (fun, fleftovers)  <- fparse input
                                     , (aval, aleftovers) <- aparse fleftovers
            ]
            )
The above syntax is just a list comprehension exactly like Python's. We use these two functions to form another typeclass instance for Applicative and reveal the generic names of constantParser and apParser
    instance Applicative Parser where
        pure  = constantParser
        (<*>) = apParser
---
Finally, we can move to the brunt of the problem: the Monad instance. Again, the entire goal here is just to abstract out the methods of sequencing parsers that we did manually at the very beginning. We'll do it first for the basic parser type.
    newtype BasicParser a = BasicParser { runBasicParser :: String -> (a, String) }
Monadic sequencing is a little strange to grasp at first. Like Applicative we need a way to inject constants into BasicParser.
    instance Monad BasicParser where
        return a = BasicParser (\input -> (a, input))
        ...
But we also need to somehow consolidate those lets from above. To do this we'll write a function called bind (but written (>>=)) that has the type (BasicParser a -> (a -> BasicParser b) -> BasicParser b).
    BasicParser parser >>= continuation =
        BasicParser (\input ->
            let (a, leftovers1) = parser input
                (b, leftovers2) = runBasicParser (continuation a) leftovers1
            in (b, leftovers2)
            )
Notice how we had to use that exact same let construct to define the parser? That's why it gets abstracted away.
We can do the same for the more general non-deterministic, failing parser
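    -- (a sketch: the same bind, with the list comprehension now carrying
    -- failure and multiplicity, reusing runParser from above)
    instance Monad Parser where
        return = constantParser
        Parser parser >>= continuation =
            Parser (\input ->
                [ (b, leftovers2) | (a, leftovers1) <- parser input
                                  , (b, leftovers2) <- runParser (continuation a) leftovers1
                ]
                )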
So while that's a fairly large amount of abstract code, the end result is that we've embedded a fairly complex effect into our language—one that no reasonable language would have built in for you: non-deterministic backtracking parsing. We can also centralize our optimizations of that parser into the monadic code and reap the benefits across the entire codebase.
Furthermore, most of what I wrote above, while abstract, requires almost no creativity as it just comes from recognizing that our Parser monad is a certain, standard combination of the State and List monads.
Wow. I have to say, at least on a cursory reading, this might be one of the best descriptions of what a Monad is. I feel like I understand them a little bit more just from going over what you wrote quickly. I'm going to go over this with a fine tooth comb when I have a little more time to do so. (was eating some lunch at work, and I don't know Haskell syntax very well, so while I picked up some gists of what you meant, I need to actually take the time to parse what the syntax means)
Thank you so much for taking the time to write this all out. This kind of answer is one of the reasons I still read HN comments :)
I'm glad! I was a bit worried it was a lot of code all at once, but Monads are best grasped by lots of examples that flex the API instead of talking about them in the abstract. Feel free to email me if you have further questions.
(And I'm halfway tempted to write my own "Oi, fine, here's what a monad is" post soon enough. They're really not that strange, but it's easy to get things confused.)
Alright, so it really is just a Scala class with some handy shortcuts. Please do note though that the docs you linked do not think it is a monad, just that it's easy to think of it as one.
Edit: In fact, it kinda smells like how Perl works when you do this:
I used Future as well as maybe - http://m50d.github.io/2013/01/16/generic-contexts.html . I think it's important to give at least two or three examples, because the whole point of the idea of a monad is that it's an abstraction that several different datatypes conform to.
List is probably the minimal intro you can do without trapping yourself. It's too easy to implement something that can do the 1-or-0 elements in Maybe, but still isn't a Monad. You should also do a conditional, which I believe this code fails; that is, a monadic computation can examine the intermediate results and make flow-control decisions.
The minimal example is probably something like:
    minimal :: Int -> [(Int, Int)]
    minimal i = do
        x <- [i, i + 1]
        y <- [i, 10 * i]
        if i `mod` 2 == 0
            then return (x, y)
            else return (y, x)
The evenness or oddness of the parameter controls which side of the tuple gets which element.
It may still be possible to implement this without implementing a full monadic computation, but if you can pass this test using something like bind or join, you're probably at least on the right track.