Can you give an example of a language rule which a Monad might implement? I sort of understand what you mean, but an example would help me (and maybe others?) form a more concrete understanding.
And if at all possible, don't use Haskell being unable to mutate state as an example. Maybe an example for Javascript or Python?
"The predefined unary and binary operators and any user-defined operators that exist for value types may also be used by nullable types. These operators produce a null value if the operands are null; otherwise, the operator uses the contained value to calculate the result."
This can be modeled as a Monad with a notion of "failure". In Haskell that would be the Maybe monad. You would have to "lift" the arithmetic functions into it and work with "Maybe Int" values instead of "Int" values.
And if we swapped the Maybe monad for the List (nondeterminism) monad, we could work with lists of values and obtain a list of results for all possible combinations (instead of just one result or failure).
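A minimal sketch of both in Haskell (liftM2 is the standard lifting helper from Control.Monad; the names maybePlus and listPlus are just for illustration):

import Control.Monad (liftM2)

-- Lifting (+) into Maybe: a Nothing operand makes the whole result
-- Nothing, mirroring the nullable-arithmetic rule quoted above.
maybePlus :: Maybe Int -> Maybe Int -> Maybe Int
maybePlus = liftM2 (+)
-- maybePlus (Just 1) (Just 2) == Just 3
-- maybePlus (Just 1) Nothing  == Nothing

-- The same lifting into the List monad yields every combination:
listPlus :: [Int] -> [Int] -> [Int]
listPlus = liftM2 (+)
-- listPlus [1,2] [10,20] == [11,21,12,22]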
The point where a JS developer is most likely to run into monads is with asynchronous code. We call it "callback hell," and the reason it exists is that async callbacks, like monads, can't be easily "escaped" - everything done with a result must be used in the same monadic (asynchronous) context. Once you enter a callback-based function, you usually can't simply return from it and go back to standard sequentially-executed code.
It takes a bit more work to formally show that continuations follow the monad laws, but that's as much as it took for me to begin to get the concept.
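To make the connection concrete, here is a minimal hand-written sketch of a continuation type in Haskell (these are the standard Cont definitions, spelled out rather than imported): a Cont value is a computation that takes a callback, and (>>=) is exactly "run this, then feed the result to the next callback".

-- A computation that, instead of returning an `a`, takes a
-- callback (a -> r) and eventually invokes it.
newtype Cont r a = Cont { runCont :: (a -> r) -> r }

instance Functor (Cont r) where
  fmap f (Cont c) = Cont (\k -> c (k . f))

instance Applicative (Cont r) where
  pure a = Cont (\k -> k a)
  Cont cf <*> Cont ca = Cont (\k -> cf (\f -> ca (k . f)))

instance Monad (Cont r) where
  -- sequencing nests callbacks, just like chained async calls
  Cont c >>= f = Cont (\k -> c (\a -> runCont (f a) k))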
1. Functional purity must be maintained for all code. This means, for example, that no side effects (IO) may happen. Thus some monads exist for the simple purpose of doing IO.
2. As you mentioned, state may not be mutated. To circumvent that, there's a monad that simply marks a code block as "variables of this type may be changed in here, as long as they don't get out of this scope" (see the ST sketch after this list).
3. I only know about this vaguely, but as I understand it Haskell is strictly typed. This makes some things inconvenient, so Maybe allows some loose typing.
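For point 2, a minimal sketch using the standard Control.Monad.ST and Data.STRef APIs: the mutation inside is real, but runST guarantees it cannot leak out, so the function as a whole stays pure.

import Control.Monad.ST (runST)
import Data.STRef (newSTRef, modifySTRef', readSTRef)

-- Internally mutates an accumulator, yet sumST is a pure function.
sumST :: [Int] -> Int
sumST xs = runST $ do
  ref <- newSTRef 0
  mapM_ (\x -> modifySTRef' ref (+ x)) xs
  readSTRef ref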
I can't give examples in JS, but I actually have one for Python: decorators. In Python you cannot implement a function with this specification:
It takes exactly these inputs in this order:
1) a function object,
2) a function name,
3) a multiline anonymous function.
It calls the function passed as the first parameter, with the third parameter as its argument. It takes the result, which is expected to be a function, and installs it in its calling scope as a new function, with the name passed in the second parameter.
Since this is not possible to implement in pure Python, decorators were implemented in the core. I'm not 100% sure, but I think they actually do qualify as monads.
1. IO doesn't have to be a monad; it was simply designed that way because the monad interface is so nice. In older (very functional) versions of Haskell this wasn't the case.
2. That's honestly not a bad description of the State monad spoken in procedural vocabulary, but it is very far from an accurate representation. At the very least it's worth noting that this isn't a syntactic thing—all of the elements are first-class.
3. Maybe is still strictly typed. It causes the possibility of null to be represented at the type level so that nobody ever accidentally forgets to check it. I never get null errors in Haskell.
Finally, decorators are Functors perhaps, if you strain to look at them that way, but not at all monads.
> Finally, decorators are Functors perhaps, if you strain to look at them that way, but not at all monads.
You can't say such a thing and then run away without saying why not. How is anyone supposed to learn about this if nobody explains the why? :)
Edit:
> 3. Maybe is still strictly typed. It causes the possibility of null to be represented at the type level so that nobody ever accidentally forgets to check it. I never get null errors in Haskell.
Let me reword point 3. Haskell is strict about types in that it asks you to do a lot of checking. Using Maybe you can make Haskell behave like a loose language that doesn't require any checking from you.
If my syntax makes sufficient sense (g := a translates as "assign a to g"), that means that decoration is nothing much more than function composition, which forms a very particular Functor. That same Functor also provides a monad, but continuing to work this example until that's clear is so far from obvious that I think it'd only serve to mislead.
If it isn't a monad but provides one, then this seems like the perfect example to explain with: decorators are already understood, and if monads are something smaller than decorators, they should be explainable by reducing the explanation of decorators.
Also do keep in mind that I didn't say decorators were a monad in any language, just in Python, because there they cause a curious side effect that is otherwise only attainable with much self-repetition and hard-to-read syntax.
Yes. Decorators do not provide a monad at all, not even in Python. The very dim relation between the two is academic at best and completely irrelevant at worst.
I'm not trolling; we're just talking around one another. There are a significant number of concepts that aren't easily explained in plain English required to get at the heart of "what is a monad?". Even that question itself is already making an interesting choice of definition and set of questions that can be confusing unless analyzed carefully (in particular, "Monad" as a noun can mean something very different from monad as an adjective, and both meanings are frequently used in descriptions of "what monads are").
I'm attempting to be strongly consistent in everything I'm saying, but it's not necessarily possible to both do that and keep speaking simply.
The why of monads is that they're a very nice API or Interface that allows you to write easily analyzed and easily refactored programs. Most of the other descriptions are the why of a particular type which happens to instantiate the API.
> There are a significant number of concepts that aren't easily explained in plain English required to get at the heart of "what is a monad?"
Indeed, it's arguably not even the right question. "When is something monadic?" is a better one; "monad" is an adjective, not a noun. In Java terms, "Monad" is an interface and something is "monadic" if it implements the interface. (Except Java's type system cannot express "Monad", so that doesn't work as a concrete example, but the idea is there.)
I'm sure you've seen me claim this on /r/haskell, where I think people operate in a context where they find this so obvious that the very idea that there could be a problem is confusing... this is the sort of context that leads me to that claim, though. Regardless of whether it's a noun in math, in programming it's an adjective.
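For reference, the interface itself is tiny in Haskell terms (simplified; the real class also has Applicative as a superclass):

class Monad m where
  return :: a -> m a
  (>>=)  :: m a -> (a -> m b) -> m b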
Okay, I'll explain it. Apologies for the apparent contradiction—I'm not trying to move the goal posts, just trying to avoid going into the weeds.
As I said, Python decorators are essentially just a syntax for function composition.
@deco
def g(args):
    body

translates to

g := deco(g(args): body)
where (g := a) means a is assigned to the name g. As it turns out, functions from a fixed argument type form a type which is a Functor over the result type. The fmap (arrow map) is just function composition. In Haskell you would write this as
instance Functor ((->) r) where
  fmap g f = g . f
where the (.) syntax is also function composition, very similar to the @deco syntax in Python. In this sense, I stated that @deco is a Functor.
Now, the function data type also admits a Monad instance (which some people abuse terminology to say "functions are monads" but I highly dislike that terminology when speaking carefully).
instance Monad ((->) r) where
  return a = \_ -> a
  (f >>= g) x = g (f x) x
So in this way, the function type instantiates both Functor and Monad, but composition (which is (.) in Haskell and @deco in Python) is not directly related to the Monad.
Instead, a function's Monad instance allows us to "defer application" and translate something like
\x -> (1 + x, 2 + x)
into
do a <- (1+)
   b <- (2+)
   return (a, b)
In this format it's pretty stupid looking, but we also tend to call this monad instance the "Reader monad", which indicates computations that occur under a fixed set of configuration information—some static global variables.
For instance, if I had a Key type which represented some encryption key I could change my encrypt and decrypt functions from
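(presumably something like the following; Key, PlainText, and CipherText are assumed placeholder types)

encrypt :: Key -> PlainText  -> CipherText
decrypt :: Key -> CipherText -> PlainText

to

encrypt :: PlainText  -> (Key -> CipherText)
decrypt :: CipherText -> (Key -> PlainText)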
which allows us to defer that extra `Key` argument until later.
So, that's what I meant by saying that @deco isn't the Monad that its Functor represents—it only has the composition aspect, not the deferred application aspect.
For the record, Data.Dynamic has a type called Dynamic which can be used to weaken typing. It's not a monad, though - it only implements Show and Typeable (and recently Exception, apparently).
Here's an interesting one: let's implement backtracking non-deterministic parsing as a Monad. If the code below seems complex, remember that it really is a near-complete implementation of a backtracking non-deterministic parser from scratch, using no libraries.
We'll do it first without any mention of the M-word. A parser is the kind of thing that consumes a string and returns a parsed value along with some "leftover" string data. For instance, a digit parser would look like (in Haskell types)
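presumably something like

digitParser :: String -> (Int, String)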
We can begin here and incrementally add more features. For instance, what happens when digitParser is applied to "foo"? It ought to fail! To solve this, we'll introduce the Maybe (or Option) type
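so the type presumably becomes

digitParser :: String -> Maybe (Int, String)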
Finally, if we have a parser which is sensitive to both apples and apricots, then we might want to augment our return type with a notion of multiplicity of successes. We'll use a list for this, and I want to note specially that we can now get rid of the notion of failure as represented by Maybe, because we can use the empty list ([]) to mean "parse failure".
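The type at this stage would read

digitParser :: String -> [(Int, String)]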
All together we can call this type a Parser. A Parser which attempts to parse a polymorphic type can be considered as well, so we'll leave that as a type variable
newtype Parser a = Parser (String -> [(a, String)])
runParser :: Parser a -> String -> [(a, String)]
runParser (Parser f) = f
Now we can update the previous type `digitParser`
digitParser :: Parser Int
---
All by themselves, Parsers aren't useful. We need to be able to combine them to make sense of a larger string. For instance, let's use the original notion of digitParser to build twoDigitParser.
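One way to write it, threading the leftover string by hand (a sketch using the basic pair-returning type from above):

twoDigitParser :: String -> ((Int, Int), String)
twoDigitParser input =
  let (d1, leftover)  = digitParser input
      (d2, leftover') = digitParser leftover
  in ((d1, d2), leftover')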
Not too bad. We could even use recursion to parse a whole list of digits.
manyDigitParser :: String -> ([Int], String)
manyDigitParser input =
  let (i, leftover)   = digitParser input
      (is, leftover') = manyDigitParser leftover
  in (i : is, leftover')
To make this truly work we'd want to layer in the notions of failure and non-determinism afforded by the Maybe type and the list type, but this simple example should suggest that writing all of this stuff out would be a LOT of boilerplate.
We'd like to build up to using monads to reduce all of that boilerplate. Ultimately we'll be able to define the fully non-deterministic `manyDigitParser` as
manyDigitParser :: Parser [Int]
manyDigitParser = do
  i  <- digitParser
  is <- manyDigitParser
  return (i : is)
Monads depend on Functors, so we'll begin there. Functor just means that if we have a Parser ty1 and a plain function ty1 -> ty2, we can transform the parser into a Parser ty2. For instance, let's say I have a function that converts lists of Int to a single Int by gluing digits together.
glueDigits :: [Int] -> Int
A Functor over Parser would let me turn manyDigitParser into integerParser
integerParser :: Parser Int
integerParser = fmap glueDigits manyDigitParser
To write it we need to define fmap polymorphically to just reach in and modify the return type of our Parser. We'll destructure and reconstruct a Parser value to do this.
instance Functor Parser where
  fmap modify (Parser parse) = Parser (\input ->
    let results = parse input
    in  map (\(result, leftover) -> (modify result, leftover)) results
  )
Now `integerParser` is perfectly valid code and Parser is an instance of Functor.
Now we get to the meat of things. We'll define a notion of Applicative, another superclass of Monad. Applicative just says two things--
1. We ought to be able to make a constant parser--one which always succeeds, always returns some constant value, and doesn't consume any input
constantParser :: a -> Parser a
constantParser a = Parser (\string -> [(a, string)])
2. We ought to be able to take a parser which returns a function and apply it to a parser which returns an input to that function as if they were regular values. We'll do so by running the function parser first and then the value parser and combining their results. In types that looks like
apParser :: Parser (a -> b) -> Parser a -> Parser b
apParser (Parser fparse) (Parser aparse) =
  Parser (\input ->
    [ (fun aval, aleftovers) | (fun, fleftovers)  <- fparse input
                             , (aval, aleftovers) <- aparse fleftovers
    ]
  )
The above syntax is just a list comprehension exactly like Python's. We use these two functions to form another typeclass instance for Applicative and reveal the generic names of constantParser and apParser
instance Applicative Parser where
  pure  = constantParser
  (<*>) = apParser
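As a quick check that this buys us something, a two-digit parser now needs no manual threading at all (pairDigits is just an illustrative name):

pairDigits :: Parser (Int, Int)
pairDigits = pure (,) <*> digitParser <*> digitParser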
---
Finally, we can move to the brunt of the problem: the Monad instance. Again, the entire goal here is just to abstract out the methods of sequencing parsers that we did manually at the very beginning. We'll do it first for the basic parser type.
newtype BasicParser a = BasicParser { runBasicParser :: String -> (a, String) }
Monadic sequencing is a little strange to grasp at first. Like Applicative we need a way to inject constants into BasicParser.
instance Monad BasicParser where
  return a = BasicParser (\input -> (a, input))
  ...
But we also need to somehow consolidate those lets from above. To do this we'll write a function called bind (but written (>>=)) that has the type (BasicParser a -> (a -> BasicParser b) -> BasicParser b).
BasicParser parser >>= continuation =
  BasicParser (\input -> let (a, leftovers1) = parser input
                             (b, leftovers2) = runBasicParser (continuation a) leftovers1
                         in (b, leftovers2)
  )
Notice how we had to use that exact same let construct to define the parser? That's why it gets abstracted away.
We can do the same for the more general non-deterministic, failing parser
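It comes out something like this (a sketch; the list comprehension feeds every successful parse into the continuation, so failure and non-determinism compose for free):

instance Monad Parser where
  return = constantParser
  Parser parser >>= continuation =
    Parser (\input ->
      [ (b, leftovers2) | (a, leftovers1) <- parser input
                        , (b, leftovers2) <- runParser (continuation a) leftovers1
      ]
    )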
So while that's a fairly large amount of abstract code, the end result is that we've embedded a fairly complex effect into our language—one that no reasonable language would have built in for you: non-deterministic backtracking parsing. We can also centralize our optimizations of that parser into the monadic code and reap the benefits across the entire codebase.
Furthermore, most of what I wrote above, while abstract, requires almost no creativity as it just comes from recognizing that our Parser monad is a certain, standard combination of the State and List monads.
Wow. I have to say, at least on a cursory reading, this might be one of the best descriptions of what a Monad is. I feel like I understand them a little bit more just from going over what you wrote quickly. I'm going to go over this with a fine-tooth comb when I have a little more time to do so. (I was eating some lunch at work, and I don't know Haskell syntax very well, so while I picked up the gist of what you meant, I need to actually take the time to parse what the syntax means.)
Thank you so much for taking the time to write this all out. This kind of answer is one of the reasons I still read HN comments :)
I'm glad! I was a bit worried it was a lot of code all at once, but Monads are best grasped by lots of examples that flex the API instead of talking about them in the abstract. Feel free to email me if you have further questions.
(And I'm halfway tempted to write my own "Oi, fine, here's what a monad is" post soon enough. They're really not that strange, but it's easy to get things confused.)