## A mature look into Monads

In previous posts concerning Haskell, I have talked a lot about Monads. The monadic concept is one of the most important topics in Haskell. It is one of its cornerstones. It is what makes Haskell excel. It is what makes Haskell such a successful and versatile purely functional language.

I have tried to explain the monadic concept and will continue to do so. So far, I have made considerable progress in this matter, but I want to do more. Previous posts, this one, and some future posts will shed more light on Monads.

I am a member of Quora, a site where people post questions and other people answer them. (On the “About me” page of this blog, you can find the link to my answers on Quora.) On Quora, someone (I don’t know who) asked:

What is the importance of category theory in computer science?

This is a great question. My answer is on Quora, but I will repeat it here for posterity:

Computer Science gets concepts from Category Theory in order to enable developers to create high quality software. One such concept is Monads.

In functional programming, we can mathematically combine smaller programs to make complex ones and we can mathematically reason about the soundness and correctness of a program. But this is only if the program is purely functional.

A program is purely functional if it does not contain side effects. IO, state, and anything that deviates from functions that only take arguments as input and return a result, are all examples of side effects, i.e. computations that are not purely functional.

Monads help encapsulate and control side effects, in a manner that the programs do not lose their pure functional benefits (we can still mathematically combine smaller programs and we can still mathematically reason about the programs).

But encapsulation and control of side effects is not the only reason Monads are useful. Let me explain.

Whenever the developer creates a new type, she has to “explain” to the language how the new type will behave when it is involved in function composition. A new type armed with that information (that the developer provides) is a Monad.

Thus, roughly, a Monad is a new type that the language knows how to treat when composing functions.

This new type can encapsulate side effects or just provide computations. In all cases, the soundness of the program is preserved and it is also easier to combine smaller programs in order to make larger ones. This is because the new type / monad is able to pack a lot of power without compromising the mathematical correctness of the program.

For example, one can design a monad that is a parser. OK, a parser can be created in any language. But a monadic parser is able to hold not only the parsing outcome, but also any parsing errors, line numbers, and lots of other information. All this useful and extra information can be thought of as side effects, or side computations, but in reality it is all purely functional behavior.

So, Monads pack a lot of power. They can encapsulate side effects or just purely functional computations and at the same time carry all this around making function composition possible, mathematically accurate, and easy.

A program written with Monads may be way smaller and less messy than a program written without them. You have to see it to believe it.

And there are more concepts from Category Theory that define computations and have helped Computer Science. In addition to Monads, we have Functors and Arrows. Actually, these three are all related. Roughly ordered from the most basic to the most capable, these concepts go as follows:

Types -> New (Elevated) Types -> Functors -> Applicative Functors -> Monads -> Arrows.

Generally, Category Theory helps Computer Science by discovering “computational patterns”. Category Theory discovers them and studies them in order to find their mathematical properties. Then, Computer Science can make use of this knowledge in order to empower developers to create software in a more concise and more correct way.

You might ask why the paradigm I described here is better than the Object-Oriented Programming paradigm. OOP features encapsulation (in a completely different context, though) and it is very intuitive. The problem with OOP is that objects have state, which makes the OOP paradigm very different from the purely functional paradigm. Thus, we cannot *mathematically* reason about the correctness or scalability of an OOP program. And this is very unfortunate.

You might also ask if there is a “catch” with the paradigm I described here: computational patterns that Category Theory discovers, mathematically describes, and studies, and then passes on to Computer Science, where they are recognized and used to make programs more concise and correct.

Well, in my opinion, there is a “catch” with this approach, a significant one. You have to be very knowledgeable to be able to program using this paradigm. Whereas OOP is intuitive, the concepts that stem from Category Theory are difficult to understand. You have to think in terms of abstractions in order to grasp the Category Theory concepts that are used in Computer Science, and you have to study a lot in order to be able to program using this paradigm.

You need to invest a lot of time, thought, and energy to be able to use the concepts of Category Theory in Computer Science. I believe that it is time, thought, and energy well spent. But you should decide for yourself.

So, this was my answer to the question posed on Quora. I tried to make my answer as all-encompassing as possible. Actually, I did not write the answer in its entirety at once. Instead, I wrote the first few paragraphs and posted them. Then, I realized I wanted to say more. So I added to my answer. And so on. All in all, I worked quite a bit on this answer. I think we should pay particular attention to what I wrote on Quora, because we may uncover some interesting facts.

If a newcomer to Haskell asks us what Monads are, I find that a great response is that they are the means by which we can introduce side effects to purely functional computations, while allowing the computations to keep their purely functional character. This answer is correct and it is why Monads were introduced into Haskell. But as the newcomer progresses in Haskell and her knowledge begins to accumulate, this answer, although enabling at first, starts to become a hindrance.

You see, the newcomer will start to look for side effects whenever and wherever she encounters Monads. And the truth of the matter is that we do not need to have side effects in order to have Monads. We might have purely functional computations that are merely convenient to view as side effects or side computations, or we may have no side computations at all.

For the following, consider a and b to be built-in Haskell types, like for example, Int and String.

A Monad is a type constructor armed with a return and a bind function. A type constructor (let’s call it M) creates a new type. The return function’s signature is a -> M a. Thus, for a type a, it creates the type M a. The bind function’s signature is M a -> (a -> M b) -> M b. Thus, it allows us to compose two functions of the form x -> M a and a -> M b. This is great, since the output M a of the first function is not of the same type as the input a of the second function.

Thus, a Monad “lifts” types to the M “realm”, and provides us with a way to compose two functions a -> M b and b -> M c. Of course, we will write the code for the return and bind functions. But when we finish writing this code, we have the functionality of the Monad.

So, this is what a Monad is: a type constructor M that we created (so that it can “lift” a type a to a type M a) and that we also made capable of composing functions of the form a -> M b.

Thus, a Monad M is a type constructor M that knows how to compose functions of type a -> M b. So, whenever we create a type constructor, we may as well “upgrade” it to a Monad, if we want the composing functionality.
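To make the return/bind machinery concrete, here is a minimal sketch. The one-value wrapper Box and the names returnBox, bindBox, double, and describe are my own hypothetical illustrations, not standard Haskell:

```
-- A hypothetical type constructor "Box", upgraded by hand with
-- return and bind, to show the two signatures discussed above.
newtype Box a = Box a deriving Show

-- return :: a -> M a
returnBox :: a -> Box a
returnBox = Box

-- bind :: M a -> (a -> M b) -> M b
bindBox :: Box a -> (a -> Box b) -> Box b
bindBox (Box x) f = f x

-- Two functions of the form a -> Box b that we want to compose:
double :: Int -> Box Int
double x = Box (x * 2)

describe :: Int -> Box String
describe x = Box ("value: " ++ show x)

main :: IO ()
main = case returnBox 21 `bindBox` double `bindBox` describe of
  Box s -> putStrLn s   -- prints "value: 42"
```

Note that bindBox is exactly the glue that lets double (which returns Box Int, not Int) feed into describe (which wants a plain Int).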

When we create a type constructor M, we can “upgrade” it to a Functor by making it capable of “lifting” not only types, but also functions. And we accomplish this by providing the definition of (a -> b) -> M a -> M b. Thus, we instruct M on how to go from a function a -> b to a function M a -> M b.
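As a sketch of this “upgrade”, here is a hypothetical container Pair (a name I made up; it holds two values of the same type) turned into a Functor by defining how it lifts a function a -> b:

```
-- A hypothetical two-slot container, made a Functor by hand.
data Pair a = Pair a a deriving (Show, Eq)

instance Functor Pair where
  -- fmap :: (a -> b) -> Pair a -> Pair b
  fmap f (Pair x y) = Pair (f x) (f y)

main :: IO ()
main = print (fmap (+ 1) (Pair 1 2))   -- Pair 2 3
```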

We can also “upgrade” our Functor to an Applicative, by providing the definition of M (a -> b) -> M a -> M b. Thus, we instruct M on how to go from a function M (a -> b) to a function M a -> M b.
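Continuing the same hypothetical Pair container, the Applicative upgrade supplies the definition that applies functions already inside the container (the slot-by-slot choice below is one reasonable design, similar to a “zip”-style Applicative):

```
data Pair a = Pair a a deriving (Show, Eq)

instance Functor Pair where
  fmap f (Pair x y) = Pair (f x) (f y)

instance Applicative Pair where
  -- pure :: a -> Pair a
  pure x = Pair x x
  -- (<*>) :: Pair (a -> b) -> Pair a -> Pair b
  Pair f g <*> Pair x y = Pair (f x) (g y)

main :: IO ()
main = print (Pair (+ 1) (* 10) <*> Pair 5 5)   -- Pair 6 50
```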

Finally, we can upgrade our Applicative to a Monad, by providing the definition for M a -> (a -> M b) -> M b, which tells M how to compose two functions of types x -> M a and a -> M b.

The way we define and implement our type constructor will define its functionality and whether it produces side effects or not. We can create a very useful type constructor, like the list type constructor, without ever having to involve side computations or side effects. We can also create a very useful type constructor, like Maybe, that bestows no side effects on pure functionality, but that we can imagine in our mind as carrying side computations or “side results”. Indeed, Maybe can be thought of as holding either a result or the lack of one. And in our mind, the lack of a result can be thought of as a “side result”. But this is nothing that upsets the pure functionality of the computations. It is just a clever and convenient way to pack more into one thing and reduce the amount of code we have to write. For example, with Maybe we can avoid writing lots of cascading ifs.
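Here is a small sketch of how Maybe spares us the cascading case analysis. The helper safeDiv and the chain functions are hypothetical names of my own:

```
-- A hypothetical helper: division that signals failure with Nothing.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- Without the Monad: every step must inspect the previous result.
chainCase :: Int -> Int -> Maybe Int
chainCase x y = case safeDiv 100 x of
  Nothing -> Nothing
  Just a  -> safeDiv a y

-- With the Maybe Monad: (>>=) propagates Nothing automatically.
chainBind :: Int -> Int -> Maybe Int
chainBind x y = safeDiv 100 x >>= \a -> safeDiv a y

main :: IO ()
main = do
  print (chainBind 5 2)   -- Just 10
  print (chainBind 0 2)   -- Nothing: the failure short-circuits
```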

On the other hand, there are Monads that do perform side effects, like the IO Monad. These are all the more useful, since they introduce side effects in such a way as to not upset the pure functionality of the application. And they achieve that while allowing us to compose functions of the form a -> M b.

So, yes, Monads allow us to introduce controllable side effects to pure functionality, but they do not have to. Monads are just type constructors that know how to perform function composition. Whenever one wants to create a type constructor, she might “upgrade” it to a Monad as well. It is up to the creator of the type constructor to define its purpose and its functionality.

One of the type constructors that have been turned into a Monad is the well-studied Maybe. As a type constructor, Maybe’s definition is

```
data Maybe a = Nothing | Just a
```

There are no side effects here. Maybe encapsulates the result or the lack of it. This is very convenient. As a type constructor, Maybe provides the functionality it was designed for. When Maybe is turned into a Monad (by providing a return function and a bind function), it knows how to compose functions of type a -> Maybe b and b -> Maybe c. So, as a Monad, Maybe is a type constructor that knows how to compose two such functions and thus provides this functionality of composition.
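Written out by hand, Maybe’s return and bind look like the sketch below. The standard library already provides this instance; I use the made-up names returnMaybe and bindMaybe only to avoid clashing with the Prelude:

```
-- Maybe's return: wrap a plain value.
returnMaybe :: a -> Maybe a
returnMaybe = Just

-- Maybe's bind: a missing result stops the composition.
bindMaybe :: Maybe a -> (a -> Maybe b) -> Maybe b
bindMaybe Nothing  _ = Nothing
bindMaybe (Just x) f = f x

main :: IO ()
main = do
  print (Just 4  `bindMaybe` (\x -> Just (x + 1)))  -- Just 5
  print (Nothing `bindMaybe` (\x -> Just (x + 1)))  -- Nothing
```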

The real power of a Monad comes from the definition of the underlying type constructor. When the type constructor is “upgraded” to a Monad, it gains the functionality of function composition. That is just all there is to it.

In my blog post titled “The disappointing State Monad”, I studied the State Monad. Its definition is:

```
newtype State s a = State { runState :: s -> (a, s) }
```

(The “newtype” keyword does what “data” does: it introduces a new type. There is a subtle difference between “newtype” and “data”, but it is beyond the scope of this blog post.)

I have characterized the State Monad as disappointing, because its definition carries no side effects. The state s is just a value carried and passed around, but the program never loses even the tiniest bit of pure functionality. We can think of the state s as “state”, but in reality, it is just a value.
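A hand-rolled sketch makes this visible: the “state” is nothing but a value threaded through the computation. The names returnState, bindState, and tick are my own illustrations:

```
-- State, as defined above: a function that threads a value s along.
newtype State s a = State { runState :: s -> (a, s) }

returnState :: a -> State s a
returnState a = State (\s -> (a, s))

bindState :: State s a -> (a -> State s b) -> State s b
bindState (State m) f = State $ \s ->
  let (a, s') = m s
  in runState (f a) s'

-- A hypothetical counter: return the current value, "state" + 1 next.
tick :: State Int Int
tick = State (\n -> (n, n + 1))

main :: IO ()
main = print (runState program 0)   -- (1, 2)
  where
    -- Two ticks composed with bindState; no side effect anywhere,
    -- just an Int passed from step to step.
    program = tick `bindState` \a ->
              tick `bindState` \b ->
              returnState (a + b)
```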

On the other hand, the IO Monad really provides a Monad that encompasses side effects. The IO Monad really does IO, which is a side effect to pure functionality.

I would like to give one last example of a type constructor that has been “upgraded” to a Monad. It is the Parser type constructor:

```
newtype Parser a = Parser (String -> [(a, String)])
```

The purpose of the Parser type constructor is to provide a framework with which to parse input, like text strings.

If you want to find out more about Parser, I recommend the paper titled “Monadic Parsing in Haskell” by Graham Hutton and Erik Meijer. Search the Internet for it. It is easy to find.

In the paper I just mentioned, it is described how Parser can be “upgraded” to a Monad. Again, by upgrading Parser to a Monad, all we achieve is to instruct it on how to compose functions of types a -> Parser b and b -> Parser c. Parser’s real power comes from its definition. It takes a String and provides a list of tuples. Each tuple has an object of type a (usually a Tree) and a String. This allows us to parse a beginning portion of an input String and provide the parsing output as an object of type a (the tuple’s first element) and the rest of the String (the tuple’s second element).

We need a list of tuples, because we may need to provide more than one answer, as in the case of ambiguous grammars. And the empty list denotes failure. Thus, as a type constructor / Monad, Parser can be very valuable and very powerful. Still, there are no side effects whatsoever in its definition. Parser’s output is [(a, String)], which holds the result and the remaining String. We can decide to view whatever part of the output we want as a side computation or side effect, but we do not have to. In reality, the output is purely functional and we do not have to view it differently.
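Here is a small sketch in the style of the paper (the names parse, returnP, bindP, item, and twoChars are mine; the paper’s own combinators differ in detail):

```
-- Parser as defined above, with return and bind written by hand.
newtype Parser a = Parser (String -> [(a, String)])

parse :: Parser a -> String -> [(a, String)]
parse (Parser p) = p

returnP :: a -> Parser a
returnP a = Parser (\s -> [(a, s)])

-- bind: run the first parser, then run f's parser on each remainder.
bindP :: Parser a -> (a -> Parser b) -> Parser b
bindP p f = Parser $ \s ->
  concat [ parse (f a) rest | (a, rest) <- parse p s ]

-- A primitive parser: consume one character; [] (failure) on empty input.
item :: Parser Char
item = Parser $ \s -> case s of
  []       -> []
  (c : cs) -> [(c, cs)]

-- Composed with bindP: parse two characters and pair them up.
twoChars :: Parser (Char, Char)
twoChars = item `bindP` \a -> item `bindP` \b -> returnP (a, b)

main :: IO ()
main = do
  print (parse twoChars "abc")  -- [(('a','b'),"c")]
  print (parse twoChars "x")    -- []: not enough input, so failure
```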

Thus, a Monad is a type constructor that has been taught how to behave when functions involving it are composed. The type constructor may introduce side effects to pure functionality, but it certainly does not have to. And the Monadic concept only covers the function composition part. Do we need to “upgrade” all our type constructors to Monads? Well, we do if we are going to use function composition with them. Otherwise, we can leave them as they are.