Haskell/Applicative functors

When covering the vital Functor and Monad type classes, we glossed over a third type class: Applicative, the class for applicative functors. Like monads, applicative functors are functors with extra laws and operations; in fact, Applicative is an intermediate class between Functor and Monad. Applicative is a widely used class with a wealth of applications. It enables the eponymous applicative style, a convenient way of structuring functorial computations, and also provides means to express a number of important patterns.

Functor recap

We will begin with a quick recap of the Functor class, which we covered in its own chapter. Functor is characterised by the fmap function:

class Functor f where
    fmap :: (a -> b) -> f a -> f b

If a type is an instance of Functor, you can use fmap to apply a function to values in it. Another way of describing fmap is saying that it promotes functions to act on functorial values. To ensure fmap works sanely, any instance of Functor must comply with the following two laws:

fmap id = id                   -- 1st functor law
fmap (g . f) = fmap g . fmap f -- 2nd functor law

Maybe, for example, has a Functor instance, and so we can easily modify the value inside it...

Prelude> fmap negate (Just 2)
Just (-2)

...as long as it exists, of course.

Prelude> fmap negate Nothing
Nothing
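
While we will not prove the functor laws here, the Maybe instance makes them easy to spot-check in GHCi. Here, for instance, is the second law at work on a sample value (a sanity check rather than a proof):

Prelude> fmap (negate . (+1)) (Just 2)
Just (-3)
Prelude> (fmap negate . fmap (+1)) (Just 2)
Just (-3)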

fmap has an infix synonym, (<$>). It often helps readability, and also suggests how fmap can be seen as a different kind of function application.

Prelude> negate <$> Just 2
Just (-2)

Exercises

Define instances of Functor for the following types:

  1. A rose tree, defined as: data Tree a = Node a [Tree a]
  2. Either e for a fixed e.
  3. The function type ((->) r). In this case, f a will be (r -> a)

Application in functors

As useful as it is, fmap isn't much help if we want to apply a function of two arguments to functorial values. For instance, how could we sum Just 2 and Just 3? The brute force approach would be extracting the values from the Maybe wrapper. That, however, would mean having to do tedious checks for Nothing. Even worse: in a different Functor extracting the value might not even be an option (just think about IO).

We could use fmap to partially apply (+) to the first argument:

Prelude> :t (+) <$> Just 2
(+) <$> Just 2 :: Num a => Maybe (a -> a)

But now we are stuck: we have a function and a value both wrapped in Maybe, and no way of applying one to the other. The closest we can get to the desired Just 5 is probably this:

Prelude> (<$> Just 3) <$> ((+) <$> Just 2)
Just (Just 5)

(note that (+) <$> Just 2 is Just (2+)).

What we would like to have is an operator with a type akin to f (a -> b) -> f a -> f b to apply functions in the context of a functor. If that operator were called (<*>), we would be able to write:

(+) <$> Just 2 <*> Just 3

Lo and behold - that works!

Prelude> (+) <$> Just 2 <*> Just 3
Just 5

The type of (<*>) is:

Prelude> :t (<*>)
(<*>) :: Applicative f => f (a -> b) -> f a -> f b

(<*>) is one of the methods of Applicative, the type class of applicative functors - functors that support function application within their contexts. Expressions such as (+) <$> Just 2 <*> Just 3 are said to be written in applicative style, which is as close as we can get to regular function application while working with a functor. If you pretend for a moment the (<$>), (<*>) and Just aren't there, our example looks just like (+) 2 3.
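
Applicative style is not limited to two arguments. As an illustrative sketch (the function and the values below are arbitrary), a three-argument function is applied in exactly the same way, and a single Nothing is enough to make the whole result Nothing:

Prelude> (\x y z -> x + y * z) <$> Just 2 <*> Just 3 <*> Just 4
Just 14
Prelude> (\x y z -> x + y * z) <$> Just 2 <*> Nothing <*> Just 4
Nothing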

The Applicative class

The definition of Applicative is:

class (Functor f) => Applicative f where
    pure  :: a -> f a
    (<*>) :: f (a -> b) -> f a -> f b

Beyond (<*>), the class has a second method, pure, which brings arbitrary values into the functor. As an example, let's have a look at the Maybe instance:

instance Applicative Maybe where
    pure                  = Just
    (Just f) <*> (Just x) = Just (f x)
    _        <*> _        = Nothing

It doesn't do anything surprising: pure wraps the value with Just; (<*>) applies the function to the value if both exist, and results in Nothing otherwise.
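
A few GHCi interactions illustrate the instance at work; nothing here goes beyond what the definition above says:

Prelude> pure 5 :: Maybe Int
Just 5
Prelude> Just (+3) <*> Just 4
Just 7
Prelude> Just (+3) <*> Nothing
Nothing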

Applicative functor laws

Note

For the lack of a better shorthand, in what follows we will use the jargony word morphism to refer to the values to the left of (<*>), which fit the type Applicative f => f (a -> b); that is, the function-like things inserted into an applicative functor. "Morphism" is a term which comes from category theory and which has a much wider meaning, but that needn't concern us now.


Just like Functor, Applicative has a set of laws which reasonable instances should follow. They are:

pure id <*> v = v                            -- Identity
pure f <*> pure x = pure (f x)               -- Homomorphism
u <*> pure y = pure ($ y) <*> u              -- Interchange
pure (.) <*> u <*> v <*> w = u <*> (v <*> w) -- Composition

Those laws are a bit of a mouthful. They become easier to understand if you think of pure as a way to inject values into the functor in a default, featureless way, so that the result is as close as possible to the plain value. Thus:

  • The identity law says that applying the pure id morphism does nothing, exactly like with the plain id function.
  • The homomorphism law says that applying a "pure" function to a "pure" value is the same as applying the function to the value in the normal way and then using pure on the result. In a sense, that means pure preserves function application.
  • The interchange law says that applying a morphism to a "pure" value pure y is the same as applying pure ($ y) to the morphism. No surprises there - as we have seen in the higher order functions chapter, ($ y) is the function that supplies y as an argument to another function.
  • The composition law says that pure (.) composes morphisms similarly to how (.) composes functions: applying the composed morphism pure (.) <*> u <*> v to w gives the same result as applying u to the result of applying v to w.[1]

There is also a bonus law about the relation between fmap and (<*>):

fmap f x = pure f <*> x                      -- fmap

Applying a "pure" function with (<*>) is equivalent to using fmap. This law is a consequence of the other ones, so you need not bother with proving it when writing instances of Applicative.
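
As with the functor laws, spot-checking particular cases in GHCi can make the laws feel less abstract. Here, for instance, are both sides of the homomorphism and interchange laws evaluated at sample values for Maybe (the annotations on the pure-only expressions merely pick the instance):

Prelude> pure (+1) <*> pure 2 :: Maybe Int   -- homomorphism, left-hand side
Just 3
Prelude> pure ((+1) 2) :: Maybe Int          -- homomorphism, right-hand side
Just 3
Prelude> Just (+1) <*> pure 2                -- interchange, left-hand side
Just 3
Prelude> pure ($ 2) <*> Just (+1)            -- interchange, right-hand side
Just 3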

Exercises
  1. Check that the Applicative laws hold for this instance for Maybe
  2. Write Applicative instances for
    a. Either e, for a fixed e
    b. ((->) r), for a fixed r

Déjà vu

Does pure remind you of anything?

pure :: Applicative f => a -> f a

The only difference between that and...

return :: Monad m => a -> m a

... is the class constraint. pure and return serve the same purpose; that is, bringing values into functors. The uncanny resemblances do not stop here. Back in the chapter about State we mentioned a function called ap...

ap :: (Monad m) => m (a -> b) -> m a -> m b

... which could be used to make functions with many arguments less painful to handle in monadic code:

allTypes :: GeneratorState (Int, Float, Char, Integer, Double, Bool, Int)
allTypes = liftM (,,,,,,) getRandom
                     `ap` getRandom
                     `ap` getRandom
                     `ap` getRandom
                     `ap` getRandom
                     `ap` getRandom
                     `ap` getRandom

ap looks a lot like (<*>).

Those, of course, are not coincidences. Monad inherits from Applicative...

Prelude> :info Monad
class Applicative m => Monad (m :: * -> *) where
--etc.

... because return and (>>=) are enough to implement pure and (<*>) [2].

pure = return
(<*>) = ap

ap u v = do
    f <- u
    x <- v
    return (f x)

Several other monadic functions have more general applicative versions. Here are a few of them:

Monadic    Applicative   Module (where to find the applicative version)
(>>)       (*>)          Prelude (GHC 7.10+); Control.Applicative
liftM2     liftA2        Control.Applicative
mapM       traverse      Prelude (GHC 7.10+); Data.Traversable
sequence   sequenceA     Data.Traversable
forM_      for_          Data.Foldable
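
As a small, hedged illustration of the table, here is traverse, the applicative counterpart of mapM, used with Maybe; the positive helper is an ad hoc definition made up for the example:

Prelude> let positive x = if x > 0 then Just x else Nothing
Prelude> traverse positive [1,2,3]
Just [1,2,3]
Prelude> traverse positive [1,-2,3]
Nothing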


Exercises
  1. Write a definition of (<*>) using (>>=) and fmap. Do not use do-notation.
  2. Implement
    liftA5 :: Applicative f => (a -> b -> c -> d -> e -> k)
    -> f a -> f b -> f c -> f d -> f e -> f k

ZipList

Lists are applicative functors as well. Specialised to lists, the type of (<*>) becomes...

[a -> b] -> [a] -> [b]

... and so (<*>) applies a list of functions to another list. But exactly how is that done?

The standard instance of Applicative for lists, which follows from the Monad instance, applies every function to every element, like an explosive version of map.

Prelude> [(2*),(5*),(9*)] <*> [1,4,7]
[2,8,14,5,20,35,9,36,63]

Interestingly, there is another reasonable way of applying a list of functions. Instead of using every combination of functions and values, we can match each function with the value in the corresponding position in the other list. A Prelude function which can be used for that is zipWith:

Prelude> :t zipWith
zipWith :: (a -> b -> c) -> [a] -> [b] -> [c]
Prelude> zipWith ($) [(2*),(5*),(9*)] [1,4,7]
[2,20,63]

When there are two useful possible instances for a single type, the dilemma is averted by creating a newtype which implements one of them. In this case, we have ZipList, which lives in Control.Applicative:

newtype ZipList a = ZipList { getZipList :: [a] }

We have already seen what (<*>) should be for zip-lists; all that is needed is to add the newtype wrappers:

instance Applicative ZipList where
    (ZipList fs) <*> (ZipList xs) = ZipList (zipWith ($) fs xs)
    pure x                        = undefined -- TODO

As for pure, it is tempting to use pure x = ZipList [x], following the standard list instance. We can't do that, however, as it violates the applicative laws. According to the identity law:

pure id <*> v = v

Substituting (<*>) and the suggested pure, we get:

ZipList [id] <*> ZipList xs = ZipList xs
ZipList (zipWith ($) [id] xs) = ZipList xs

Now, suppose xs is the infinite list [1..]:

ZipList (zipWith ($) [id] [1..]) = ZipList [1..]
ZipList [1] = ZipList [1..]
[1] = [1..] -- Obviously false!

The problem is that zipWith produces lists whose length is that of the shortest list passed as argument, and so (ZipList [id] <*>) will cut off all elements of the other zip-list after the first. The only way to ensure zipWith ($) fs never removes elements is to make fs infinite. The correct pure follows from that:

instance Applicative ZipList where
    (ZipList fs) <*> (ZipList xs) = ZipList (zipWith ($) fs xs)
    pure x                        = ZipList (repeat x)
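
Since the ZipList instance shipped in Control.Applicative behaves in the same way, we can check in GHCi that the identity law now holds even for the troublesome infinite list, taking a finite prefix so that the comparison terminates:

>>> import Control.Applicative
>>> take 5 . getZipList $ pure id <*> ZipList [1..]
[1,2,3,4,5]
>>> take 5 [1..]
[1,2,3,4,5]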

The ZipList applicative instance offers an alternative to the zipN and zipWithN functions in Data.List, one which extends to any number of arguments:

>>> import Control.Applicative
>>> ZipList [(2*),(5*),(9*)] <*> ZipList [1,4,7]
ZipList {getZipList = [2,20,63]}
>>> (,,) <$> ZipList [1,4,9] <*> ZipList [2,8,1] <*> ZipList [0,0,9]
ZipList {getZipList = [(1,2,0),(4,8,0),(9,1,9)]}
>>> liftA3 (,,) (ZipList [1,4,9]) (ZipList [2,8,1]) (ZipList [0,0,9])
ZipList {getZipList = [(1,2,0),(4,8,0),(9,1,9)]}


Sequencing of effects

As we have just seen, the standard Applicative instance for lists applies every function in one list to every element of the other. That, however, does not specify (<*>) unambiguously. To see why, try to guess the result of [(2*),(3*)] <*> [4,5] without looking at the example above or the answer just below.

Prelude> [(2*),(3*)] <*> [4,5]




[8,10,12,15]

Unless you were paying very close attention or had already analysed the implementation of (<*>), the odds of getting it right were about even. The other possibility would be [8,12,10,15]. The difference is that for the first (and correct) answer the result is obtained by taking the skeleton of the first list and replacing each element by all possible combinations with elements of the second list, while for the other possibility the starting point is the second list.

In more general terms, the difference between the two is one of sequencing of effects. Here, by effects we mean the functorial context, as opposed to the values within the functor (some examples: the skeleton of a list, actions performed in the real world in IO, the existence of a value in Maybe). The existence of two legal implementations of (<*>) for lists which only differ in the sequencing of effects indicates that [] is a non-commutative applicative functor. A commutative applicative functor, by contrast, leaves no margin for ambiguity in that respect. More formally, a commutative applicative functor is one for which the following holds:

liftA2 f u v = liftA2 (flip f) v u -- Commutativity

Or, equivalently,

f <$> u <*> v = flip f <$> v <*> u

By the way, if you hear about commutative monads in Haskell, the concept involved is the same, only specialised to Monad.
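
A quick check with liftA2 suggests how this plays out: on sample values, Maybe passes the commutativity test while the list functor produces results in different orders (as this only samples particular values, it is evidence rather than a proof). We import liftA2 from Control.Applicative, since older GHCs do not export it from the Prelude:

>>> import Control.Applicative
>>> liftA2 (,) (Just 2) (Just 3)
Just (2,3)
>>> liftA2 (flip (,)) (Just 3) (Just 2)
Just (2,3)
>>> liftA2 (,) [1,2] [3,4]
[(1,3),(1,4),(2,3),(2,4)]
>>> liftA2 (flip (,)) [3,4] [1,2]
[(1,3),(2,3),(1,4),(2,4)]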

Commutativity (or the lack thereof) affects other functions which are derived from (<*>) as well. (*>) is a clear example:

(*>) :: Applicative f => f a -> f b -> f b

(*>) combines effects while preserving only the values of its second argument. For monads, it is equivalent to (>>). Here is a demonstration of it using Maybe, which is commutative:

Prelude> Just 2 *> Just 3
Just 3
Prelude> Just 3 *> Just 2
Just 2
Prelude> Just 2 *> Nothing
Nothing
Prelude> Nothing *> Just 2
Nothing

Swapping the arguments does not affect the effects (that is, the being and nothingness of wrapped values). For IO, however, swapping the arguments does reorder the effects:

Prelude> (print "foo" *> pure 2) *> (print "bar" *> pure 3)
"foo"
"bar"
3
Prelude> (print "bar" *> pure 3) *> (print "foo" *> pure 2)
"bar"
"foo"
2

The convention in Haskell is to always implement (<*>) and other applicative operators using left-to-right sequencing. Even though this convention helps reduce confusion, it also means appearances are sometimes misleading. For instance, the (<*) function is not flip (*>), as it sequences effects from left to right just like (*>):

Prelude> (print "foo" *> pure 2) <* (print "bar" *> pure 3)
"foo"
"bar"
2

For the same reason, (<**>) :: Applicative f => f a -> f (a -> b) -> f b from Control.Applicative is not flip (<*>). That means it provides a way of inverting the sequencing:

>>> [(2*),(3*)] <*> [4,5]
[8,10,12,15]
>>> [4,5] <**> [(2*),(3*)]
[8,12,10,15]

An alternative is the Control.Applicative.Backwards module from transformers, which offers a newtype for flipping the order of effects:

newtype Backwards f a = Backwards { forwards :: f a }
>>> Backwards [(2*),(3*)] <*> Backwards [4,5]
Backwards [8,12,10,15]

Exercises
  1. For the list functor, implement from scratch (that is, without using anything from Applicative or Monad directly) both (<*>) and its version with the "wrong" sequencing of effects,
    (<|*|>) :: Applicative f => f (a -> b) -> f a -> f b
  2. Rewrite the definition of commutativity for a Monad using do-notation instead of ap or liftM2.
  3. Are the following applicative functors commutative?
    a. ZipList
    b. ((->) r)
    c. State s (Use the newtype definition from the State chapter. Hint: You may find the answer to exercise 2 of this block useful.)
  4. What is the result of [2,7,8] *> [3,9]? (Try to guess without writing.)
  5. Implement (<**>) in terms of other Applicative functions.
  6. As we have just seen, some functors allow two legal implementations of (<*>) which differ only in the sequencing of effects. Why is there no analogous issue involving (>>=)?

A sliding scale of power

Functor, Applicative, Monad. Three closely related functor type classes; three of the most important classes in Haskell. Though we have seen many examples of Functor and Monad in use, and a few of Applicative, we have not compared them head to head yet. If we ignore pure/return for a moment, the characteristic methods of the three classes are:

fmap :: Functor f => (a -> b) -> f a -> f b
(<*>) :: Applicative f => f (a -> b) -> f a -> f b
(>>=) :: Monad m => m a -> (a -> m b) -> m b

While those types look disparate, we can change the picture with a few cosmetic adjustments. Let's replace fmap by its infix synonym, (<$>); (>>=) by its flipped version, (=<<); and tidy up the signatures a bit:

(<$>) :: Functor     t =>   (a ->   b) -> (t a -> t b)
(<*>) :: Applicative t => t (a ->   b) -> (t a -> t b)
(=<<) :: Monad       t =>   (a -> t b) -> (t a -> t b)

Suddenly, the similarities are striking. fmap, (<*>) and (=<<) are all mapping functions over Functors [3]. The differences between them are in what is being mapped over in each case:

  • fmap maps arbitrary functions over functors.
  • (<*>) maps t (a -> b) morphisms over (applicative) functors.
  • (=<<) maps a -> t b functions over (monadic) functors.

The day-to-day differences in uses of Functor, Applicative and Monad follow from what the types of those three mapping functions allow you to do. As you move from fmap to (<*>) and then to (>>=), you gain in power, versatility and control, at the cost of guarantees about the results. We will now slide along this scale. While doing so, we will use the contrasting terms values and context to refer to plain values within a functor and to whatever surrounds them, respectively.

The type of fmap and the two associated laws ensure that it is impossible to use it to change the context, no matter which function it is given. In (a -> b) -> t a -> t b, the (a -> b) function has nothing to do with the t context of the t a functorial value, and so applying it cannot affect the context. For that reason, if you do fmap f xs on some list xs the number of elements of the list will never change.

Prelude> fmap (2*) [2,5,6]
[4,10,12]

That can be taken as a safety guarantee or as an unfortunate restriction, depending on what you intend. In any case, (<*>) is clearly able to change the context:

Prelude> [(2*),(3*)] <*> [2,5,6]
[4,10,12,6,15,18]

The t (a -> b) morphism carries a context of its own, which is combined with that of the t a functorial value. (<*>), however, is subject to a more subtle restriction. While t (a -> b) morphisms carry context, within them there are plain (a -> b) functions, which are still unable to modify the context. That means the changes to the context (<*>) performs are fully determined by the context of its arguments, and the values have no influence over the resulting context.

Prelude> (print "foo" *> pure (2*)) <*> (print "bar" *> pure 3)
"foo"
"bar"
6
Prelude> (print "foo" *> pure 2) *> (print "bar" *> pure 3)
"foo"
"bar"
3
Prelude> (print "foo" *> pure undefined) *> (print "bar" *> pure 3)
"foo"
"bar"
3

Thus with list (<*>) you know that the length of the resulting list will be the product of the lengths of the original lists, with IO (<*>) you know that all real-world effects will happen as long as the evaluation terminates, and so forth.
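
The length guarantee for lists, for instance, can be checked in one line:

Prelude> length ([(2*),(3*)] <*> [1,2,3])
6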

With Monad, however, we are in a very different game. (>>=) takes an a -> t b function, and so it is able to create context from values. That means a lot of flexibility:

Prelude> [1,2,5] >>= \x -> replicate x x
[1,2,2,5,5,5,5,5]
Prelude> [0,0,0] >>= \x -> replicate x x
[]
Prelude> return 3 >>= \x -> print $ if x < 10 then "Too small" else "OK"
"Too small"
Prelude> return 42 >>= \x -> print $ if x < 10 then "Too small" else "OK"
"OK"

Taking advantage of the extra flexibility, however, might mean having fewer guarantees about, for instance, whether your functions are able to unexpectedly erase parts of a data structure for pathological inputs, or whether the control flow in your application remains intelligible. In some situations there might be performance implications as well, as the complex data dependencies monadic code makes possible might prevent useful refactorings and optimisations. All in all, it is a good idea to only use as much power as needed for the task at hand. If you do need the extra capabilities of Monad, go right ahead; however, it is often worth it to check whether Applicative or Functor is sufficient.
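
As a hedged sketch of what that advice looks like in practice, consider looking up values in association lists with the Prelude's lookup. When the lookups are independent of each other, applicative style is all we need; when the second lookup depends on the result of the first, we genuinely need (>>=). The tables and names below are invented for the example:

-- Hypothetical tables, made up for this sketch.
phoneBook :: [(String, String)]
phoneBook = [("alice", "555-1234"), ("bob", "555-6789")]

aliasTable :: [(String, String)]
aliasTable = [("ally", "alice"), ("bobby", "bob")]

-- Independent lookups: Applicative is enough.
bothNumbers :: Maybe (String, String)
bothNumbers = (,) <$> lookup "alice" phoneBook <*> lookup "bob" phoneBook

-- The second lookup depends on the result of the first: this needs Monad.
numberOfAlias :: String -> Maybe String
numberOfAlias alias = lookup alias aliasTable >>= \name -> lookup name phoneBook

numberOfAlias "ally" evaluates to Just "555-1234". Without (>>=) (or join), the closest we could get would be a nested Maybe (Maybe String), much like the Just (Just 5) we ran into at the beginning of this chapter.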

Exercises

The next few exercises concern the following tree data structure:
data AT a = L a | B (AT a) (AT a)

  1. Write Functor, Applicative and Monad instances for AT. Do not use shortcuts such as pure = return. The Applicative and Monad instances should match; in particular, (<*>) should be equivalent to ap, which follows from the Monad instance.
  2. Implement the following functions, using either the Applicative instance, the Monad one or neither of them, if neither is enough to provide a solution. Between Applicative and Monad, choose the least powerful one which is still good enough for the task. Justify your choice for each case in a few words.
    a. fructify :: AT a -> AT a, which grows the tree by replacing each leaf L with a branch B containing two copies of the leaf.
    b. prune :: a -> (a -> Bool) -> AT a -> AT a, with prune z p t replacing a branch of t with a leaf carrying the default value z whenever any of the leaves directly on it satisfies the test p.
    c. reproduce :: (a -> b) -> (a -> b) -> AT a -> AT b, with reproduce f g t resulting in a new tree with two modified copies of t on the root branch. The left copy is obtained by applying f to the values in t, and the same goes for g and the right copy.
  3. There is another legal instance of Applicative for AT (the reversed sequencing version of the original one doesn't count). Write it. Hint: this other instance can be used to implement
    sagittalMap :: (a -> b) -> (a -> b) -> AT a -> AT b
    which, when given a branch, maps one function over the left child tree and the other over the right child tree.
(In case you are wondering, "AT" stands for "apple tree". Botanist readers, please forgive the weak metaphors.)

The monoidal presentation

Back in Understanding monads, we saw how the Monad class can be specified using either (>=>) or join instead of (>>=). In a similar way, Applicative also has an alternative presentation, which might be implemented through the following type class:

class Functor f => Monoidal f where
    unit  :: f ()
    (*&*) :: f a -> f b -> f (a,b)

There are deep theoretical reasons behind the name "monoidal" [4]. In any case, we can informally say that it does look a lot like a monoid: unit provides a default functorial value whose context wraps nothing of interest, and (*&*) combines functorial values by pairing values and combining effects. The Monoidal formulation provides a clearer view of how Applicative manipulates functorial contexts. Naturally, unit and (*&*) can be used to define pure and (<*>), and vice-versa.
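
To make that concrete, here is a minimal sketch of a Monoidal instance for Maybe, written against the class declaration above; it mirrors the Applicative instance for Maybe we saw earlier:

instance Monoidal Maybe where
    unit                  = Just ()
    (Just x) *&* (Just y) = Just (x, y)
    _        *&* _        = Nothing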

The Applicative laws are equivalent to the following set of laws, stated in terms of Monoidal:

fmap snd $ unit *&* v = v                    -- Left identity
fmap fst $ u *&* unit = u                    -- Right identity
fmap asl $ u *&* (v *&* w) = (u *&* v) *&* w -- Associativity
-- asl (x, (y, z)) = ((x, y), z)

The functions to the left of the ($) are just boilerplate to convert between equivalent types, such as b and ((), b). If you ignore them, the laws are a lot less opaque than in the usual Applicative formulation. By the way, just like for Applicative there is a bonus law, which is guaranteed to hold in Haskell:

fmap (g *** h) (u *&* v) = fmap g u *&* fmap h v -- Naturality
-- g *** h = \(x, y) -> (g x, h y)

Exercises
  1. Write implementations for unit and (*&*) in terms of pure and (<*>), and vice-versa.
  2. Formulate the law of commutative applicative functors (see the Sequencing of effects section) in terms of the Monoidal methods.
  3. Write from scratch Monoidal instances for:
    a. ZipList
    b. ((->) r)

Notes

  1. The corresponding property for (.) and plain functions is (u . v) w = u (v w).
  2. And if the Monad instance follows the monad laws, the resulting pure and (<*>) will automatically follow the applicative laws.
  3. It is not just a question of type signatures resembling each other: the similarity has theoretical ballast. One aspect of the connection is that it is no coincidence that all three type classes have identity and composition laws.
  4. For extra details, follow the leads from the corresponding section of the Typeclasseopedia and the blog post by Edward Z. Yang which inspired it.