# LED Lighting

We just replaced our entire hallway lighting with LEDs; in total, that’s 8 halogen bulbs gone. I have to say, I’m fairly impressed for a first-generation technology: it’s not perfect, but it does work well.

## The good, the bad and the ugly

The new bulbs aren’t as bright as the old halogens. Having said that, we bought some of the cheapest LED lights there are, little 1W babies; it’s possible to get ones that are much brighter.

The new bulbs also give off a slightly cooler light than the old ones, but not as cold as I expected; they’re entirely acceptable in the hallway.

## Some maths

The new bulbs cost €5 each and have a lifetime of 50,000 hours. As I said, they’re 1W bulbs, so they’re going to use about 50kWh over their lifetime.

The old bulbs also cost about €5 each, had a lifetime of 750 hours, and were 25W bulbs. Over the lifetime of the LED lights, I would have to replace them 67 times, and they would use 1,250kWh.

Electricity costs about €0.15 per kWh at the moment, and I can only imagine that will go up. Let’s make a conservative estimate that over the next 50 years, the average price will be €0.20 per kWh.

That puts the price of an LED light over the next 50 years at €15: €5 for the bulb, and €10 for the electricity. The halogen bulbs, meanwhile, cost €585: €335 for the bulbs and €250 for the electricity.
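The arithmetic above can be sketched in a few lines of Haskell. This is my own illustration, not from the post: the function names are invented, and the model assumes the €0.20/kWh estimate and the 50,000-hour LED lifetime used in the text.

```
-- Total cost of running one socket for the 50,000 h LED lifetime,
-- assuming a flat €0.20 per kWh. All names here are illustrative.
ledLifetimeHours :: Double
ledLifetimeHours = 50000

pricePerKWh :: Double
pricePerKWh = 0.20

-- bulbCost in euros, power in watts, bulbLife in hours
lifetimeCost :: Double -> Double -> Double -> Double
lifetimeCost bulbCost power bulbLife = bulbs * bulbCost + energy * pricePerKWh
  where
    -- how many bulbs burn out over the LED's lifetime
    bulbs  = fromIntegral (ceiling (ledLifetimeHours / bulbLife) :: Int)
    -- total energy drawn over that period, in kWh
    energy = power * ledLifetimeHours / 1000

-- lifetimeCost 5 1 50000  gives the €15 LED figure from the text;
-- lifetimeCost 5 25 750   gives the €585 halogen figure.
```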

I knew that LEDs cost less over time, but honestly, I had no idea the saving was that large.

# Collecting Non-Memory Resources

## A Problem

Let us consider a small problem. We would like to manage resources other than memory from a Haskell program; for the sake of argument, we will consider GPU resources. This can be done reasonably straightforwardly by using the IO monad to essentially write an imperative program that manages the resources. But doesn’t this defeat the point of functional programming? We lose so many of the benefits we normally get: we no longer get to describe only the result of our program, but have to describe how to get to it too. Not only that, but we’ve lost the wonderful garbage collection system that allows us to easily avoid all of those nasty segfaults we see in unmanaged languages. So, the problem today is: how do we extend the Haskell garbage collector (preferably without playing with the runtime or compiler) so that it can manage all these resources?

## An attempt

Let’s consider only one small subset of GPU resources – a shader. What we would like in our Haskell program is a pure value that represents the shader, which we can call on at a later date. We’d like a function that takes our shader code, and produces this pure value, and we’d like the resources on the GPU to be collected when the value is no longer in scope.

```
import System.IO.Unsafe (unsafePerformIO)
import System.Mem.Weak  (addFinalizer)

compile :: String -> String -> Shader
compile vertexShdrSrc fragShdrSrc = s
  where
    s = doCompile s vertexShdrSrc fragShdrSrc

{-# NOINLINE doCompile #-}
doCompile :: Shader -> String -> String -> Shader
doCompile s vertexShdrSrc fragShdrSrc =
  unsafePerformIO $ do
    {- Set up our fancy pants shader stuff here -}
    addFinalizer s {- Remove resources from the GPU here -}
    pure s  -- return the shader so the caller gets it back
```

What we hope will happen is that we return our shader, s, with a finalizer attached to it. When the garbage collector collects s, it will also collect the resources off the GPU. This all looks rather good, so let’s try using it:

```
myShader :: Shader
myShader = compile vertexShdrSrc fragShdrSrc
```

Evaluating myShader gives a constant use of s; the definition of this constant is looked up and substituted in, so myShader is now defined as the right-hand side of s. Unfortunately, there’s now nothing that points at s itself, so it’s garbage collected, and all our resources are removed from the graphics card.

## Conclusion

We’ve tried to find a way of getting automated collection of non-memory resources, but ultimately, not quite got there. I don’t see a way forward from this point, and would love to hear other people’s input on how this sort of management can be done.

# Cabal’s default install location

Cabal’s default install location is somewhat controversial: many people seem to like the default of user installs, while many others would prefer that it matched all other software and installed globally. The assumption in the community at the moment is that “most” want user installs. I wanted to find out if they’re right. If you’re a Haskeller, please vote; it’ll take a lot less time than voting for a new logo.

# Bottoms

In Haskell, every type has at least one value: ⊥. The empty type is not actually empty (it has ⊥ in it), sum types contain the sum of the two types plus one extra element (⊥), and so on.

But we don’t always need to do this. The ⊥ value of a type has one important feature: it’s the least defined value in the type. So let’s investigate the four primitive types:

- Empty – as this type has no values, there’s obviously no least defined one, so we definitely need to add an extra value.
- () – this type has only one value, so that value clearly is the least defined. Thus (⊥ :: ()) can safely be defined to be ().
- Sum types – any value we choose in (A + B) must carry a tag to determine which of type A or B it’s in, and so cannot be the least defined value; if we choose a value in A, it carries enough information to say it’s not in B, and vice versa. Thus for sum types, we must add an extra ⊥.
- Product types – assuming that we have bottom values in types A and B, we can define ⊥ for (A × B) as being (⊥ :: A, ⊥ :: B).

One can clearly make at least two choices here: either the choice Haskell makes (add bottom values everywhere), or add bottom values only where they are needed. One could argue convincingly that what Haskell does is very consistent and predictable, but interestingly, the other choice has some nice results.

The functor laws demand that `fmap id x = x`. A special case of this is that `fmap id ⊥ = ⊥`. Let’s look at this for pairs: it means that `fmap id undefined = undefined`, but this isn’t as lazy as we could be; we’d like to be able to return a tuple straight away, without looking at any of the tuple we’re given.

If however we choose to only add a bottom value to a type when needed, then bottom for tuples really is (⊥, ⊥), and we’re able to define fmap for tuples as `fmap f x = (fst x, f $ snd x)` and not break the Functor laws.
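To see concretely what the lazier fmap buys us, here’s a small sketch (`lazyFmap` is my own name, not a standard function; the irrefutable pattern `~(a, b)` makes it return a pair constructor before inspecting its argument). Note that under Haskell’s actual lifted tuples this definition breaks `fmap id x = x`, since `lazyFmap id ⊥` is `(⊥, ⊥)` rather than `⊥`; under the alternative semantics described above, those two values coincide, so the law would hold.

```
-- A maximally lazy fmap for pairs (illustrative, not the real instance):
lazyFmap :: (b -> c) -> (a, b) -> (a, c)
lazyFmap f ~(a, b) = (a, f b)

-- The pair constructor comes back immediately, so we can match on the
-- result even when the argument is undefined:
demo :: String
demo = case lazyFmap (+ 1) (undefined :: (String, Int)) of
         (_, _) -> "got a pair without touching the argument"
```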

This entry’s going to be relatively short. I wanted to point out something that I find very odd.

In the software industry, we strive to eliminate code duplication. Duplicated code is duplicated effort, duplicated errors, and duplicated complexity. Yet I hear the same thing cropping up over and over again in many different projects: “I’m not going to use xyz, because it adds an extra dependency”.

Are dependencies so bad that we can’t stand to see just one extra one in the aid of reducing code duplication?

My feeling is that the reason people don’t want to do this is that the dependencies are often pretty heavyweight things. What they’re really complaining about is that “xyz” covers too much, and that they only want to use a small part of it.

In the interests of increasing code reuse, then, I’d like to suggest one simple task to everyone: release packages, release many packages, release small packages. Small packages are great, because people won’t see them as great monolithic blocks that they don’t want to depend on. Bottom line: if a package can be split into two in a reasonable way, do it! Oh, and one more thing: for the love of god, do it in an open way!

# How you should(n’t) use Monad

I’m often surprised by the fact that many people see monads as the be-all and end-all of Haskell’s abstraction techniques. Beginners often struggle over them, thinking that they’re something incredibly important and complex. In reality, though, monads are neither important nor complex, and that’s what I aim to show with this little tutorial about what other abstraction techniques you can use, and when they’re appropriate.

## First things first – IO

The reason a lot of people come across monads is that they want to get on with interacting with the outside world – they want to do I/O. I’m going to get this out of the way quickly. In Haskell, IO is kept in a carefully constrained box, because unconstrained it violates referential transparency (that is, you can get multiple answers from the same call to the same function depending on what the user happens to type/how they wibble the mouse/etc). You can recognise functions in this box by their type signature – they involve a type that looks like this: `IO something`.

Here are a few examples:

```
-- Call this, and you get back a Char,
-- carefully wrapped up in an IO type.
getChar :: IO Char

-- This one doesn't have an interesting result,
-- only a unit value wrapped up in an IO type.
putStrLn :: String -> IO ()
```

These can be stuck together with the handy do notation:

```
main = do
  name <- getLine
  putStrLn ("Your name is " ++ name)
```

There, that’s that out of the way – we can do IO now, and we didn’t need to understand anything nasty or complex.

## Some Abstraction techniques – Dealing with values in boxes

We often deal with values that are hidden inside other types: we use lists to hide values in, we use Maybes, etc. What would be nice is if we could apply a function to the values hiding in those boxes. Early in learning Haskell, I’m sure you will have met a function that can do this for lists: map. This can be generalised, though: enter the Functor. Functors can do one thing, and one thing only: they can apply a function to a value inside an outer construction. They do this with the function fmap (for Functor map). Let’s look at some examples:

For Lists, fmap is simply map:

```
> fmap (+1) [1,2,3,4]
[2,3,4,5]
```

For Maybe values, fmap lets us apply a function to the value inside a Just:

```
> fmap (+1) (Just 1)
Just 2
> fmap (+1) Nothing
Nothing
```

For Tuples, fmap lets us apply a function to the second half of the tuple (if we import an extra module that defines it):

```
> import Control.Applicative
> fmap (+1) (1,2)
(1,3)
```

We can use fmap to target a function into several layers of boxes by composing applications:

```
> (fmap . fmap) (+1) [(1,2), (3,4)]
[(1,3),(3,5)]
```

Here, the first fmap pushes (fmap (+1)) inside the list, and the second fmap pushes (+1) inside the tuples.

## Putting things in boxes

All that isn’t very useful if we can’t actually put something in a box in the first place. This is where the pure function comes in handy. This function lets us wrap anything we like in a box.

```
> pure 1 :: [Int]
[1]
> pure 1 :: Maybe Int
Just 1
> pure 1 :: Either String Int
Right 1
```

In Haskell, the pure function is in the Applicative class (you’ll need to import Control.Applicative). The Applicative class does some other interesting things, as we’ll see in a minute. Because of this, it would be nice if pure were separated out into its own little class, but unfortunately that isn’t the way it is in Haskell (at the moment, at least).

## Boxes everywhere

So, we’ve seen how to put something in a box, and we’ve seen how to apply a function to a value in a box, but what if our function is in a box too? At this point, the Applicative class really comes into its own. The (<*>) function from Applicative lets us apply boxes to each other as long as they have the right types of values inside.

```
> (Just (+1)) <*> (Just 1)
Just 2
> [(+1), (*2), (^3)] <*> [1,2,3]
[2,3,4,2,4,6,1,8,27]
```

That second result is not entirely clear – what’s going on? Well, the (<*>) function has applied each function to each argument in turn, and bundled up all the results in one list. (+1) gets applied to each argument, generating the results 2,3 and 4; (*2) gets applied to each argument, generating the results 2,4 and 6; and finally (^3) gets applied to each argument, generating the results 1,8 and 27.

An important note: All Applicatives are also Functors. You can implement fmap for any Applicative like this:

`fmap f a = (pure f) <*> a`

Applicative in fact does this for you, but calls the function (<$>).

## Functions that produce boxes

When we have functions that produce values that are hidden inside boxes, we have a problem: each time we apply the function, we get an additional layer of boxes. This isn’t particularly pretty, nor composable. This is where monads come in. Monads add a single function called join, which is used to flatten out the layers of boxes:

```
> join (Just (Just 2))
Just 2
> join [[1],[2],[3,4,5]]
[1,2,3,4,5]
```

The join function lets us compose our box-producing functions more easily: we fmap our box-producing function over values in a box, which results in a two-layer set of boxes, and then use join to squash that back down again. This pattern is so useful that we call it “bind” or (=<<).

`f =<< a = join (f <$> a)`

Some people like to define this the other way round:

`a >>= f = join (f <$> a)`

This allows a very imperative style of programming where we ask the language to take the result of one computation, push it through a function, take the results, push them through another function, etc.
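The fmap-then-join pattern above can be sketched concretely. The `safeDiv` example is my own, not from the text; it just shows the two layers appearing and being flattened, and that bind does both steps at once:

```
import Control.Monad (join)

-- A box-producing function: division that fails on zero.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv a b = Just (a `div` b)

-- fmapping the box-producing function gives two layers of boxes:
twoLayers :: Maybe (Maybe Int)
twoLayers = fmap (safeDiv 10) (Just 2)  -- Just (Just 5)

-- join squashes them back down:
flattened :: Maybe Int
flattened = join twoLayers              -- Just 5

-- bind performs both steps in one go:
viaBind :: Maybe Int
viaBind = Just 2 >>= safeDiv 10         -- Just 5
```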

As with Applicatives and Functors, all Monads are Applicatives. We can define the (<*>) function using only bits of a Monad and the pure function:

`f <*> a = f >>= (\fv -> a >>= (pure . fv))`

We can see here a common pattern with monadic programming. We bind a function returning a monadic value into a lambda. Haskell provides a syntactic sugar for doing this called do notation:

```
f <*> a = do
  fv <- f
  av <- a
  pure (fv av)
```

We can now see clearly that IO in Haskell is not using any magic at all to introduce an imperative concept to a functional language, instead the IO type is simply a monad. Remember, this means that it’s a functor and an applicative too, so we can use <$> and <*> wherever we please in IO code to apply functions to IO values.

## Why you don’t always want to go for Monads

As we’ve seen, Monad sits atop a set of classes, proudly the most powerful of all, but that doesn’t mean we want to use it all the time. Monad gives a very imperative feel to our code: it reveals an order that isn’t necessarily there. Do notation particularly seems to suggest (in our example above) that we should first take the value out of f, then take the value out of a, and then apply the two. In reality, this order is not there; the Haskell runtime is free to evaluate these in any order it likes. This can make such language constructs dangerous. Firstly, we’re functional programmers because we like describing what things “are”, not what steps you should take to produce them. Secondly, the steps we seem to give here are not the ones that the runtime will really take in the end.

Let’s look at an example of when we really shouldn’t use Monads. We have excellent parser combinators in the Parsec library. These let us define small parsers and stick them together using the monadic interface. Let’s define a small parser to parse an Int that may or may not be in a string:

```
parseInt = do
  ds <- many1 digit
  pure (read ds)

parseMaybe p =
  (do n <- p
      pure (Just n))
  <|> (pure Nothing)
```

We are expressing an ordering that we don’t intend to: first we accept at least one digit into ds, then we read them and rewrap them in a parser. In parseMaybe, first we parse something and take the value out into n, then we wrap it in Just and give it back.

This isn’t clear. Why couldn’t we just describe the grammar? Why do we have to specify an order? Let’s patch Parsec to provide an Applicative instance:

```
instance Applicative (GenParser a st) where
  (<*>) = ap
  pure  = return
```

Note that I’m using a shorter version of the definition of (<*>) in a monad using the ap function. Now we may use the applicative functor interface:

```
parseInt = read <$> many1 digit

parseMaybe p = (Just <$> p) <|> (pure Nothing)
```

Not only are these definitions shorter, but we can quickly and easily see their meanings: an integer is many digits, with read applied to get them into a form we can use in Haskell.

Let’s look at a more complex example:

```
data Record = R { firstCol, secondCol :: Maybe Int }
  deriving (Show)

parseRecord = do
  col1 <- parseMaybe parseInt
  char ','
  col2 <- parseMaybe parseInt
  char '\n'
  pure (R col1 col2)
```

Again, we’re specifying an order we don’t want to see. Let’s look at this in applicative style:

```
parseRecord =
  R <$> (parseMaybe parseInt <* char ',')
    <*> (parseMaybe parseInt <* char '\n')
```

Note the use of the (<*) function: this simply takes the value from the left-hand parser and passes it up, ignoring the value returned by the right-hand parser. We can now see that parseRecord constructs a record from two Maybe Ints, separated by a comma. We haven’t introduced any orderings that we don’t need to, and we’ve even condensed our code a little.
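For intuition about (<*) and its mirror (*>), here’s how they behave on Maybe, the simplest Applicative to inspect. These are my own examples, not from the post; the point is that the effects on both sides still run, and only which result is kept differs:

```
-- (<*) keeps the result of the left computation:
keepLeft :: Maybe Int
keepLeft = Just 1 <* Just 2   -- Just 1

-- (*>) keeps the result of the right computation:
keepRight :: Maybe Int
keepRight = Just 1 *> Just 2  -- Just 2

-- A failure on either side still wins, just as a failing parser would:
leftFails :: Maybe Int
leftFails = Nothing <* Just 2  -- Nothing
```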

## Conclusions

We’ve seen the hierarchy of classes in Haskell in all its glory, rather than focusing unduly on the Monad, we’ve seen that the Monad interface, while powerful, is not always desirable. Hopefully we’ve seen that a lot of our monadic code can be cleaned up to use the Applicative (or maybe even Functor) interface instead.

Last night in #haskell, there was some interesting discussion going on, in which we discovered a class of data structures that seems to fit in between Applicative and Monad. A foreword though: Most of the credit for this article should go to Brent Yorgey, Cale Gibbard, Peaker and Tristan Seligmann – the article is the product of a discussion with them last night in which they did most of the work.

### Why doesn’t Applicative cut the mustard?

One of the major drawbacks of Applicative is that you can’t make choices without evaluating too much. If we attempt to use `liftA3 if'`, we find that we evaluate both branches, no matter what we get back from our predicate.

### So just use Monad, right?

Well, no: bind and join give us too much power. They let us make choices between an infinite number of continuations, when we only want the ability to choose between a finite number of branches.

We can define this class then:

```
class Applicative b => Branchy b where
  ifB :: b Bool -> b a -> b a -> b a
```

We can add a pair of laws to this that force it to really be an if-like operation, and make sure that we get laziness properly:

```
ifB (pure True ) x y = x
ifB (pure False) x y = y
```

These laws, though, may not be complete enough to give us a useful class. Cale has suggested the following additional law to get us closer, but the remaining holes in the laws are a definite place where some work is needed to hash this out.

`f <*> ifB b x y = ifB b (f <*> x) (f <*> y)`

So then, do we actually have something useful here? For that, we would have to find two things: an Applicative which is not a Branchy (to show that we’ve not just reinvented Applicative), and a Branchy that’s not a Monad. Well, clearly `Const a` is not a Branchy: a `Const a Bool` does not actually contain a Bool with which to decide which branch to take. And here’s the useful Branchy that’s not a Monad:

```
instance Branchy ZipList where
  ifB (ZipList ps) (ZipList xs) (ZipList ys)
    = ZipList $ zipWith3 if' ps xs ys
```
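Putting the pieces together as a self-contained sketch (my own assembly of the definitions above; note that `if'` isn’t in base, so it’s defined inline here), the ZipList instance picks pointwise between the two branches:

```
import Control.Applicative (ZipList (..))

class Applicative b => Branchy b where
  ifB :: b Bool -> b a -> b a -> b a

-- if' isn't in base; define the obvious functional if inline.
if' :: Bool -> a -> a -> a
if' c t f = if c then t else f

instance Branchy ZipList where
  ifB (ZipList ps) (ZipList xs) (ZipList ys)
    = ZipList (zipWith3 if' ps xs ys)

-- Each position independently chooses its branch:
picked :: [Int]
picked = getZipList (ifB (ZipList [True, False, True])
                         (ZipList [1, 2, 3])
                         (ZipList [10, 20, 30]))
-- picked = [1,20,3]
```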

Andrea Vezzosi commented this morning that the interface would be rather nicer if rather than using `ifB`, we used `eitherB :: b (Either c d) -> b (c -> e) -> b (d -> e) -> b e`. While each can be implemented in terms of the other, `eitherB` allows for a little more efficiency. The following implementation of `eitherB` evaluates the either twice:
```
eitherB fe fl fr = ifB (isLeft <$> fe)