July 26, 2008

I’ve been an xmonad user since November, and a contrib developer since December. I love it, and really miss it when I have to use Windows or a friend’s Linux machine. I’ve introduced several classmates to it, and the ones who use Linux regularly use xmonad as well.

One of the best features of xmonad is that the configuration file is a Haskell source file. This allows for great extensibility, flexibility and control of one’s setup.

One of the great weaknesses of xmonad is that the configuration file is a Haskell source file. This means that a user must have a working GHC 6.8 (does 6.6 still work?) installed in order to change their xmonad configuration.

For someone already involved in Haskell, this is not a problem, of course. For someone looking for a good tiling window manager, GHC is a dependency of the heaviest sort. It totals some 150MB of disk on Ubuntu Hardy, for example.

To most people that’s nothing as far as disk space goes; gigabytes of storage are cheap and plentiful, even on laptops. The exception is the new breed of Eee PCs and similar machines: the ritziest models have 8GB of storage, and most have 4GB. There, GHC starts to be genuinely expensive.

Above and beyond the actual disk space cost is the psychological cost of a big dependency. It hardly fits the presentation of xmonad as a minimalist window manager to require the installation of a complete compiler toolchain.

There are blog posts and mailing list posts about the problem, and a few days ago a user on #xmonad balked at it, though the combined weight of Don Stewart, Spencer Janssen and me persuaded him to try it anyway; he seems well pleased with the investment of space.

That last one really drives home the point: GHC is an initial hurdle only; once a user has discovered how awesome xmonad is, they will gladly sacrifice the space for it.

Strictly speaking, xmonad doesn’t require GHC: you can run GHC-compiled binaries without the compiler present. For xmonad, however, this is extremely limiting, as a user cannot even change the mod key without recompiling their xmonad.hs, and therefore without GHC.

And so we are left with a conundrum. Certainly we do not wish to sacrifice the flexibility and power of using Haskell as a configuration language. So the solution, if any, lies in improving the situation for the user who does not have GHC. Since such a user currently has no options at all, any alternative is an improvement. I have three ideas for consideration; all are discussed in detail below.

1. An archive of precompiled configuration files alongside the config archive on the wiki.
2. A web service that compiles users’ configurations remotely, so a compiled xmonad.hs can be downloaded without a local GHC.
3. A supplementary plain-text config file that a precompiled xmonad.hs could parse, generating an `XConfig` and passing that to the `xmonad` function.

1. Config Archive
This solution requires the least effort, just a bit of web hosting space. It’s also the most limiting, since users can’t mix and match; they can only select from the existing models. Better than nothing, but not by much. I don’t have much more to say about it; I favour the ideas below.

2. Web Service
This solution is the most flexible, allowing users to have their cake and eat it too: the full power of Haskell configuration without the cost of GHC. This would be the approach I’d support, except that there are serious challenges. Template Haskell allows arbitrary code execution at compile time, and the typechecker can be sent into an infinite loop. There might be other bugs allowing arbitrary compile-time execution even if we could disable TH. Finally, we need a box to run this on, and no one has volunteered.

3. `PlainConfig`
This option is actually implemented already, at least in proof-of-concept form. I wrote `XMonad.Config.PlainConfig` in April to demonstrate that this idea could work, and really can run with a precompiled xmonad.hs, without GHC. It works, though of course it is far more limiting than a Haskell configuration. Key bindings are done using `EZConfig`, drawing from a list of predefined actions. Layouts are selected from a predefined list as well. Workspaces can have arbitrary labels, and as many as you like (though no list comprehensions for the new key bindings!). Mod and NumLock keys can be set freely, as can the colours, border width, terminal and focus-follows-mouse. `ManageHook`s use a parser from Roman Cheplyaka to handle arbitrary `resource`, `className` and `title` queries, plus the two standard actions `doFloat` and `doIgnore`. `doShift` wasn’t in core when I wrote `PlainConfig`, but it is very easy to add.

Similarly, it would be easy to add more layouts and keybinding actions; they are just entries in a `Data.Map String Whatever`. The two most common configuration elements that are missing are `logHook`s for status bars and `layoutHook` modifiers like `avoidStruts`. The latter is straightforward to add (there are only a few of them, after all), and the former is only slightly more difficult, though it doesn’t allow a lot of flexibility.
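As a sketch of the idea (this is illustrative only, not the actual `PlainConfig` code; the `Conf` record and key names here are hypothetical stand-ins for `XConfig` fields), such a parser is little more than a `Data.Map` of known keys to setter functions:

```haskell
import qualified Data.Map as M
import Data.Char (isSpace)

-- A hypothetical settings record standing in for XConfig.
data Conf = Conf { terminal :: String, borderWidth :: Int }
  deriving (Show, Eq)

defaults :: Conf
defaults = Conf { terminal = "xterm", borderWidth = 1 }

trim :: String -> String
trim = f . f where f = reverse . dropWhile isSpace

-- Each known key maps to a setter; adding a feature is adding an entry.
setters :: M.Map String (String -> Conf -> Conf)
setters = M.fromList
  [ ("terminal",    \v c -> c { terminal = v })
  , ("borderWidth", \v c -> c { borderWidth = read v })
  ]

-- Lines look like "key = value"; unknown keys are silently ignored.
parseLine :: Conf -> String -> Conf
parseLine c l = case break (== '=') l of
  (k, '=':v) -> maybe c (\set -> set (trim v) c) (M.lookup (trim k) setters)
  _          -> c

parseConf :: String -> Conf
parseConf = foldl parseLine defaults . lines
```

Extending the format is then exactly the “add an entry to the map” operation described above.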

The Kicker

From the relative volume of those three possibilities, it should be fairly clear which I think is the most practical approach to lightweight xmonad configuration. `PlainConfig` works, and was designed to be easily extended with more contrib modules (`ManageDocks`, `EwmhDesktops`, layouts) and more keybinding actions (`CopyWindow`, `Prompt`).

The next key piece of the puzzle is the idea of nightly builds. It’s been a long time since the 0.7 release, and contrib has been zipping ahead as ever. Not everyone wants to install darcs, check out the repositories, and build xmonad from source just to get at the new modules and changes. A system of nightly builds would allow these users to stay up-to-date and take advantage of the active development without needing darcs or, indeed, GHC.

It would be straightforward to provide two distinct builds every night. The first would be xmonad as packaged presently for Debian, Arch et al., using the stock config and with no xmonad.hs. Alongside this, however, could be a version which includes the prebuilt xmonad.hs that parses the user’s xmonad.conf using `PlainConfig`.

And the Gateway Drug

Inevitably, many `PlainConfig` users are going to want something it can’t provide, and can’t easily be modified to provide. To avoid making this a dilemma between improved configuration power and having to rewrite their existing config into an xmonad.hs, I plan to write a “cross-compiler” from xmonad.conf files to an equivalent xmonad.hs which can be a drop-in replacement.

Concerning Evangelism

During discussion of the web compilation idea on the mailing list, there was a rather emphatic post cursing the drive to evangelise open source projects, since it just invites a load of clueless newbies who were “attracted to the *marketing* instead of the functionality”. I certainly try to avoid being a zealot, and readily suggest to users who take exception to fundamental aspects of xmonad that they consider one of the alternative WMs. On the other hand, more users for xmonad means more potential contributors, more people checking bugs, new sources of ideas (whether they can code them personally or not) and more buzz for Haskell. I consider all of these worthy goals, and so if adding a feature like GHC-less configuration to xmonad makes me an evangelist, I will bear that label gladly.

In the spirit of “patches welcome”, I plan to work on the “cross-compiler” and the nightly builds, both normal and `PlainConfig`-ready, in the next week. I will donate my own desktop to the cause as a build box until an alternative can be found. I hope that once this system is in place, we can put out the word that xmonad no longer requires GHC loudly enough to overcome the inertia, and, perhaps more importantly, kill the untruth that one must learn Haskell to use xmonad! There are unfortunately numerous blog posts out there that compare tiling WMs and cite one or both of these as disadvantages of xmonad. The Haskell rumour is particularly insidious: although xmonad users have said in many places that it is simply not true, and that far more users have zero Haskell knowledge than have some, it persists.

Of course, the reverse rumour (that xmonad has crippled its configuration, or that Haskell isn’t a good language for configuration) will need a strong pre-emptive strike. It needs to be blindingly obvious that both options for configuration are still supported!

## Fixed Point Datatypes

July 7, 2008

This post continues my Haskell translation of my reading of “Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire”, this time concerning fixed points of data types.

First, what is a fixed point (a.k.a. fixpoint)? In calculus, functions can have fixed points: these are values $x$ such that $f(x) = x$. As an example, $f(x) = x^2$ has fixed points at both 0 and 1. For $f(x) = x$, every $x$ is a fixpoint!

The notion of fixpoints in programming is somewhat different, though related. In programming, fixed points are about self-application rather than the calculus definition above. I hope the distinction will become clear with some examples.
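For ordinary functions, this self-application view is captured by the standard fixed-point combinator `fix` from `Data.Function`. A quick illustration before we move on to data types:

```haskell
import Data.Function (fix)

-- fix f = f (fix f): feed a function to itself forever.
-- factStep is not itself recursive; fix ties the knot for it.
factStep :: (Integer -> Integer) -> Integer -> Integer
factStep rec n = if n <= 1 then 1 else n * rec (n - 1)

factorial :: Integer -> Integer
factorial = fix factStep
```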

First, consider the following definition of a simple pair-like type:

`data Pair a b = Pair a b`

I can use any type I like for `a` and `b`, including, say, `Pair a b`:

`example1 = Pair 1 (Pair 2 'a')`

I could go further:

`example2 = Pair 1 (Pair 2 (Pair 3 (Pair 4 (Pair 5 'a'))))`

Is that starting to look familiar? This is an approximation of lists, though not exactly a pleasant one to work with.

Note that `Pair a` is a Functor. This idea of applying a Functor to itself is exactly the fixpoint of a data type. With the `Pair a` Functor there is nothing to stop the self-application, so we have something isomorphic to an infinite `[a]`.

Let’s augment `Pair` to include a second, nullary constructor:

`data Pair a b = Stop | Pair a b`

We can then take the fixpoint of the Functor `Pair a`, which is isomorphic to:

`data PairFix a = Stop | Pair a (PairFix a)`

And that is clearly isomorphic to desugared lists:

`data List a = Empty | Cons a (List a)`

We can even abstract this notion of self-application of a Functor in a curious newtype:

`newtype Functor f => Mu f = In (f (Mu f))`

Such that `PairFix a` is isomorphic to `Mu (Pair a)`, assuming we’ve declared an instance of Functor for `Pair a`.
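To make this concrete, here is a runnable version of the construction. I’ve renamed the two-constructor type to `PairF` to avoid clashing with the earlier `Pair`, and dropped the `Functor f =>` context, which modern GHC accepts only with a deprecated extension:

```haskell
-- The two-constructor pair type, renamed to avoid a clash.
data PairF a b = Stop | PairF a b

instance Functor (PairF a) where
  fmap _ Stop        = Stop
  fmap f (PairF x y) = PairF x (f y)

newtype Mu f = In (f (Mu f))

-- Witnessing the isomorphism Mu (PairF a) ~ [a]:
toList :: Mu (PairF a) -> [a]
toList (In Stop)         = []
toList (In (PairF x xs)) = x : toList xs

fromList :: [a] -> Mu (PairF a)
fromList []     = In Stop
fromList (x:xs) = In (PairF x (fromList xs))
```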

Lists are actually only one example: `Mu` can be applied to any Functor. Take the binary tree implementation from a previous post:

`data Tree a = Empty | Tree a (Tree a) (Tree a)`

This is the fixpoint of the following type:

`data Cell a b = Empty | Cell a b b`

Since `Cell a` forms a Functor, `Mu (Cell a)` is isomorphic to `Tree a`.

This is all merely an intriguing bit of type theory until we realize that all of these structures have something in common: they can all be recursed over. This can be seen most clearly by looking at lists, which, as I demonstrated above, are isomorphic to `Mu (Pair a)` using the second definition of `Pair`. The main thrust of “Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire” is that the recursion patterns (i.e. higher-order functions) we generally think of as being for lists are in fact applicable to any fixpoint data structure.

The paper goes on to describe four core “foomorphisms”: catamorphisms, anamorphisms, hylomorphisms and paramorphisms. The names are from Greek, and like “monad” and “functor” are far scarier than the things they name. These generalize foldr, unfoldr, the composition of an unfold followed by a fold, and primitive recursion, respectively.
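As a taste, the catamorphism takes only one line over `Mu`. Here is a self-contained sketch (reusing the `PairF` list functor from above), with summation as the algebra:

```haskell
data PairF a b = Stop | PairF a b

instance Functor (PairF a) where
  fmap _ Stop        = Stop
  fmap f (PairF x y) = PairF x (f y)

newtype Mu f = In (f (Mu f))

-- The catamorphism: collapse the structure bottom-up with an algebra.
cata :: Functor f => (f a -> a) -> Mu f -> a
cata phi (In x) = phi (fmap (cata phi) x)

-- Summation is just one algebra for PairF; foldr (+) 0 in disguise.
sumMu :: Mu (PairF Int) -> Int
sumMu = cata alg
  where alg Stop        = 0
        alg (PairF x s) = x + s

fromList :: [a] -> Mu (PairF a)
fromList = foldr (\x r -> In (PairF x r)) (In Stop)
```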

I had originally planned to describe these in more detail here, but Edward Kmett (edwardk) has begun writing a Field Guide to all of these recursion schemes. His posts on each of the foomorphisms include both category theory and concrete examples, and are an excellent read. As you can see from the front page of his Field Guide, there are many, many more than the four described in the original paper. Implementations of all of these and much more are available in category-extras on Hackage.

I will still describe some of the core examples and definitions from the paper in later posts, in particular my explorations of the standard examples (Peano numbers, factorial, trees) from Edward Kmett’s Guide and elsewhere. My goal is to learn to recognize them well “in the wild”, since they aid understanding even when a function doesn’t explicitly use the foomorphism.

## Prototypes Better Than Production

June 4, 2008

As a take-home technical test as part of a job interview process, I was asked to write a function to convert a number into a string containing its representation in Roman numerals. This is an interesting little puzzle and worth a try, so consider writing or pondering your own solution before reading mine.

The precise specification follows:

• “M” = 1000, “D” = 500, “C” = 100, “L” = 50, “X” = 10, “V” = 5, “I” = 1.
• To find the value of a set of Roman numerals, you add up the values of the characters.
• A power of ten can only be repeated three times; i.e., XXX = 30, but XXXX is not valid.
• Numerals that are not powers of ten can only appear once; i.e., VV is not valid.
• The numerals must read highest to lowest from left to right.
• If a numeral of smaller value appears before one of higher value, the smaller value is subtracted from the higher, e.g., IX = 9.
• You can subtract only powers of ten, i.e., I, X, or C.
• Only one numeral can be used to subtract from a larger numeral; e.g., IIX = 8 is not allowed.
• You can’t subtract a numeral from one that is more than ten times greater. That is, you can only subtract I from V or X, X from L or C, etc. IC cannot be used for 99; it must be XCIX.

I began writing a solution in Java, but found that the mechanics of writing it in Java were obscuring my thinking about the algorithm. I moved to Haskell to prototype my solution. This proved an effective move, allowing me to stop worrying about whether I was using a String or a StringBuffer, and to use tuples instead of separate classes.

```module Main where

import System.IO

-- one element per order of magnitude, with the ones value,
-- and the Roman numerals for 1, 5 and 9 at this order.
romans = [ (1000, ("M", "" , ""  ))
         , (100 , ("C", "D", "CM"))
         , (10  , ("X", "L", "XC"))
         , (1   , ("I", "V", "IX"))
         ]

-- utility function for converting at
-- each order of magnitude
multiples 0 _       = ""
multiples 9 (_,_,z) = z
multiples 4 (x,y,_) = x ++ y
multiples 5 (_,y,_) = y
multiples n (x,y,_) | n > 5     = y ++ concat (replicate (n-5) x)
                    | otherwise = concat $ replicate n x

-- the main work: iterate through the orders of
-- magnitude, accumulating the result.
romanize x = fst $ foldl step ("", x) romans

step (r,n) (v,s) = (r ++ multiples d s, m)
  where (d,m) = n `divMod` v

-- print the (Arabic, Roman) pairs for 1..200
main = mapM_ (print . (\x -> (x, romanize x))) [1..200]```

This solution is based on a left fold, working over a list containing the Roman numeral info for each order of magnitude. In `romans`, each list element is a pair containing the order of magnitude (in descending order) and a triple of Strings with the Roman numeral representations of 1, 5 and 9 at that order.

I think the code is probably clearer than a further English description, really. I might be wrong, especially since I wrote it, but I’m going to stop narrating the code.

So that was my rapid prototyping in Haskell. It took a short time to write and a very short time to debug, and it exposed a couple of little mistakes in my initial algorithm, which I corrected quickly. Exactly what rapid prototyping is supposed to give you.

Now comes the curious part. I rewrote the algorithm in Java. The tuples in `romans` became an inner class with those four values as member variables; that’s fairly natural. It has two member functions essentially implementing `step` and `multiples`. I think the Java code is fairly idiomatic, clear, efficient and generally good. It follows:

```import java.io.*;

public class RomanNumerals {

    // for testing, print out (Arabic, Roman) for 1..200.
    public static void main (String[] args) {
        new RomanNumerals();
    }

    public RomanNumerals () {
        for (int i = 1; i <= 200; i++) {
            System.out.println("(" + i + ",\"" + toRoman(i) + "\")");
        }
    }

    // one element per order of magnitude,
    // recording the ones value, and the Roman
    // numeral for multiples of 1, 5 and 9.
    Power[] powers = { new Power(1000, "M", "",  ""),
                       new Power(100,  "C", "D", "CM"),
                       new Power(10,   "X", "L", "XC"),
                       new Power(1,    "I", "V", "IX") };

    // 0 < num <= 3999, the maximum by the rules
    // (without a numeral for 5000)
    private String toRoman (int num) {
        StringBuffer buf = new StringBuffer("");
        int power = 0;
        // loop through the orders of magnitude,
        // appending the representation for this order
        // to the running Roman representation.
        while (num > 0 && power < powers.length) {
            buf.append(powers[power].convert(num));
            num = powers[power].remainder(num);
            power++;
        }

        return buf.toString();
    }

    // container for an order of magnitude, knows how to
    // find the representation given the running number.
    class Power {
        private String one, five, nine;
        private int value;

        public Power (int v, String o, String f, String n) {
            value = v;
            one   = o;
            five  = f;
            nine  = n;
        }

        // calculates the representation for this order
        public String convert (int num) {
            num /= value; // truncate
            // now a few special cases
            if (num == 0) return "";
            if (num == 9) return nine;
            if (num == 4) return one + five;
            if (num == 5) return five;

            // handling of 1-3 and 6-8
            String base = "";
            if (num > 5) {
                num -= 5;
                base = five;
            }
            // I could do this in a loop, appending,
            // but this unrolled form is more efficient,
            // and it's only 3 cases.
            switch (num) {
                case 1: return base + one;
                case 2: return base + one + one;
                case 3: return base + one + one + one;
                default: return ""; // can't happen
            }
        }

        // calculates the remainder, the lower-order
        // number left after calculating this order.
        public int remainder (int num) {
            return num % value;
        }
    } // class Power
} // class RomanNumerals```

But what did I gain by rewriting it in Java? I only did it because that’s what the potential employer needs (I’m sending the Haskell solution too, for fun). Removing that artificial constraint, was there any reason to?

Let’s compare the two implementations:

|                   | Lines | Characters | Binary size (B) | Runtime (s) |
|-------------------|-------|------------|-----------------|-------------|
| Java              | 111   | 2406       | 1448+1170       | 0.177       |
| Haskell (ghc)     | 34    | 947        | 434156          | 0.052       |
| Haskell (ghc -O2) | 34    | 947        | 393531          | 0.052       |

Runtimes measure the time required to print the pairs from 1 to 200, using the Unix `time` command.

So by pretty much all of the usual metrics, the Haskell versions are better. They require fewer lines of code, fewer characters, less execution time. Their binaries are considerably heavier, but on the other hand the Java 6 runtime isn’t being counted here. The same source code will run on any platform supported by Java 6 Standard Edition and GHC respectively, though the Haskell binaries are platform-dependent.

Most vitally, the Haskell version is more type-safe, less likely to throw a runtime exception, source-portable, faster to execute, and faster to write (even though I wrote the Haskell from scratch and merely translated it into Java!).

The only advantage to the Java version that I can see is that my interviewer reads Java but not Haskell. But I consider that a loss for the company, not for Haskell.

## Recursion Fail

April 11, 2008

It’s been a Haskell programming rule of mine for a long time that explicit recursion is an admission of failure. First, linear recursions have already been captured by one of the Prelude (or `Data.List`) functions. Second, recursions or iterations over a data structure are almost always either `fmap` or one of the `Data.Foldable` functions. Third, the list monad (and list comprehensions) capture a common recursion pattern, one found especially in path-finding, alpha-beta AI and other try-every-path problem spaces.

If you’re recursing and lists are involved, the recursion pattern has already been captured in a Prelude or `Data.List` function: `map`, the folds, the scans, `filter`, `find`, `unfoldr` and `group(By)` cover, I think, every loop I’ve ever written in an imperative language.

The dynamic “scripting” languages like Perl and Python have had foreach loops for years, to cover the most common case of looping over every element of an array. Java 5 and C# have this too. That’s a handy bit of syntactic sugar, but it is only a weak replacement for `map`, and doesn’t help at all with any of the others.

Programmers new to FP, including me when I was learning Haskell, often find it hard to think in terms of folds and maps. So go ahead, write the explicitly recursive version. But when you’re done, replace it with the appropriate library function. In time, you won’t need to write it down any more and will think of what you would write, recognize it as one of these higher-order functions, and write that down instead. Eventually, you’ll think in the folds and maps directly.
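A tiny example of that progression, with both drafts side by side:

```haskell
-- First draft: explicit recursion.
sumSquaresRec :: [Int] -> Int
sumSquaresRec []     = 0
sumSquaresRec (x:xs) = x * x + sumSquaresRec xs

-- The same loop, recognized as a map followed by a fold.
sumSquares :: [Int] -> Int
sumSquares = foldr (+) 0 . map (\x -> x * x)
```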

What about when you’re not working on lists? Take another common structure, trees. Or any other data structure, really. Consider this binary tree implementation:

`data Tree a = Empty | Node a (Tree a) (Tree a)`

It’s fairly obvious that we could map a function over the tree. This common pattern is captured by the typeclass Functor. Functors represent “things which can be mapped over”, or containers to whose contents we might like to apply a function. Note that any Monad is also a Functor, though Haskell 98 does not enforce this. Some existing, handy Functors: `Maybe`, lists, functions (`(->) a`), pairs (`(,) a`), `Either a`, `Map k`.

The Functor typeclass contains a single function, `fmap :: (Functor f) => (a -> b) -> f a -> f b`. It turns an ordinary function into one “inside the functor”. For our tree:

```instance Functor Tree where
  fmap _ Empty        = Empty
  fmap f (Node x l r) = Node (f x) (fmap f l) (fmap f r)```

It’s similar in spirit to `map` over lists; in fact, `fmap = map` for lists.

Writing the instance for the Functor or Foldable type classes, which you only do once, has huge maintenance and conciseness advantages over writing the explicit recursion in even two places. The same would be true of a standalone treeMap function for a tree that isn’t a Functor; the extra advantage of the typeclasses is that they provide a standard interface common to many libraries.
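For completeness, here is what the Foldable instance looks like for the same tree. (This is written against today’s Prelude, where `Foldable` and `<>` are in scope by default; in 2008 it needed imports from `Data.Foldable` and `Data.Monoid`.)

```haskell
data Tree a = Empty | Node a (Tree a) (Tree a)

-- An in-order traversal determines the element order of every fold.
instance Foldable Tree where
  foldMap _ Empty        = mempty
  foldMap f (Node x l r) = foldMap f l <> f x <> foldMap f r

example :: Tree Int
example = Node 2 (Node 1 Empty Empty) (Node 3 Empty Empty)
```

With this one instance, `sum`, `length`, `maximum`, `elem` and friends all work on trees for free.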

Finally, many seemingly complex recursion patterns, especially in the path-finding and alpha-beta AI spaces, can be written as a list comprehension or (if you prefer) in list monad do-notation.
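The classic example of that try-every-path pattern is generating Pythagorean triples, shown here in both styles:

```haskell
-- All Pythagorean triples with hypotenuse at most n,
-- as a list comprehension...
triples :: Int -> [(Int, Int, Int)]
triples n = [ (a, b, c) | c <- [1..n], b <- [1..c], a <- [1..b]
            , a*a + b*b == c*c ]

-- ...and the same search in list monad do-notation.
triplesM :: Int -> [(Int, Int, Int)]
triplesM n = do
  c <- [1..n]
  b <- [1..c]
  a <- [1..b]
  if a*a + b*b == c*c then return (a, b, c) else []
```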

## Equational Reasoning Shopping List

April 10, 2008

Pure functional programming has the advantage that one can reason about it much as one does in mathematics. Specifically, if you know two expressions to be equal, they can be freely swapped. This underlies inlining, sharing and the ability to replace any linear recursion with a fold, unfold, zip, etc. as appropriate.

As a first Haskell example, consider the following very-well-known optimization. Given code like

`map f $ map g $ map h list`

or in the more common style:

`map f . map g . map h $ list`

it appears naively that this will traverse the list three times: once to apply h, again to apply g, and finally a third time to apply f.

But because map, f, g and h are all pure functions, they can be replaced with something equivalent without changing the result of the code.

Consider an element of list, say x. After the three maps, it will have been replaced in the final list with `f (g (h x))`, or equivalently `f . g . h $ x`. So we should be able to replace the three maps with one that does the same thing:

`map (f . g . h) list`

And in fact GHC does precisely this.
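The equality is easy to spot-check on sample inputs (and QuickCheck can do so over random ones); the functions `f`, `g` and `h` below are arbitrary pure functions chosen only for illustration:

```haskell
-- Arbitrary pure functions, chosen only for illustration.
f, g, h :: Int -> Int
f = (+ 1)
g = (* 2)
h = subtract 3

-- Three traversals versus one.
lhs, rhs :: [Int] -> [Int]
lhs = map f . map g . map h
rhs = map (f . g . h)
```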

Equational Rules
Following is a collection of rules (and more are most welcome!) for transforming your programs. I have written equality as ==, though of course Haskell functions are not in `Eq`. I have written implication as --> and equivalence/double-implication as <-->.

```map f . map g       == map (f . g)
filter f . filter g == filter (\x -> f x && g x)
                    == filter (liftM2 (&&) f g)
f . f == id --> filter (p . f) == map f . filter p . map f
filter (all p) . cartesianProduct == cartesianProduct . map (filter p)
filter p . concat == concat . map (filter p) == concatMap (filter p)```

And now some rules concerning pairs and the arrow combinators, Haskell translations of ones from Meijer et al.’s seminal paper:

```fst . (f *** g) == f . fst
fst . (f &&& g) == f
-- and the obvious equivalents for snd and g.
fst . h &&& snd . h == h
fst &&& snd == id
(f *** g) . (h &&& j) == (f . h) &&& (g . j)
(f &&& g) . h == (f . h) &&& (g . h)
(f *** g) == (h *** j) <--> f == h && g == j
(f &&& g) == (h &&& j) <--> f == h && g == j```

And more, translated from the same paper, for Either and the arrow combinators using it:

```(f +++ g) . Left == Left . f
(f ||| g) . Left == f
-- and the obvious equivalents for g and Right
{- h strict -} --> (h . Left) ||| (h . Right) == h
Left ||| Right == id
(f ||| g) . (h +++ j) == (f . h) ||| (g . j)
{- f strict -} --> f . (g ||| h) == (f . g) ||| (f . h)
(f ||| g) == (h ||| j) <--> f == h && g == j
(f +++ g) == (h +++ j) <--> f == h && g == j```

And finally the “Abides Law” tying the two together:

`(f &&& g) ||| (h &&& j) == (f ||| h) &&& (g ||| j)`
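These combinators live in `Control.Arrow`, and the laws are easy to spot-check at concrete types (the functions below are arbitrary choices for illustration):

```haskell
import Control.Arrow ((&&&), (***), (|||))

-- Arbitrary pure functions at concrete types, for illustration.
f, g, h, j :: Int -> Int
f = (+ 1)
g = subtract 1
h = (* 2)
j = (`div` 2)

-- fst . (f *** g) == f . fst
law1 :: (Int, Int) -> Bool
law1 p = (fst . (f *** g)) p == (f . fst) p

-- the Abides Law: (f &&& g) ||| (h &&& j) == (f ||| h) &&& (g ||| j)
abides :: Either Int Int -> Bool
abides e = ((f &&& g) ||| (h &&& j)) e == ((f ||| h) &&& (g ||| j)) e
```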

There are many more such rules, and if you have more to suggest I’d love to add them to the list.

## Genesis

April 9, 2008

I’ve accumulated a little backlog of Haskell thoughts looking for an outlet, and I’ve finally created this blog to hold them. Most of these will be papers or Haskell features that have coated the countryside in a fine film of my neurons.

Upcoming posts include:

• A translation of Meijer et al.’s famous “Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire” paper into Haskell, for those (like me) whose abstract notation buffers aren’t up to reading the original.
• A collection (“shopping list”, to borrow a term) of equational rules for functional programming (e.g., `foldr (:) [] == id`). The goal is that you could look here for the functions you’re using, and perhaps spot a rule that can simplify your code or improve your understanding. I’ve found these rules a great source of enlightening brain explosions.
• PlainConfig, my upcoming text file configuration module for xmonad, designed to offer some configuration without the GHC dependency.

Those are in the order they occurred to me; they might appear in any order.