It’s been a Haskell programming rule of mine for a long time that explicit recursion is an admission of failure. First, linear recursions have already been captured in one of the Prelude (or Data.List) functions. Second, recursions or iterations over a data structure are almost always either `fmap` or one of the Data.Foldable functions. Third, the list monad (and list comprehensions) capture a common recursion pattern, one found especially in path-finding, alpha-beta AI, and other try-every-path problem spaces.

If you’re recursing, and lists are involved, the recursion pattern has already been captured in a Prelude or Data.List function. `map`, the `fold*`s, the `scan*`s, `filter`, `find`, `unfoldr` and `group(By)` cover, I think, every loop I’ve ever written in an imperative language.
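For instance, the two most common imperative loops collapse to one library call apiece. A quick sketch (the helper names `total` and `countdown` are made up for illustration):

```haskell
import Data.List (foldl', unfoldr)

-- The imperative "total += x for each x" loop is a strict left fold:
total :: [Int] -> Int
total = foldl' (+) 0

-- The "emit values until a condition fails" loop is an unfold:
countdown :: Int -> [Int]
countdown = unfoldr (\n -> if n <= 0 then Nothing else Just (n, n - 1))
```

`total [1,2,3]` is `6`, and `countdown 3` is `[3,2,1]`; neither needed an explicit recursive call.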

The dynamic “scripting” languages like Perl and Python have had `foreach` loops for years, to cover the most common case of looping over every element of an array. Java 1.5 and C# have this too. That’s a handy bit of syntactic sugar, but it is only a weak replacement for `map`, and doesn’t help at all with any of the others.

Programmers new to FP, including me when I was learning Haskell, often find it hard to think in terms of folds and maps. So go ahead, write the explicitly recursive version. But when you’re done, replace it with the appropriate library function. In time, you won’t need to write it down any more and will think of what you *would* write, recognize it as one of these higher-order functions, and write that down instead. Eventually, you’ll think in the folds and maps directly.

What about when you’re not working on lists? Take another common structure, trees. Or any other data structure, really. Consider this binary tree implementation:

data Tree a = Empty | Node a (Tree a) (Tree a)

It’s fairly obvious that we could map a function over the tree. This common pattern is captured by the typeclass `Functor`. `Functor`s represent “things which can be mapped over”: containers to whose contents we might like to apply a function. Note that any `Monad` is also a `Functor`, though Haskell 98 does not enforce this. Some existing, handy `Functor`s: `Maybe`, lists, functions `((->) a)`, pairs `((,) a)`, `Either a`, `Map k`.
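To make the “mappable container” intuition concrete, here is `fmap` at work on a few of those instances (a small sketch; the `ex*` names are mine):

```haskell
ex1 :: Maybe Int
ex1 = fmap (+1) (Just 2)          -- Maybe: Just 3

ex2 :: [Int]
ex2 = fmap (+1) [1, 2, 3]         -- lists: [2,3,4]

ex3 :: Either String Int
ex3 = fmap (+1) (Right 2)         -- Either a: Right 3

ex4 :: (Int, Int)
ex4 = fmap (+1) (1, 2)            -- pairs ((,) a): maps the second slot, (1,3)

ex5 :: Int
ex5 = fmap (+1) (*2) 5            -- functions ((->) a): composition, (5*2)+1 = 11
```

In each case the “shape” of the container is preserved; only the contents change.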

The `Functor` typeclass contains a single function, `fmap :: (Functor f) => (a -> b) -> f a -> f b`. It turns an ordinary function into one “inside the functor”. For our tree:

instance Functor Tree where
  fmap _ Empty        = Empty
  fmap f (Node x l r) = Node (f x) (fmap f l) (fmap f r)

It’s similar in spirit to `map` over lists, and in fact `fmap = map` for lists.
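Putting the pieces together into a self-contained sketch (repeating the definitions above, and adding a hypothetical `leaf` helper and example tree):

```haskell
data Tree a = Empty | Node a (Tree a) (Tree a)
  deriving (Eq, Show)

instance Functor Tree where
  fmap _ Empty        = Empty
  fmap f (Node x l r) = Node (f x) (fmap f l) (fmap f r)

-- Shorthand for a leaf node:
leaf :: a -> Tree a
leaf x = Node x Empty Empty

example :: Tree Int
example = Node 2 (leaf 1) (leaf 3)

-- One call maps over the whole structure; no explicit recursion at the use site:
doubled :: Tree Int
doubled = fmap (*2) example   -- Node 4 (leaf 2) (leaf 6)
```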

Writing the instance for the `Functor` or `Foldable` typeclass, which you do only once, has huge maintenance and conciseness advantages over writing the explicit recursion in even two places. The same would be true of a `treeMap` function for a tree that isn’t a `Functor`; the advantage of the typeclasses is that they provide a standard interface common to many libraries.

Finally, many seemingly complex recursion patterns, especially in the path-finding and alpha-beta AI spaces, can be written as a list comprehension, or (if you prefer) in list monad do-notation.

[...] only one example. Mu can be applied to any Functor. Using the binary tree implementation from a previous post [...]

The title of your blog post reminded me of an entry on Fail Blog:

http://failblog.org/2008/01/31/recursive-fail/

ps. Blog Entry Success: your posting was interesting, btw! :)

> That’s a handy bit of syntactic sugar, but it is only a weak replacement for map, and doesn’t help at all for any of the others.

It’s perhaps worth pointing out that Perl and Python (and, I think, Ruby) also have a map function. Python’s is rather more general than Haskell’s, and covers all the n-ary zipWith variants.

Also, I believe it’s a theorem that every loop can be expressed as a fold – IIRC, fold has a universal property that ensures this. Whether the fold is clearer than the explicitly recursive version is another question, of course :-)
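(For reference, the universal property being alluded to, with a hand-written `sumR` as a worked example:)

```haskell
-- The universal property of foldr: for fixed f and v, the equations
--   h []     = v
--   h (x:xs) = f x (h xs)
-- have exactly one solution, namely h = foldr f v.

-- This hand-written sum satisfies them with f = (+) and v = 0,
-- so it must agree with foldr (+) 0 on every list:
sumR :: [Int] -> Int
sumR []     = 0
sumR (x:xs) = x + sumR xs
```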

I’m glad you didn’t suggest people strive to write in terms of maps or folds right off the bat. I remember writing lots of code that was really doing folds, but it was only after I had written and refactored it that I could express it as folds. So I just wanted to compliment you on the article and the advice. I believe it is sound, and it provides a gentle slope for people to move up toward higher-order functions. It worked for me.

perldoc -f map

> It’s been a Haskell programming rule of mine for a long time that explicit recursion is an admission of failure.

This is an interesting point. I can appreciate the code minimization benefits of this approach, but it seems like you are implying (without saying it) that there are some performance reasons why higher order functions are better choices than explicit recursions.

If this is true (and correct me if I misunderstood), I’d be interested to know a little more about this. Are there any references that talk about this from a performance perspective? This is the first time I’ve come across the idea of higher-order functions outperforming explicit recursion.

I come from a Java-shop background, and one of the things stressed there is code readability. I’ve always found that mentally digesting folds in someone else’s code is tricky compared with explicit recursion (especially for newbies, like you said),

so if there isn’t a performance benefit in writing recursions as folds, wouldn’t the readability trump the code minimization?

Interested to hear your comments. Thanks,

Rick

Python does have map(), filter() and apply() functions, and you can use them instead of recursion, too

It’s not clear to me how alpha-beta pruning can be expressed as a fold. Could you comment more about this? Thank you!

@pozorvlak, sean, eddie:

I’m well-aware that Perl, Python and similar have a fully general map (and filter/grep and fold/reduce/etc) function. Even more so, Perl has full-strength lexical closures. It’s pretty close to an acceptable functional language (indeed, see Higher Order Perl). Unfortunately the function call overhead (due mostly to list-flattening) makes it rather a slow one. (Does Python have closures (not just lambdas)? I don’t know it as well as I do Perl. Ruby?)

@ pozorvlak:

Any loop, or equivalently, any linear recursion, can be written using foldr. As I say, the various standard HOFs cover most linear recursion patterns; here are some of them redefined in terms of foldr:

map f = foldr (\x xs -> f x : xs) []

filter p = foldr (\x xs -> if p x then x : xs else xs) []

length = foldr (\_ n -> n + 1) 0
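A quick sanity check of those redefinitions against the Prelude versions (a sketch; the primed names are only to avoid clashing with the Prelude):

```haskell
map' :: (a -> b) -> [a] -> [b]
map' f = foldr (\x xs -> f x : xs) []

filter' :: (a -> Bool) -> [a] -> [a]
filter' p = foldr (\x xs -> if p x then x : xs else xs) []

length' :: [a] -> Int
length' = foldr (\_ n -> n + 1) 0
```

All three agree with their Prelude counterparts on any finite list, e.g. `map' (+1) [1,2,3]` is `[2,3,4]` and `length' "abc"` is `3`.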

@ Rick:

I’m not mainly concerned with performance, though of course an optimizing compiler such as GHC has to work a lot less hard to know what map means than to analyze a custom linear recursion, and so is more likely to optimize a map, and more effective at doing so.

I feel that the crux of the article was in fact code readability, both in the eye of the programmer and the reader. I came from an imperative background too, and so as I said in the main article, I found it hard to reason in terms of higher-order functions when I first came to Haskell. When I was reading someone else’s code, I always had to pause and think about what foldr or map meant. Now that I’ve become more accustomed to Haskell, I don’t hesitate anymore: they are perfectly natural.

If this seems to demand more of the programmer than explicit recursion, I understand the point. But to someone who doesn’t know English very well, reading this comment would be difficult. To a fluent speaker, its meaning is plain. I used several English idioms (crux of the article, accustomed to something, to see a point), and while I could have written them more explicitly, it would come at the cost of making the post harder to read for one fluent in English. So I believe it to be with the use of higher-order functions compared to explicit recursion. If someone writes a map explicitly, the reader needs to double-check that the stop condition is the same, that the recursion pattern really is that of map. “map” guarantees all of those things.

@ DeepBlue

It can’t. Try-every-path exhaustive searches are described using the list monad. As an example, finding Pythagorean triples with a given perimeter.

triples n = [ (a, b, c) | a <- [1..n], b <- [1..a], let c = n - a - b, a^2 + b^2 == c^2 ]

This is a list comprehension, which is syntactic sugar for the list monad.
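Desugared into do-notation, the same search reads as follows (a sketch; `guard` comes from Control.Monad and prunes the paths where the condition fails):

```haskell
import Control.Monad (guard)

-- Pythagorean triples (a, b, c) with a >= b and perimeter n:
triples :: Int -> [(Int, Int, Int)]
triples n = do
  a <- [1 .. n]
  b <- [1 .. a]
  let c = n - a - b
  guard (a ^ 2 + b ^ 2 == c ^ 2)
  return (a, b, c)
```

For example, `triples 12` is `[(4,3,5)]` — the familiar 3-4-5 triangle.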

In the case of an alpha-beta AI, I now know that you can’t quite use a list comprehension, as they model an exhaustive search. One that short-circuits certain paths when they become too unfavourable can be defined using a similar monad, though obviously the syntax won’t be as short as list comprehensions, since you also need to specify the cutoff for an unfavourable result, a possible short-circuiting if the best possible value has already been seen, etc. The code here gives a working alpha-beta and short-circuiting model.

dons, how would you abstract out a data structure traversal where at each step we make a decision where to go next? (For lists, you have two choices: stop, or next element. For binary trees, you can go to either child or stop. Etc.)

I’m definitely not dons. His blog is here.

As to how to traverse a tree or other nonlinear structure, there are two answers. The first is that the Functor instance decides on the order. The other is that it usually doesn’t matter, due to purity. If the traversal has no side effects, the order of evaluation doesn’t matter. Whole parts of it (up to and including the entire traversal) may in fact never be executed, or may be optimized away.

And so, the standard instance of Functor for a binary tree like mine is:

instance Functor Tree where
  fmap _ Empty        = Empty
  fmap f (Node a l r) = Node (f a) (fmap f l) (fmap f r)

which appears a priori to implement a preorder traversal. On a second look, though, since Haskell is non-strict, no particular order is guaranteed. However, the order is guaranteed to be fixed, since purity means it will have the same result every time you call it.

Python _doesn’t_, however, allow multi-line anonymous functions.
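Returning to the traversal question: a `Foldable` instance is where you commit to one concrete visiting order. Here is a sketch that picks an in-order (left, node, right) traversal for the same tree type; a preorder or postorder instance would differ only in where `f a` is placed:

```haskell
import Data.Foldable (toList)

data Tree a = Empty | Node a (Tree a) (Tree a)

instance Foldable Tree where
  foldr _ z Empty        = z
  -- Fold the right subtree, combine the node's value, then fold the left:
  -- this yields a left-to-right in-order traversal.
  foldr f z (Node a l r) = foldr f (f a (foldr f z r)) l
```

With this instance, `toList (Node 2 (Node 1 Empty Empty) (Node 3 Empty Empty))` is `[1,2,3]`, and every `Data.Foldable` function (sum, maximum, elem, …) works on trees for free.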