Tail Recursion in Haskell

I am trying to understand tail recursion in Haskell. I think I understand what it is and how it works, but I'd like to make sure I am not messing things up.

Here is the "standard" factorial definition:

factorial 1 = 1
factorial k = k * factorial (k-1)

When running, for example, factorial 3, my function will call itself 3 times (give or take). This might pose a problem if I wanted to calculate factorial 99999999, as I could get a stack overflow. After I reach factorial 1 = 1 I have to "come back" up the stack and multiply all the values, so I have 6 operations (3 for calling the function itself and 3 for multiplying the values).
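For example, this is how I picture the evaluation of factorial 3 (just my hand-traced sketch):

factorial 3
= 3 * factorial 2
= 3 * (2 * factorial 1)
= 3 * (2 * 1)
= 6

Every multiplication stays pending until the base case is reached, which is the "coming back" part.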

Now I present another possible factorial implementation:

factorial 1 c = c
factorial k c = factorial (k-1) (c*k)

This one is recursive too, and it will also call itself 3 times. But it doesn't have the problem of then still having to "come back" and multiply all the results, as I am already passing the accumulated result as an argument to the function.
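Traced the same way (assuming the accumulator is evaluated at each step, as it would be in a strict language):

factorial 3 1
= factorial 2 3
= factorial 1 6
= 6

Nothing is left pending, so each call can simply replace the previous one.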

This is, from what I've understood, what tail recursion is about. Now, it seems a bit better than the first version, but it looks like you could still get stack overflows just as easily. I've heard that Haskell's compiler will convert tail-recursive functions into loops behind the scenes. I guess that is the reason why it pays off to write tail-recursive functions?

If that is the reason, then there is no need to try to make functions tail recursive when the compiler is not going to do this smart trick, am I right? For example, although in theory the C# compiler could detect and convert tail-recursive functions to loops, I know (at least that's what I've heard) that currently it doesn't do that. So in C# there would be no point nowadays in making functions tail recursive. Is that it?


There are two issues here. One is tail recursion in general, and the other is how Haskell handles things.

Regarding tail recursion, you seem to have the definition correct. The useful part is that, because only the final result of each recursive call is needed, earlier calls don't need to be kept on the stack. Instead of "calling itself", the function does something closer to "replacing" itself, which ends up looking pretty much like an iterative loop. This is a straightforward optimization that decent compilers will generally provide.

The second issue is lazy evaluation. Because Haskell only evaluates expressions on an as-needed basis, by default tail recursion doesn't quite work the usual way. Instead of replacing each call as it goes, it builds up a huge nested pile of "thunks", that is, expressions whose value hasn't been requested yet. If this thunk pile gets big enough, it will indeed produce a stack overflow.
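To make that concrete, here is roughly what the accumulator version from the question builds up under lazy evaluation (a hand-traced sketch):

factorial 3 1
= factorial 2 (1*3)
= factorial 1 ((1*3)*2)
= (1*3)*2

None of the multiplications are performed until the very end, and for a large starting value forcing that deferred chain is what overflows the stack, not the recursion itself.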

There are actually two solutions in Haskell, depending on what you need to do:

  • If the result consists of nested data constructors, like producing a list, then you want to avoid tail recursion; instead, put the recursive call inside one of the constructor fields (this is sometimes called guarded recursion). This lets the result also be consumed lazily and won't cause stack overflows.

  • If the result consists of a single value, you want to evaluate it strictly, forcing the accumulator at each step of the recursion so that no thunks build up. This gives the usual pseudo-iteration you'd expect from tail recursion. Both cases are sketched below.
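As a rough sketch of both cases (countdown and go are just illustrative names, and the bang patterns require the BangPatterns extension; seq would work as well):

{-# LANGUAGE BangPatterns #-}

-- Guarded recursion: the recursive call sits inside the (:) constructor,
-- so the list can be consumed lazily and no stack frames pile up.
countdown :: Integer -> [Integer]
countdown 0 = []
countdown n = n : countdown (n - 1)

-- Strict accumulator: the bang pattern forces the running product at each
-- step, so the tail call behaves like the iteration you'd expect.
factorial :: Integer -> Integer
factorial n = go n 1
  where
    go 0 !acc = acc
    go k !acc = go (k - 1) (acc * k)

(Data.List.foldl' packages up the same strict-accumulator pattern, as the other answer shows.)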

Also, keep in mind that GHC is pretty darn clever and, if you compile with optimizations, it will often spot places where evaluation should be strict and take care of it for you. This won't work in GHCi, though.


You should use the built-in mechanisms; then you don't have to think about ways to make your function tail recursive:

fac 0 = 1
fac n = product [1..n]

Or if product weren't already defined:

import Data.List (foldl')

fac n = foldl' (*) 1 [1..n]

(see http://www.haskell.org/haskellwiki/Foldr_Foldl_Foldl%27 for which of foldr, foldl, and foldl' to use)
