Trying to get my head around recursion in Haskell?

I have used many recursive functions by now but still have trouble getting my head around how such a function exactly works. I'm familiar with the second line (i.e. | n==0 = 1) but not so much with the final line (i.e. | n>0 = fac (n-1) * n).

fac :: Int -> Int
fac n
    | n==0 = 1
    | n>0  = fac (n-1) * n


Recursive algorithms are very closely linked to mathematical induction. Perhaps studying one will help you better understand the other.

You need to keep two key principles in mind when using recursion:

  • Base Case
  • Inductive Step

The Inductive Step is often the most difficult piece, because it assumes that everything it relies upon has already been computed correctly. Making this leap of faith can be difficult (at least it took me a while to get the hang of it), but it is justified only because we've put preconditions on our functions; those preconditions (in this case, that n is a non-negative integer) must hold so that the Inductive Step and Base Case are always valid.

The Base Case is also sometimes difficult: say, you know that the factorial N! is N * (N-1)!, but how exactly do you handle the first step on the ladder? (In this case, it is easy: define 0! := 1. This explicit definition provides you with a way to terminate the recursive application of your Inductive Step.)

You can see that the type signature and guards in this function provide the preconditions that guarantee the Inductive Step can be applied over and over again until it reaches the Base Case, n == 0. If the preconditions can't be met, recursive application of the Inductive Step would fail to reach the Base Case, and your computation would never terminate. (Well, it would when it runs out of memory. :)
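
As an aside, one way to make that precondition explicit is an extra guard. A minimal sketch of my own (not part of the question's code):

fac :: Int -> Int
fac n
    | n == 0    = 1
    | n > 0     = fac (n - 1) * n
    | otherwise = error "fac: negative argument"   -- precondition made explicit

Now a caller who violates the precondition gets a clear message instead of a generic pattern-match failure.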

One complicating factor, especially with functional programming languages, is the very strong desire to re-write all 'simple' recursive functions, as you have here, with variants that use Tail Calls or Tail Recursion.

Because this function calls itself, and then performs another operation on the result, you can build a call-chain like this:

fac 3        3 * fac 2
  fac 2      2 * fac 1
    fac 1    1 * fac 0
      fac 0  1
    fac 1    1
  fac 2      2
fac 3        6

That deep call stack takes up memory; but a compiler that notices a function does no further work after its recursive call can optimize the calls away. Such functions typically pass along an accumulator argument. A fellow stacker has a very nice example: Tail Recursion in Haskell

factorial 0 c = c                        -- base case (at 0, so factorial 0 also terminates)
factorial k c = factorial (k-1) (c*k)    -- tail call: the pending work moves into c
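
To give it the same one-argument interface as fac, you would wrap it so the accumulator starts at 1; a small sketch (the wrapper name facTail is mine):

facTail :: Int -> Int
facTail n = factorial n 1   -- seed the accumulator with the multiplicative identity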

This very complicated change :) means that the previous call chain is turned into this:

fac 3 1         fac 2 3
  fac 2 3       fac 1 6
    fac 1 6     fac 0 6
      fac 0 6   6

(The nesting is there just for show; the runtime system wouldn't actually store details of the execution on the stack.)

This runs in constant stack space, regardless of the value of n, and thus this optimization can convert 'impossible' algorithms into 'possible' ones. You'll see this kind of technique used extensively in functional programming, much as you'd see char * frequently in C programming or yield frequently in Ruby programming.
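
One caveat specific to Haskell's laziness (my own aside, with a made-up name facStrict): the accumulator itself can pile up as a chain of unevaluated thunks unless you force it, so strictness is what actually delivers the constant space. A sketch using bang patterns:

{-# LANGUAGE BangPatterns #-}

-- The bang (!) forces the accumulator at each step, so no chain of
-- (acc * k) thunks builds up on the heap.
facStrict :: Int -> Int
facStrict n = go n 1
  where
    go 0 !acc = acc
    go k !acc = go (k - 1) (acc * k)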


When you write | condition = expression it introduces a guard. The guards are tried in order from top to bottom until a true condition is found, and the corresponding expression is the result of your function.

This means that if n is zero, the result is 1; otherwise, if n > 0, the result is fac (n-1) * n. If n is negative, no guard matches and you get a runtime error for the non-exhaustive patterns.
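
To see the top-to-bottom order in isolation, here is a tiny sketch of my own (the name classify is made up):

classify :: Int -> String
classify n
    | n == 0    = "zero"
    | even n    = "even"   -- tried only if the guard above was False
    | otherwise = "odd"

classify 0 evaluates to "zero" even though 0 is also even, because only the first true guard's expression is used.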

Once you've determined which expression to use, it's just a matter of substituting in the recursive calls to see what's going on.

fac 4
(fac 3) * 4
((fac 2) * 3) * 4
(((fac 1) * 2) * 3) * 4
((((fac 0) * 1) * 2) * 3) * 4
(((1 * 1) * 2) * 3) * 4
((1 * 2) * 3) * 4
(2 * 3) * 4
6 * 4
24
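
If you'd rather watch those calls happen than expand them by hand, the standard Debug.Trace module can print each call as it is made. A quick sketch (facTraced is my own name; trace is for debugging only, as it performs I/O behind the scenes):

import Debug.Trace (trace)

-- Prints "fac n" at each recursive step before computing it.
facTraced :: Int -> Int
facTraced n
    | n == 0 = 1
    | n > 0  = trace ("fac " ++ show n) (facTraced (n - 1) * n)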


Especially for more complicated cases of recursion, the trick to save your mental health is not to follow the recursive calls, but just to assume that they "do the right thing". E.g. in your fac example, you want to compute fac n. Imagine you already have the result fac (n-1). Then it's trivial to calculate fac n: just multiply it by n. But the magic of induction is that this reasoning actually works (as long as you provide a proper base case in order to terminate the recursion). So e.g. for Fibonacci numbers, just look at what the base case is, and assume that you are able to calculate the function for all numbers smaller than n:

fib 0 = 0
fib 1 = 1
fib n = fib (n-1) + fib (n-2)

See? You want to calculate fib n. That would be easy if you knew fib (n-1) and fib (n-2). But you can simply assume you are able to calculate them, and that the "deeper levels" of recursion do the "right thing". So just use them; it will work.

Note that there are much better ways to write this function: as written, many values are recalculated over and over, and the number of calls grows exponentially in n.
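
For instance, one standard fix is to compute the sequence once and share it. A sketch (fibs is the customary name for this idiom; fib' is mine):

-- The whole sequence, defined lazily in terms of itself; each element
-- is computed only once and then shared.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

fib' :: Int -> Integer
fib' n = fibs !! n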

BTW: The "best" way to write fac would be fac n = product [1..n].


What's throwing you? Maybe the guards (the |) are confusing things.

You can think of the guards loosely as a chain of ifs, or a switch statement (the differences being that only one branch can run, and that the whole thing directly evaluates to a result: it does not perform a series of tasks, and certainly has no side effects; it just evaluates to a value).

Translated into imperative-like pseudo-code:

Fac n:
   if n == 0: return 1
   if n > 0: return n * (result of calling fac w/ n decreased by one)
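
Back in Haskell, that same chain of ifs can be written literally with if/then/else. A sketch equivalent to the guarded version (facIf is my own name, and like the original it assumes n is non-negative):

facIf :: Int -> Int
facIf n =
    if n == 0
        then 1
        else n * facIf (n - 1)   -- n decreased by one, result multiplied back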

The call tree from the other poster looks like it could be helpful. Do yourself a favor and really walk through it.
