Is nested recursion possible, or should we avoid recursion?
I came across a question like this:
- F(1) = 1
- F(2n) = F(n)
- F(2n+1) = F(n) + F(n+1)
Develop a recursive program to compute F
Some user mentioned using two recursive function calls:
def calc(n):
    if n == 1:
        return 1
    elif n % 2 == 0:
        return calc(n // 2)
    else:
        return calc(n // 2) + calc(n // 2 + 1)  # NESTED RECURSION
Is this logic correct? Wouldn't the algorithm take exponentially long? I thought of a simple iterative version instead:
def calc(count):
    result = [0, 1]  # result[1] = 1; index 0 is unused padding
    for n in range(2, count + 1):
        if n % 2 == 0:
            result.append(result[n // 2])
        else:
            result.append(result[n // 2] + result[n // 2 + 1])
    return result[count]
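For reference, here is a quick self-contained check I ran, listing the first few values of F straight from the definition (using the recursive version):

```python
def calc(n):
    # Direct transcription of the recurrence:
    # F(1) = 1, F(2n) = F(n), F(2n+1) = F(n) + F(n+1)
    if n == 1:
        return 1
    elif n % 2 == 0:
        return calc(n // 2)
    else:
        return calc(n // 2) + calc(n // 2 + 1)

print([calc(n) for n in range(1, 9)])  # -> [1, 1, 2, 1, 3, 2, 3, 1]
```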
Both of these approaches are correct. It is indeed legal to have multiple recursive calls from a function, and the meaning is what you'd expect: the calls are simply made one after another.
Interestingly, I don't think the recursive version makes exponentially many calls. It makes at most two recursive calls, but each is on a problem whose size is (approximately) half that of the original call. Essentially, the recurrence looks like this:
T(1) = 1
T(2) = 1
T(n) <= T(n / 2) + T(n / 2 + 1) + 1
I use "less than or equal to" here to say that in the best case you might make just one call, but in the worst case you make at most two.
I want to prove that T(n) <= max{cn + d, a} for some constants c, d, and a. This would show that T(n) = O(n), and thus that the function makes at most linearly many calls. As our base case, we have that
T(1) = 1
T(2) = 1
so let's set a = 1. For the inductive step, we consider three cases. First, consider when both recursive calls land on base cases (this happens when n = 3 or n = 4):
T(n) <= T(n / 2) + T(n / 2 + 1) + 1
<= 1 + 1 + 1
<= 3
If we assume that cn + d >= 3 when n = 3 or n = 4, then this case works out correctly. In particular, this means that 3c + d >= 3 and 4c + d >= 3.
In the next case, let's see what happens when the first call hits a base case but the second does not, i.e. floor(n / 2) <= 2 and floor(n / 2) + 1 >= 3. Then we have that
T(n) <= T(n / 2) + T(n / 2 + 1) + 1
<= 1 + max{c(n / 2 + 1) + d, 1} + 1
<= 2 + max{c(n / 2 + 1) + d, 1}
<= 3 + c(n / 2 + 1) + d
So if we have that 3 + c(n / 2 + 1) + d <= cn + d, this claim still holds. Note that we're only in this case if n = 5, and so this means that we must have that
3 + c(n / 2 + 1) + d <= cn + d
3 + c(n / 2 + 1) <= cn
3 + c(5 / 2 + 1) <= 5c
3 + 5c/2 + c <= 5c
3 + 7c/2 <= 5c
3 <= 3c / 2
2 <= c
So we must have that c >= 2.
And finally, the case where neither n / 2 nor n / 2 + 1 is less than three:
T(n) <= T(n / 2) + T(n / 2 + 1) + 1
<= c(n / 2) + d + c(n / 2 + 1) + d + 1
<= cn / 2 + cn / 2 + c + 2d + 1
= cn + c + 2d + 1
This is less than cn + d if
cn + c + 2d + 1 <= cn + d
c + 2d + 1 <= d
c + d + 1 <= 0
This works if d = -c - 1.
From earlier, we know that 3c + d >= 3, which works if 2c - 1 >= 3, or 2c >= 4, so c >= 2. We also have that 4c + d >= 3, which also works if c >= 2. Letting c = 8 / 3, we get that d = -11 / 3, so
T(n) <= max{8n/3 - 11/3, 1}
So T(n) = O(n) and the recursion makes only linearly many calls.
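To see this empirically, here is a small instrumented sketch (my addition; the counter variable is mine) that counts how many times the plain recursion calls itself:

```python
call_count = 0

def calc(n):
    # The plain doubly-recursive version, instrumented with a call counter.
    global call_count
    call_count += 1
    if n == 1:
        return 1
    elif n % 2 == 0:
        return calc(n // 2)
    else:
        return calc(n // 2) + calc(n // 2 + 1)

calc(1000)
print(call_count)  # -> 50
```

Only 50 calls for n = 1000, which is consistent with the linear bound and nowhere near an exponential blowup.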
The short version of this is that both the recursive and iterative versions take linear time. Don't fear recursion's exponential-time blowup unless you're sure it really is exponential. :-) Though admittedly, in this case, I like the iterative version more. It's clearer, more intuitive, and more immediately O(n).
Recursion isn't bad. Think of recursion as a tool. Some problems can be solved quite nicely with recursion (e.g. Fibonacci numbers), while others don't have a recursive bent (e.g. a worklist-type algorithm).
Where people go awry with recursion is in thinking that a recursive solution will automatically look more impressive. I'd say: solve the problem with the simplest approach possible.
Sometimes that approach is recursion, sometimes it isn't.
Also, a recursive approach can be harder to understand, so you need to take readability into consideration.
It is easier to estimate the performance of your second algorithm: it is clearly O(n) in both space and time, with a small constant. But that does not mean the second algorithm is faster than the first. In fact, it isn't!
In Python 2.6.5, your second algorithm is 25x slower than the first at calculating calc(1000), 80x slower at calc(10000), and 1712x slower at calc(65535).
The worst-case behavior of the first algorithm seems to occur for numbers like 54613 = 1101010101010101 in binary. Even for that value, the first algorithm is still 16x faster than the second.
In the simple recursive algorithm, calc(1000) involves only 50 calls to calc, and even calc(54613) requires only 5166.
This O(log n) algorithm is faster still:
def C(x):
    a = 1
    b = 0
    # Loop invariant: we are going to return a*C(x) + b*C(x + 1).
    while x >= 2:
        if x % 2 == 0:
            a += b
        else:
            b += a
        x //= 2
    return a + b
The performance of this algorithm is easy to evaluate, but its correctness isn't. ;-)
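One way to gain confidence in its correctness (my addition) is to check C mechanically against the straightforward recursion over a range of inputs:

```python
def C(x):
    # The O(log n) iterative version from above.
    a, b = 1, 0
    while x >= 2:
        if x % 2 == 0:
            a += b
        else:
            b += a
        x //= 2
    return a + b

def calc(n):
    # The straightforward recursive version, used as a reference.
    if n == 1:
        return 1
    elif n % 2 == 0:
        return calc(n // 2)
    else:
        return calc(n // 2) + calc(n // 2 + 1)

assert all(C(x) == calc(x) for x in range(1, 300))
print("C agrees with the recursive definition on 1..299")
```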
You can in fact write a recursive version which is sublinear: O(log n).
The key is to notice that you can compute two values for n and n+1 given the two values for [n/2] and [n/2]+1.
So if you think of the two values as one tuple T(n) = (F(n), F(n+1)), then given T([n/2]), you can compute T(n).
So your function will be something like
Tuple F(int n) {
    Tuple result;
    // Base case: F(1) = 1 and F(2) = F(1) = 1.
    if (n == 1) {
        result.First = 1;
        result.Second = 1;
        return result;
    }
    if (n is even) {
        Tuple t = F(n/2);
        result.First = t.First;
        result.Second = t.First + t.Second;
        return result;
    } else {  // n is odd
        Tuple t = F((n-1)/2);
        result.First = t.First + t.Second;
        result.Second = t.Second;
        return result;
    }
}
This way you end up making just one recursive call per level, and since you halve the input size with each call, the algorithm is O(log n).
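As a runnable illustration, here is my Python transcription of the pseudocode above (the names F_pair and F are mine):

```python
def F_pair(n):
    # Returns (F(n), F(n + 1)) for the recurrence
    # F(1) = 1, F(2n) = F(n), F(2n+1) = F(n) + F(n+1).
    if n == 1:
        return (1, 1)  # F(1) = 1, F(2) = F(1) = 1
    first, second = F_pair(n // 2)  # note: n // 2 == (n - 1) // 2 for odd n
    if n % 2 == 0:
        # n = 2m: F(n) = F(m), F(n+1) = F(m) + F(m+1)
        return (first, first + second)
    else:
        # n = 2m+1: F(n) = F(m) + F(m+1), F(n+1) = F(m+1)
        return (first + second, second)

def F(n):
    return F_pair(n)[0]

print([F(n) for n in range(1, 9)])  # -> [1, 1, 2, 1, 3, 2, 3, 1]
```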
Exercise: Give an O(log n) time algorithm for Fibonacci numbers using this trick.
Hint: Use the identities F(2n) = F(n) * (2*F(n+1) - F(n)) and F(2n+1) = F(n)^2 + F(n+1)^2.
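One possible sketch of that trick for Fibonacci numbers (my own code, using the convention F(0) = 0, F(1) = 1):

```python
def fib_pair(n):
    # Returns (F(n), F(n + 1)), with F(0) = 0, F(1) = 1.
    if n == 0:
        return (0, 1)
    a, b = fib_pair(n // 2)  # a = F(k), b = F(k+1), where k = n // 2
    c = a * (2 * b - a)      # F(2k)   = F(k) * (2*F(k+1) - F(k))
    d = a * a + b * b        # F(2k+1) = F(k)^2 + F(k+1)^2
    if n % 2 == 0:
        return (c, d)
    else:
        return (d, c + d)

def fib(n):
    return fib_pair(n)[0]

print(fib(10), fib(20))  # -> 55 6765
```

Like the tuple version above, it makes one recursive call per level and halves n each time, so it runs in O(log n) arithmetic operations.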