Does using functional languages help against computing values repeatedly?
Consider a function f(x,y):
f(x,0) = x*x;
f(0,y) = y*(y + 1);
f(x,y) = f(x,y-1) + f(x-1,y);
If one tries to implement this recursively in a language like C++, they will run into a problem.
Suppose the function is first called with x = x0 and y = y0. Then for any pair (x, y) with 0 <= x < x0 and 0 <= y < y0, the intermediate values will be computed multiple times: the recursive calls form a huge tree in which many leaves actually contain the same pair (x, y), and for pairs where x and y are both close to 0 the values are recomputed a vast number of times.
For instance, I tested a similar function implemented in C++: for x = 20 and y = 20 its computation takes about 4 hours (yes, four Earth hours!).
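For reference, a naive recursive implementation along those lines might look like the following minimal sketch (the name f_naive is my own, and long long is used to reduce the risk of overflow for larger inputs):

long long f_naive(long long x, long long y)
{
    if (y == 0)
        return x * x;           // f(x,0) = x*x
    if (x == 0)
        return y * (y + 1);     // f(0,y) = y*(y+1)
    return f_naive(x, y - 1) + f_naive(x - 1, y);   // f(x,y) = f(x,y-1) + f(x-1,y)
}

Every call with x > 0 and y > 0 spawns two further calls, so the call tree grows exponentially even though there are only about (x0 + 1) * (y0 + 1) distinct pairs.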
Obviously the implementation can be rewritten in such a way that repeated computation doesn't occur, either iteratively or with a cache table.
The question is: will functional languages perform any better and avoid repeated computations when a function like the one above is implemented recursively?
The term you're looking for here is Memoization:
In computing, memoization is an optimization technique used primarily to speed up computer programs by having function calls avoid repeating the calculation of results for previously-processed inputs.
No, functional languages do not automatically implement memoization. You can implement it in them, but you can in C++ as well. It is true, however, that when you write purely functional code (i.e. with no side effects), memoization is easier. Some dynamic languages (Perl, for instance) have auto-memoization modules that can easily memoize any function. There's a discussion of this subject in the Automatic memoization section of the Wikipedia article.
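As a rough illustration of what such automatic memoization could look like in C++, here is a minimal sketch of a generic wrapper (the name memoize and the std::map-based cache are my own assumptions, not a standard facility):

#include <functional>
#include <map>
#include <memory>

// Wrap a long -> long function with a cache keyed by its argument.
std::function<long(long)> memoize(std::function<long(long)> fn)
{
    auto cache = std::make_shared<std::map<long, long>>();
    return [cache, fn](long n) -> long {
        auto it = cache->find(n);
        if (it != cache->end())
            return it->second;   // reuse a previously computed result
        long result = fn(n);
        (*cache)[n] = result;    // remember it for the next call
        return result;
    };
}

Note that this only caches calls that go through the returned wrapper; recursive calls made inside the wrapped function bypass the cache unless the recursion itself is routed through the memoized object.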
For example, here's a naive C++ Fibonacci:
// Naive recursion: the two recursive calls keep recomputing the same subproblems.
long fib_rec(long index)
{
    if (index < 2)
        return index;
    else
        return fib_rec(index - 1) + fib_rec(index - 2);
}
And here's a memoized version:
#include <vector>
using std::vector;

long fib_memoized_aux(vector<long>& cache, long index)
{
    if (cache[index] >= 0)
        return cache[index];   // result already cached
    cache[index] = fib_memoized_aux(cache, index - 1) + fib_memoized_aux(cache, index - 2);
    return cache[index];
}

long fib_memoized(long index)
{
    if (index < 2)
        return index;                      // avoids writing past the cache for 0 and 1
    vector<long> cache(index + 1, -1);     // -1 marks "not computed yet"
    cache[0] = 0;
    cache[1] = 1;
    return fib_memoized_aux(cache, index);
}
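The same cache-table idea carries over directly to the f(x, y) from the question. A hedged sketch (the names f_memo and f_memo_aux are my own; a 2D vector filled with -1 serves as the table, which is safe here because f never produces negative values):

#include <vector>

long long f_memo_aux(std::vector<std::vector<long long>>& cache, long long x, long long y)
{
    if (cache[x][y] >= 0)
        return cache[x][y];                 // already computed
    long long result;
    if (y == 0)
        result = x * x;                     // f(x,0) = x*x
    else if (x == 0)
        result = y * (y + 1);               // f(0,y) = y*(y+1)
    else
        result = f_memo_aux(cache, x, y - 1) + f_memo_aux(cache, x - 1, y);
    cache[x][y] = result;                   // store for reuse
    return result;
}

long long f_memo(long long x, long long y)
{
    std::vector<std::vector<long long>> cache(x + 1, std::vector<long long>(y + 1, -1));
    return f_memo_aux(cache, x, y);
}

With the table in place, each pair (x, y) is computed at most once, so the x = 20, y = 20 case that took hours with the naive recursion finishes practically instantly.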