What's wrong with this Interpolation search implementation?
This is a common C/C++ implementation of the Interpolation Search algorithm found around the Internet. However, when used with a sorted array of some 100000 integers, the mid variable starts generating negative array indexes, causing a segmentation fault. What could the problem be?
#include <stdlib.h>
#include <stdio.h>
#include <time.h>

int interpolationSearch(int sortedArray[], int toFind, int len) {
    // Returns index of toFind in sortedArray, or -1 if not found
    int low = 0;
    int high = len - 1;
    int mid;

    while (sortedArray[low] <= toFind && sortedArray[high] >= toFind) {
        mid = low + ((toFind - sortedArray[low]) * (high - low)) /
                    (sortedArray[high] - sortedArray[low]);

        if (sortedArray[mid] < toFind) {
            low = mid + 1;
        } else if (sortedArray[mid] > toFind) {
            high = mid - 1;
        } else {
            return mid;
        }
    }

    if (sortedArray[low] == toFind)
        return low;
    else
        return -1; // Not found
}
// Ascending-order comparator for qsort
int order(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

int main(void) {
    srand(time(0));

    int arr[100000];
    for (int i = 0; i < 100000; i++) {
        arr[i] = rand() % 100000;
    }

    int length = sizeof(arr) / sizeof(int);
    qsort(arr, length, sizeof(int), order);

    for (int j = 0; j < 10000; j++) {
        interpolationSearch(arr, rand() % 100000, length);
    }
}
The sub-expression ((toFind - sortedArray[low]) * (high - low)) can easily evaluate to something like (99999 - 0) * (99999 - 0) == 99999^2, which is much larger than 2^31 (the upper limit of 32-bit signed integers). Once it exceeds 2^31 - 1, the integer will overflow into negative numbers, hence your negative indices. If it exceeds 2^32 (which it also could do), the behavior is technically undefined, but most likely you'll lose the high-order bits and end up with effectively random offsets, both positive and negative.
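A minimal standalone demonstration of the wrap-around (note that signed overflow is undefined behavior, so the exact garbage value depends on the compiler):

#include <stdio.h>

int main(void) {
    int a = 99999, b = 99999;
    int wrapped = a * b;                 /* 9999800001 does not fit in 32 bits: undefined behavior,
                                            typically a meaningless (often negative) value */
    long long exact = (long long)a * b;  /* 64-bit math holds the true product */
    printf("32-bit product: %d\n", wrapped);
    printf("64-bit product: %lld\n", exact);
    return 0;
}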
To avoid all of this, you need to do your math carefully to make sure none of your sub-expressions yield an integer overflow. Usually the easiest way to do this is to convert to floating-point whose range is many orders of magnitude larger than 32-bit integers.
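For instance, a sketch of the mid computation done in double precision (the helper name is illustrative, and the guard against a zero value span is an addition):

/* Interpolate the probe index in double precision so the
 * (toFind - a[low]) * (high - low) product cannot overflow. */
static int interpolate_mid_double(const int a[], int low, int high, int toFind) {
    if (a[high] == a[low])          /* all remaining values equal: avoid dividing by zero */
        return low;
    double offset = (double)(toFind - a[low]) * (high - low);
    return low + (int)(offset / (a[high] - a[low]));
}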
In the final analysis, interpolation such as this for binary search is usually not worth it -- the expense of computing the interpolant is typically greater than the few extra iterations of the loop that it "saves".
As the other answers have explained, you're trying to compute an expression of the form A * B / C, but this goes wrong because A * B overflows. The suggestion to revise the expression to A * (B / C) won't work, because typically B is less than C, and so the integer division will truncate to zero.
The suggestion to switch to floating-point would work, but would be costly. But you could use fixed point by transforming the expression to A * ((B * F) / C) / F, where F is a carefully chosen power of two.
The problem is with the expression that computes mid. The product can easily overflow even with 32-bit integers, and then it becomes negative. It would probably be better to perform the division before the product. Changing the mid computation to use 64-bit integers (at least for the intermediate computations) fixes the problem.
Below is my modified version (int64_t is defined in <stdint.h>):
#include <stdint.h>

int interpolationSearch(int sortedArray[], int toFind, int len) {
    // Returns index of toFind in sortedArray, or -1 if not found
    int low = 0;
    int high = len - 1;
    int mid;
    int l = sortedArray[low];
    int h = sortedArray[high];

    while (l <= toFind && h >= toFind) {
        int64_t high_low = (high - low);
        int64_t toFind_l = (toFind - l);
        int64_t product  = high_low * toFind_l;
        int64_t h_l      = h - l;
        int64_t step     = product / h_l;
        mid = low + step;
        /* mid = (low + high)/2; */

        int m = sortedArray[mid];
        if (m < toFind) {
            l = sortedArray[low = mid + 1];
        } else if (m > toFind) {
            h = sortedArray[high = mid - 1];
        } else {
            return mid;
        }
    }

    if (sortedArray[low] == toFind)
        return low;
    else
        return -1; // Not found
}
An even simpler fix would be to make it a dichotomic (binary) search instead of an interpolation search by just using mid = (low + high) / 2. Even if it converges slightly more slowly than interpolation, it avoids several operations, including a product and a division, thus making the inner loop faster. I'm not sure the potentially faster convergence of interpolation compensates for that loss of simplicity.
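For reference, a sketch of that dichotomic variant with the same interface (the mid = low + (high - low) / 2 form is equivalent here and also avoids low + high itself overflowing):

/* Plain binary search: index of toFind in sortedArray, or -1 if not found. */
int binarySearch(int sortedArray[], int toFind, int len) {
    int low = 0;
    int high = len - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;   /* no product, no data-dependent division */
        if (sortedArray[mid] < toFind)
            low = mid + 1;
        else if (sortedArray[mid] > toFind)
            high = mid - 1;
        else
            return mid;
    }
    return -1; // Not found
}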
I did some performance tests. The source of my test program is included in this question.
Surprisingly (for me), using floats gives a more efficient program than using large integers. On my system binary search became faster for arrays of about 1000 items. For arrays of size 100000, interpolation search is nearly two times faster than simple binary search.