Is this searching algorithm optimal?
I have two lists, L and M, each containing thousands of 64-bit unsigned integers. I need to find out whether the sum of any two members of L is itself a member of M.
Is it possible to improve upon the performance of the following algorithm?
Sort(M)
for i = 0 to Length(L) - 1
    for j = i + 1 to Length(L) - 1
        BinarySearch(M, L[i] + L[j])
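For concreteness, a direct Java rendering of this pseudocode might look like the following sketch (Java has no unsigned 64-bit type, so it uses long and ignores possible wrap-around in the sum):

```java
import java.util.Arrays;

// Direct rendering of the pseudocode: sort M once, then binary-search
// it for the sum of every pair drawn from L.
static boolean anyPairSumBinarySearch(long[] l, long[] m) {
    Arrays.sort(m); // O(m log m)
    for (int i = 0; i < l.length; i++) {
        for (int j = i + 1; j < l.length; j++) {
            // Note: l[i] + l[j] may wrap around for large unsigned values.
            if (Arrays.binarySearch(m, l[i] + l[j]) >= 0) {
                return true; // found a pair whose sum is in M
            }
        }
    }
    return false;
}
```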
(I'm assuming your goal is to find all pairs in L that sum to something in M)
Forget hashtables!
Sort both lists.
Then do the outer loop of your algorithm: walk over every element i in L, then every larger element j in L. As you go, form the sum and check to see if it's in M.
But don't look using a binary search: simply do a linear scan from the last place you looked. Let's say you're working on some value i, and you have some value j, followed by some value j'. When searching for (i+j), you would have got to the point in M where that value is found, or to the first value greater than it. You're now looking for (i+j'); since j' > j, you know that (i+j') > (i+j), and so it cannot be any earlier in M than the last place you reached. If L and M are both smoothly distributed, there is an excellent chance that the point in M where you would find (i+j') is only a little way off.
If the arrays are not smoothly distributed, then a jumping scan may beat a linear scan: look forward N elements at a time, halving N if the jump goes too far.
I believe this algorithm is O(n^2), which is as fast as any of the proposed hash algorithms (they have an O(1) primitive operation, but still have to perform O(n^2) of them). It also means that you don't have to worry about the O(n log n) cost of sorting. It has much better data locality than the hash algorithms: it basically consists of paired streamed reads over the arrays, repeated n times.
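Here is a minimal sketch of the linear-scan idea in Java (without the jumping optimisation, and again assuming plain signed long arithmetic): for a fixed i, the sums l[i] + l[j] only grow as j advances, so the cursor into M never has to move backwards.

```java
import java.util.Arrays;

// Sketch of the sorted linear-scan approach: for each i, resume the
// scan of M from wherever the previous sum left off.
static boolean anyPairSumLinearScan(long[] l, long[] m) {
    Arrays.sort(l);
    Arrays.sort(m);
    for (int i = 0; i < l.length; i++) {
        int k = 0; // cursor into M, reset for each i
        for (int j = i + 1; j < l.length; j++) {
            long sum = l[i] + l[j];
            while (k < m.length && m[k] < sum) {
                k++; // scan forward from the last position reached
            }
            if (k < m.length && m[k] == sum) {
                return true;
            }
        }
    }
    return false;
}
```

For each fixed i, the cursor k advances at most m times in total, so the whole thing is O(l * (l + m)), i.e. O(n^2) when the lists are of similar size.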
EDIT: I have written implementations of Paul Baker's original algorithm, Nick Larsen's hashtable algorithm, and my algorithm, and a simple benchmarking framework. The implementations are simple (linear probing in the hashtable, no skipping in my linear search), and I had to make guesses at various sizing parameters. See http://urchin.earth.li/~twic/Code/SumTest/ for the code. I welcome corrections or suggestions about any of the implementations, the framework, and the parameters.
For L and M containing 3438 items each, with values ranging from 1 to 34380, and with Larsen's hashtable having a load factor of 0.75, the median times for a run are:
- Baker (binary search): 423 716 646 ns
- Larsen (hashtable): 733 479 121 ns
- Anderson (linear search): 62 077 597 ns
The difference is much bigger than I had expected (and, I admit, not in the direction I had expected). I suspect I have made one or more major mistakes in the implementation. If anyone spots one, I really would like to hear about it!
One thing is that I have allocated Larsen's hashtable inside the timed method. It is thus paying the cost of allocation and (some) garbage collection. I think this is fair, because it's a temporary structure only needed by the algorithm. If you think it's something that could be reused, it would be simple enough to move it into an instance field and allocate it only once (and Arrays.fill it with zero inside the timed method), and see how that affects performance.
The complexity of the example code in the question is O(m log m + l^2 log m), where l = |L| and m = |M|: M is sorted first (O(m log m)), and then a binary search (O(log m)) is run for every pair of elements in L (O(l^2) pairs).
Replacing the binary search with a hash table reduces the complexity to O(l^2) (plus O(m) to build the table), assuming that hash table inserts and lookups are O(1) operations.
This is asymptotically optimal as long as you assume that you need to process every pair of numbers in the list L, as there are O(l^2) such pairs. If L contains a couple of thousand numbers and they are random 64-bit integers, then you definitely need to process all the pairs.
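A minimal Java sketch of the hash-based version, assuming java.util.HashSet is acceptable (a primitive-long hash table, as used in the benchmark code linked above, would avoid the boxing overhead):

```java
import java.util.HashSet;
import java.util.Set;

// Build a set over M once (O(m)), then test each pairwise sum with an
// expected O(1) lookup, for O(m + l^2) overall.
static boolean anyPairSumHashed(long[] l, long[] m) {
    Set<Long> mSet = new HashSet<>();
    for (long v : m) {
        mSet.add(v);
    }
    for (int i = 0; i < l.length; i++) {
        for (int j = i + 1; j < l.length; j++) {
            if (mSet.contains(l[i] + l[j])) {
                return true; // found a pair whose sum is in M
            }
        }
    }
    return false;
}
```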
Instead of sorting M at a cost of O(n log n), you could create a hash set at a cost of O(n).
You could also store all the sums in another hash set while iterating, and add a check so that you never perform the same search twice.
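A sketch of that guard, assuming an mSet and arrays as in the hash-based sketch above: Set.add returns false when the element was already present, so each distinct sum is looked up in mSet at most once.

```java
// Remember the sums already tried, so duplicate sums skip the lookup.
Set<Long> tried = new HashSet<>();
for (int i = 0; i < l.length; i++) {
    for (int j = i + 1; j < l.length; j++) {
        long sum = l[i] + l[j];
        if (tried.add(sum) && mSet.contains(sum)) {
            // found a pair whose sum is in M
        }
    }
}
```

Whether this pays off depends on how many duplicate sums actually occur; for random 64-bit values it is likely to cost more than it saves.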
You can avoid the binary search by using a hash table for M instead of a sorted array.
Alternatively, add all of the members of L to a hashset lSet, then iterate over M, performing these steps for each m in M:
- add m to a hashset mSet; if m was already in mSet, or if m is in the hashset dSet, skip this iteration;
- for each member l of L that is less than m, compute the difference d = m - l and test whether d is also in lSet;
- if so, add the pair (l, d) to some result collection rSet, and add d to the hashset dSet.
This will require fewer iterations, at the cost of more memory. You will want to pre-allocate the memory for the structures, if this is to give you a speed increase.
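A Java sketch of those steps, with the set names taken from the description above (the pair collection rSet is represented here as a list of two-element arrays):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// M-driven search: for each distinct m, look for l in L with d = m - l
// also in L, recording the (l, d) pairs found.
static List<long[]> pairsFromM(long[] l, long[] m) {
    Set<Long> lSet = new HashSet<>();
    for (long v : l) {
        lSet.add(v);
    }
    Set<Long> mSet = new HashSet<>();
    Set<Long> dSet = new HashSet<>();
    List<long[]> rSet = new ArrayList<>();
    for (long mi : m) {
        // Skip values of m seen before, or already recorded as differences.
        if (!mSet.add(mi) || dSet.contains(mi)) {
            continue;
        }
        for (long li : l) {
            if (li < mi) {
                long d = mi - li;
                if (lSet.contains(d)) {
                    rSet.add(new long[] {li, d});
                    dSet.add(d);
                }
            }
        }
    }
    return rSet;
}
```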