A few questions on Cuckoo Hashing
I need to construct a fast lookup table with 8-byte integer keys. The table is built once during initialization and the data is never updated after that. The number of items is under 100K, so I can afford to use extra space to make the hash table sparse. However, lookups have to be as efficient as possible. As far as I understand, Cuckoo Hashing seems like a good fit for this kind of scenario. However, I am not very clear about a few things:
What family of hash functions should be used in this case? Some papers suggest that the standard "((a*x + b) mod p) mod m" family is not a good choice. Moreover, p has to be a prime > UInt64.MaxValue, which forces arithmetic wider than 64 bits just to evaluate the function. The multiply-shift family "(a*x) >> (w - log(m))" is not considered a good choice either. I couldn't find a definitive answer on what function to use.
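For concreteness, here is how the multiply-shift family mentioned above is evaluated for 8-byte keys (a minimal sketch, assuming the table size m is a power of two and a is a random odd 64-bit constant redrawn at construction time; nothing here is prescribed by the papers):

    #include <cstdint>

    // Multiply-shift: hash a 64-bit key into m = 2^logM buckets (w = 64).
    // 'a' must be odd; requires 1 <= logM <= 64 so the shift is well defined.
    uint64_t multiply_shift(uint64_t x, uint64_t a, int logM) {
        return (a * x) >> (64 - logM);
    }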
The "insert" operation can trigger rehashing. So in theory the insertion time is unbounded in the worst case (you just keep choosing a "bad" function which results in rehashing). I understand, that the probability of this is close to zero but I have hard time simply ignoring this issue in production.
Are there any better data structures for this problem? The original Cuckoo Hashing paper suggests that simple linear probing can be more efficient when you have enough extra space (two to three times the number of items). Also, during the construction phase I can check whether more than two keys collide and pick a different hash function (I can afford doing this a few times and keeping the best one).
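Since the table is static (no deletions), the linear-probing alternative is particularly simple. A sketch of the lookup under my own assumptions (power-of-two table at roughly 50% load, a sentinel value that never occurs among real keys, and an illustrative mixing constant):

    #include <cstdint>
    #include <vector>

    constexpr uint64_t EMPTY = ~0ULL;  // sentinel: assumed never a real key

    bool lp_contains(const std::vector<uint64_t>& table, uint64_t key) {
        const uint64_t mask = table.size() - 1;  // table size is 2^k
        // Use the high bits of the product: they are better mixed than
        // the low bits for a multiplicative hash.
        uint64_t i = ((0x9E3779B97F4A7C15ULL * key) >> 32) & mask;
        while (true) {
            if (table[i] == key)   return true;
            if (table[i] == EMPTY) return false;  // hit a hole: key is absent
            i = (i + 1) & mask;
        }
    }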
Thanks a lot for your responses.
Almost any hash function will do. Since you only have 100K keys, just make sure it's at least 2-wise independent (see http://www.eecs.harvard.edu/~michaelm/postscripts/soda2008b.pdf), or just use something fast.
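If "something fast" is the route you take, one widely used option for 64-bit integer keys is a finalizer-style mixer such as MurmurHash3's fmix64; the seed parameter is my addition (not part of the original function) so that a failed build can retry with a fresh function:

    #include <cstdint>

    // MurmurHash3's 64-bit finalizer, plus a seed so a rebuild can pick a
    // new function. Not formally 2-wise independent, but a strong and very
    // fast mixer that works well in practice for integer keys.
    uint64_t fmix64(uint64_t x, uint64_t seed) {
        x ^= seed;
        x ^= x >> 33;
        x *= 0xFF51AFD7ED558CCDULL;
        x ^= x >> 33;
        x *= 0xC4CEB9FE1A85EC53ULL;
        x ^= x >> 33;
        return x;
    }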
The insert procedure works in amortized/expected O(1) time if you do an exhaustive search, and since you are doing this once at startup you'll be fine. With less than 50% utilization (i.e. number of slots > 2x number of keys) the probability that an insert triggers a rehash is small. You can make it even smaller by using a stash (http://www.eecs.harvard.edu/~michaelm/postscripts/esa2008full.pdf), and lookups remain constant time. In any case, just retry until everything works, since you are only doing this at initialization.
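The stash only adds a constant-size scan to each lookup. A sketch of the lookup path, reusing the Slot type and h1/h2 from the insert sketch above (the very small stash size follows the paper; everything else here is illustrative):

    // Lookup with a stash: probe the two cuckoo slots, then scan a tiny
    // overflow array holding the few keys whose insertion would otherwise
    // have forced a rehash (typically <= 4 entries at this load factor).
    bool contains(const std::vector<Slot>& t1, const std::vector<Slot>& t2,
                  const std::vector<uint64_t>& stash, int logM, uint64_t key) {
        if (t1[h1(key, logM)] == key) return true;
        if (t2[h2(key, logM)] == key) return true;
        for (uint64_t s : stash)       // constant-size fallback scan
            if (s == key) return true;
        return false;
    }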
Cut twice, measure once.