Are there URL-specific hashCode methods?
Is there a way to generate a memory-efficient "ID" for a URL?
At the moment I have a cache à la Set<String> for my URLs, so I can easily check whether a URL has already been resolved by my crawler. This requires a lot of memory, so I replaced it with a Set<Long> and used the hashCode of the URLs. The problem is that even for 40k URLs there are 10 collisions. An improved method that uses long instead of the int hashCode reduces this to 6 collisions, but short URLs in particular, which often look very similar at the beginning, still cause problems:
5852015146777169869 http://twitpic.com/5xuwuk
5852015146777169869 http://twitpic.com/5xuw7m
So I ended up with the following URL-specific double-hashing method, which gives no collisions for 2.5 million URLs, which is fine for me:
public static long urlHashing(String str) {
    if (str.length() < 2)
        return str.hashCode();

    long val = longHashCode(str, 31, false);
    if (str.length() > 3)
        // use the end of the string because those short URLs
        // are often identical at the beginning
        return 43 * val + longHashCode(str.substring(str.length() / 2), 37, true);
    return val;
}
public static long longHashCode(String str, int num, boolean up) {
    int len = str.length();
    if (len == 0)
        return 0;

    long h = 0;
    // copying to a temp array is only a tiny bit slower in our case,
    // so iterating via charAt here is ~2ms faster for 40k URLs
    if (up) {
        for (int i = 0; i < len;) {
            h = num * h + str.charAt(i++);
        }
    } else {
        for (int i = len - 1; i >= 0;) {
            h = num * h + str.charAt(i--);
        }
    }
    return h;
}
But now I wondered: are there theories or (Google ;)) papers about URL-specific hashing algorithms? Or simply: can I further reduce the collisions for URLs, or do you see any problems or possible improvements in my current solution?
Update:
- Another approach is to separate the URL into protocol, address and file, as is done in the
  new URL(str).hashCode()
  method (which cannot be used directly because it is very slow: it resolves the host name on the fly :/) - see squid-cache.org or the CacheDigest explanation.
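A minimal sketch of that idea using java.net.URI instead of java.net.URL: URI parses the string purely lexically and never resolves the host, so splitting into scheme/host/path/query stays cheap. The method name and mixing constants below are illustrative, not a fixed recipe:

import java.net.URI;
import java.net.URISyntaxException;

public static long uriPartsHash(String str) {
    try {
        URI u = new URI(str);
        long h = 17;
        h = 31 * h + (u.getScheme()   == null ? 0 : u.getScheme().hashCode());
        h = 31 * h + (u.getHost()     == null ? 0 : u.getHost().hashCode());
        h = 31 * h + (u.getRawPath()  == null ? 0 : u.getRawPath().hashCode());
        h = 31 * h + (u.getRawQuery() == null ? 0 : u.getRawQuery().hashCode());
        return h;
    } catch (URISyntaxException e) {
        // fall back for strings that are not valid URIs
        return str.hashCode();
    }
}

Note that each part still contributes only 32 bits of entropy, so this alone does not remove collisions; it mainly avoids the host lookup that makes URL.hashCode() slow.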
If you want something that works all the time, not just most of the time, short hashes aren't going to cut it. At any length shorter than about 128 bits, as you've observed, even an ideal hash will have a significant collision rate. What you have is a scaling problem, and all you've done by using hash codes is reduce the constant factor; it's still O(n).
It sounds like your strings have a lot of prefixes in common, though - have you considered using a trie to store them? See the sketch below.
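A minimal sketch of such a trie, assuming a plain character-level trie with a HashMap per node (class and method names are illustrative; in practice a compressed/patricia trie would be far more memory-friendly):

import java.util.HashMap;
import java.util.Map;

// Common URL prefixes such as "http://twitpic.com/" are stored only once,
// and membership tests are exact, so there are no collisions at all.
public class UrlTrie {
    private static final class Node {
        final Map<Character, Node> children = new HashMap<>();
        boolean terminal; // true if a URL ends at this node
    }

    private final Node root = new Node();

    /** Inserts the URL; returns true if it was not present before. */
    public boolean add(String url) {
        Node n = root;
        for (int i = 0; i < url.length(); i++) {
            n = n.children.computeIfAbsent(url.charAt(i), c -> new Node());
        }
        boolean added = !n.terminal;
        n.terminal = true;
        return added;
    }

    public boolean contains(String url) {
        Node n = root;
        for (int i = 0; i < url.length(); i++) {
            n = n.children.get(url.charAt(i));
            if (n == null) return false;
        }
        return n.terminal;
    }
}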
You should probably use an MD5 hash. The collision rate should be much smaller.
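A minimal sketch of that approach, keeping the full 128-bit digest instead of truncating to a long; packing it into a java.util.UUID gives content-based equals/hashCode for the Set. The class and method names are illustrative:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HashSet;
import java.util.Set;
import java.util.UUID;

public class Md5UrlCache {
    private final Set<UUID> seen = new HashSet<>();

    /** Returns true if the URL was not seen before. */
    public boolean addIfUnseen(String url) {
        byte[] d;
        try {
            d = MessageDigest.getInstance("MD5")
                    .digest(url.getBytes(StandardCharsets.UTF_8));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // MD5 is available on every JDK
        }
        // pack the 16-byte digest into two longs
        long hi = 0, lo = 0;
        for (int i = 0; i < 8; i++)  hi = (hi << 8) | (d[i] & 0xFF);
        for (int i = 8; i < 16; i++) lo = (lo << 8) | (d[i] & 0xFF);
        return seen.add(new UUID(hi, lo));
    }
}

MD5 is not cryptographically secure, but for non-adversarial deduplication in a crawler an accidental 128-bit collision is astronomically unlikely.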