I am building a search engine. How do I remove duplicates from search results?
When I search for something, I get results that have the same text and title. Of course, there is always an original (that the others copy/leech from).
If you have expertise in search and crawling, how do you recommend I remove these duplicates? (in a feasible and efficient manner)
Sounds like a programming question to me.
If you have a clear idea of which parts of these pages are copied and which are original, and those differences are general enough that you can write a filter to separate them, then do that: extract the copied content, hash it, and compare hashes to determine whether two pages are the same.
I'd guess web-page thieves might go to some further obfuscation to mess you up, such as changing whitespace, so you may want to normalise the HTML before hashing: for instance, remove redundant whitespace, make all attributes use " quotes, and so on.
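A minimal sketch of that normalise-then-hash idea in Python. The regex-based attribute rewrite is a rough heuristic for illustration; a real pipeline would normalise through an HTML parser:

```python
import hashlib
import re

def normalize_html(html: str) -> str:
    """Collapse whitespace and force double-quoted attributes so that
    trivially reformatted copies hash to the same value."""
    # Rewrite single-quoted attribute values as double-quoted ones
    # (a rough heuristic; a real pipeline would use an HTML parser).
    html = re.sub(r"='([^']*)'", r'="\1"', html)
    # Collapse runs of whitespace to a single space and trim.
    return re.sub(r"\s+", " ", html).strip()

def content_hash(html: str) -> str:
    """Hash the normalized content; equal hashes flag candidate duplicates."""
    return hashlib.sha256(normalize_html(html).encode("utf-8")).hexdigest()

# Two trivially obfuscated copies of the same page:
a = "<div class='post'>Hello   world</div>"
b = '<div class="post">Hello world</div>'
print(content_hash(a) == content_hash(b))  # True
```

Exact hashing only catches byte-identical content after normalisation; any real edit to the copied text defeats it, which is where near-duplicate techniques like simhash come in.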
Here's a technique based on simhash.
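For context, a toy simhash in Python: each token is hashed, the bit positions are summed with +1/-1 weights, and the signs form the fingerprint. Unlike an ordinary hash, near-duplicate texts produce fingerprints with a small Hamming distance:

```python
import hashlib
import re

def simhash(text: str, bits: int = 64) -> int:
    """Weighted-bit fingerprint: similar texts yield fingerprints
    that differ in only a few bit positions."""
    v = [0] * bits
    for token in re.findall(r"\w+", text.lower()):
        h = int.from_bytes(hashlib.md5(token.encode()).digest()[:8], "big")
        for i in range(bits):
            v[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if v[i] > 0)

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

doc1 = "the quick brown fox jumps over the lazy dog"
doc2 = "the quick brown fox leaps over the lazy dog"   # one word changed
doc3 = "completely unrelated advertisement text here"
print(hamming(simhash(doc1), simhash(doc2)),
      hamming(simhash(doc1), simhash(doc3)))
# near-duplicates should sit far closer in Hamming distance
```

In production you would index fingerprints so that all pages within a small Hamming distance of a new page can be found without a full scan.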
Here's one that uses stopwords to work around ads.
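The stopword trick, roughly sketched (a SpotSigs-flavoured illustration; the stopword list and chain length here are arbitrary choices): signatures are built only from words that follow stopwords. Ads and navigation boilerplate contain few stopwords, so they contribute little to the signature set, and pages are compared by Jaccard overlap of their signatures:

```python
import re

# Hypothetical, deliberately tiny stopword list for illustration.
STOPWORDS = {"the", "a", "an", "is", "of", "to", "in"}

def spot_signatures(text: str, chain: int = 2) -> set:
    """Collect the `chain` non-stopwords following each stopword.
    Boilerplate (ads, menus) has few stopwords and so adds few signatures."""
    tokens = re.findall(r"\w+", text.lower())
    sigs = set()
    for i, tok in enumerate(tokens):
        if tok in STOPWORDS:
            chain_words = [w for w in tokens[i + 1:] if w not in STOPWORDS][:chain]
            if len(chain_words) == chain:
                sigs.add((tok, *chain_words))
    return sigs

def jaccard(a: set, b: set) -> float:
    """Overlap of two signature sets; near 1.0 suggests duplicates."""
    return len(a & b) / len(a | b) if a | b else 0.0

article = "the cat sat on the mat in the sun"
copy_with_edit = "the cat sat on the mat near a window"
ad_block = "buy cheap watches now best deals online"   # no stopwords at all
print(jaccard(spot_signatures(article), spot_signatures(copy_with_edit)))
print(spot_signatures(ad_block))  # empty: the ad contributes nothing
```

Two pages whose signature overlap exceeds some threshold are treated as duplicates, regardless of the ads wrapped around them.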
Have you tried looking at the origin date of the site? After comparing word strings to verify duplication, whitelist the one that appeared earlier.