
Detecting similar words among n text documents

I have n documents and want to find common words that appear across these documents. For example, I want to be able to say that (n-3) of the documents include the word "web".

Certainly I can do this with basic data structures, but there may be a more efficient algorithm, or a way to handle the same word with different suffixes. Is there an algorithm for this purpose?

I am unfamiliar with the data-mining world. More generally, is there a term for the task of finding similarities between different documents? If there is, it will make my research easier.

Thanks.


I suppose that you are talking about stemming. If you want to use the R language, you'll have to work with the tm package.

  • Introduction to the tm Package
  • Text Mining Infrastructure in R

If not, I can only suggest this list of text mining tools.
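
For readers not working in R, here is a minimal sketch of the stemming idea in Python, assuming the NLTK library is available (the words are made-up examples); a stemmer collapses suffix variants so they count as the same term:

    # Sketch: collapse suffix variants with a stemmer (assumes nltk is installed).
    from nltk.stem import PorterStemmer

    stemmer = PorterStemmer()

    words = ["web", "webs", "connection", "connections", "connected"]
    stems = [stemmer.stem(w) for w in words]
    print(stems)  # e.g. ['web', 'web', 'connect', 'connect', 'connect']

Stemming before counting means that counting "web" also picks up "webs" and similar variants.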


You can do it by producing a word list with counts for each document, sorting each word list alphabetically, and then comparing the sorted lists pairwise. This is O(n lg n) in the total number of words.
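
As a rough illustration in Python (a sketch with made-up sample documents; it uses hash-based counting rather than the sort-and-merge described above, but yields the same document frequencies):

    from collections import Counter

    # Hypothetical sample documents standing in for the n real ones.
    docs = [
        "the web is large",
        "a web of documents",
        "text mining finds patterns",
    ]

    # One word set per document, then count in how many documents each word occurs.
    doc_freq = Counter()
    for doc in docs:
        doc_freq.update(set(doc.lower().split()))

    # Words that appear in at least two of the documents.
    common = [w for w, df in doc_freq.items() if df >= 2]
    print(common)  # e.g. ['web']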

Another approach is to use the full-text search facilities provided by your database of choice.
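
For example, with SQLite (a sketch, assuming your sqlite3 build ships with the FTS5 extension, and using the same made-up documents as above):

    import sqlite3

    # Sketch: full-text search with SQLite's FTS5 extension.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE VIRTUAL TABLE docs USING fts5(body)")
    con.executemany(
        "INSERT INTO docs(body) VALUES (?)",
        [("the web is large",), ("a web of documents",), ("text mining finds patterns",)],
    )

    # Count how many documents contain the term 'web'.
    (count,) = con.execute("SELECT count(*) FROM docs WHERE docs MATCH 'web'").fetchone()
    print(count)  # e.g. 2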
