Besides NLTK, what is the best information retrieval library for Python? [closed]
For analyzing documents on the Internet.
Alternatively, R has many tools available for text mining, and it's easy to integrate with Python using RPy2.
Have a look at the Natural Language Processing task view on CRAN. In particular, look at the tm package. Here are some relevant links (a short rpy2 sketch follows them):
- Paper about the package in the Journal of Statistical Software: http://www.jstatsoft.org/v25/i05/paper. The paper includes a nice example of an analysis of postings to the R-devel mailing list (https://stat.ethz.ch/pipermail/r-devel/) from 2006.
- Package homepage: http://cran.r-project.org/web/packages/tm/index.html
- Look at the introductory vignette: http://cran.r-project.org/web/packages/tm/vignettes/tm.pdf
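Since the question is about Python, here is a minimal sketch of driving tm from Python through rpy2. It assumes R, the tm package, and rpy2 are installed; the two document strings are made up purely for illustration, and the exact call pattern may need adjusting for your rpy2 version.

```python
from rpy2.robjects.packages import importr
from rpy2.robjects.vectors import StrVector

# Load R's tm package into the embedded R session.
tm = importr("tm")

# Two toy documents (placeholders) as an R character vector.
docs = StrVector([
    "Information retrieval with Python and R.",
    "The tm package builds term-document matrices.",
])

# Build a corpus from the in-memory vector and derive a term-document matrix.
corpus = tm.Corpus(tm.VectorSource(docs))
tdm = tm.TermDocumentMatrix(corpus)

# Print R's summary of the matrix (terms, documents, sparsity).
print(tdm)
```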
In addition, R provides many tools for parsing HTML or XML. Have a look at this question for an example using the RCurl and XML packages.
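In the same spirit, a rough sketch of fetching and parsing a page with RCurl and XML through rpy2 might look like the following; the URL and XPath query are only placeholders, and how you convert the results back to Python objects may vary by rpy2 version.

```python
from rpy2.robjects import r
from rpy2.robjects.packages import importr

rcurl = importr("RCurl")
xml = importr("XML")

# Fetch a page (placeholder URL) and parse it with R's XML package.
html = rcurl.getURL("https://www.r-project.org/")
doc = xml.htmlParse(html, asText=True)

# Pull out the <a> elements with an XPath query and count them.
links = xml.getNodeSet(doc, "//a")
print(r["length"](links)[0])
```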
Could you please provide more information about why NLTK is insufficient, or which features you need, before calling some framework the "best"?
Nevertheless, there is the built-in shlex lexical parsing library.
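For instance, a quick illustration of what shlex gives you (shell-style tokenization of quoted strings, not a full information-retrieval toolkit); the example string is made up:

```python
import shlex

# shlex splits text the way a POSIX shell would, keeping quoted phrases intact.
tokens = shlex.split('index "information retrieval" --lang en')
print(tokens)  # ['index', 'information retrieval', '--lang', 'en']
```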
There is also a recent book on the subject, Natural Language Processing with Python. It looks like at least part of it covers NLTK.
You might also want to look at this list of tutorials and libraries on the awaretek website, which also points to the NLQ.py framework.