I'm building an NLP application and have been using the Stanford Parser for most of my parsing work, but I would like to start using Python.
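NLTK can drive the Stanford tools from Python. Below is a minimal sketch using NLTK's CoreNLP client; it assumes a Stanford CoreNLP server is already running on localhost:9000 (older NLTK releases exposed a similar wrapper in nltk.parse.stanford instead), and the example sentence is just illustrative.

    # Minimal sketch: parse a sentence with the Stanford parser from Python,
    # via NLTK's CoreNLP client. Assumes a CoreNLP server is already running
    # locally on port 9000.
    from nltk.parse.corenlp import CoreNLPParser

    parser = CoreNLPParser(url='http://localhost:9000')

    # raw_parse returns an iterator of parse trees for the sentence.
    tree = next(parser.raw_parse('The quick brown fox jumps over the lazy dog.'))
    tree.pretty_print()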
I'm doing a project on mining blog content and I need help deciding which tool to use. When do I use a parser, when do I use a tagger, and when do I need an NER tool?
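As a rough rule of thumb: a tagger gives each word a part-of-speech label, an NER tool groups words into named-entity spans (people, places, organisations), and a parser builds full syntactic structure and is only needed when you care about phrase or dependency relations. A small NLTK sketch of the first two (the example sentence is illustrative; the listed downloads are NLTK's standard resource names):

    import nltk

    # One-time resource downloads:
    # nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')
    # nltk.download('maxent_ne_chunker'); nltk.download('words')

    tokens = nltk.word_tokenize("Barack Obama visited Paris last summer.")

    # Tagger: one part-of-speech label per token, e.g. ('Paris', 'NNP').
    tagged = nltk.pos_tag(tokens)

    # NER: groups tagged tokens into entities such as PERSON or GPE.
    entities = nltk.ne_chunk(tagged)
    print(entities)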
I'm working on a project at the moment where it would be really useful to be able to detect when a certain topic/idea is mentioned in a body of text. For instance, if the text contained:
I have a large dataset (c. 40G) that I want to use for some NLP (largely embarrassingly parallel) over a couple of computers in the lab, to which I do not have root access, and only 1G of
I am trying to import NLTK in my Python code and I get this error:

    Traceback (most recent call last):
      File "/home/afs/NetBeansProjects/NER/getNE_followers.py", line 7, in <module>
I am curious whether an algorithm/method exists to generate keywords/tags from a given text, using weight calculations, occurrence ratios, or other tools.
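A simple starting point is to weight words by how often they occur in the text after dropping stopwords; TF-IDF against a background corpus refines this by down-weighting words that are common everywhere. A rough sketch of the frequency-based version (the extract_keywords helper and the tiny stopword list are just illustrative):

    # Score each word by its raw frequency, ignoring stopwords and very
    # short tokens, and return the top-n as candidate keywords/tags.
    from collections import Counter
    import re

    STOPWORDS = {'the', 'a', 'an', 'of', 'and', 'to', 'in', 'is', 'it', 'that', 'for', 'are'}

    def extract_keywords(text, n=10):
        words = re.findall(r"[a-z']+", text.lower())
        counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
        return [word for word, _ in counts.most_common(n)]

    print(extract_keywords("NLTK makes tagging, chunking and parsing text easy. "
                           "Tagging and parsing are common NLP tasks."))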
I have a web application that translates sentences into English; the user chooses options from drop-downs that basically provide the context. Now I want to turn the word and the context
Which part of the huge NLTK package must I study and use if I need to mark geonames in text? You'll want to use its named entity recognizer, nltk.ne_chunk.
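A short sketch of pulling geographic names out with nltk.ne_chunk; NLTK's chunker labels places as GPE (and occasionally LOCATION), and the example sentence here is purely illustrative:

    import nltk

    # Requires the 'punkt', 'averaged_perceptron_tagger', 'maxent_ne_chunker'
    # and 'words' resources (install once via nltk.download(...)).
    text = "She flew from Berlin to Reykjavik via London."
    tree = nltk.ne_chunk(nltk.pos_tag(nltk.word_tokenize(text)))

    # Geographic names come back as subtrees labelled GPE or LOCATION.
    places = [' '.join(tok for tok, tag in subtree.leaves())
              for subtree in tree.subtrees()
              if subtree.label() in ('GPE', 'LOCATION')]
    print(places)  # e.g. ['Berlin', 'Reykjavik', 'London']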
I'm trying to run a Python script using exec() from within PHP. My command works fine when I run it directly using a cmd window, but it produces an error when I run it from exec() in PHP.
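When a script works from a cmd window but fails under PHP's exec(), the usual suspect is the environment: the web-server account often has a different PATH, HOME and NLTK_DATA than your interactive session, so a different Python interpreter (or none) gets picked up. A small diagnostic sketch, assuming the failure is environment-related, is to have the script report what it actually runs under; on the PHP side, passing the full path to the interpreter and appending 2>&1 to the command so stderr is captured in exec()'s output array helps surface the real error.

    # Diagnostic: print the interpreter and environment the script sees when
    # launched from PHP, so you can compare against your cmd-window session.
    import os
    import sys

    print("interpreter:", sys.executable)
    print("cwd:", os.getcwd())
    print("PATH:", os.environ.get("PATH"))
    print("NLTK_DATA:", os.environ.get("NLTK_DATA"))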
I have recently expanded the names corpus in NLTK and would like to know how I can