
Identifying important words and phrases in text

I have text stored in a Python string.

What I Want

  1. To identify key words in that text.
  2. To identify N-grams in that text (ideally more than just bigrams and trigrams).

Keep in mind...

  • The text might be small (e.g. tweet-sized)
  • The text might be medium (e.g. news-article-sized)
  • The text might be large (e.g. book- or chapter-sized)

What I Have

I'm already using NLTK to break the corpus into tokens and remove stopwords:

    import nltk

    # split on any run of characters that isn't a word character or apostrophe
    tokenizer = nltk.tokenize.RegexpTokenizer(r"[^\w']+", gaps=True)

    # tokenize
    tokens = tokenizer.tokenize(text)

    # remove stopwords (a set makes the membership test O(1))
    stopwords = set(nltk.corpus.stopwords.words('english'))
    tokens = [w for w in tokens if w not in stopwords]

I'm aware of the BigramCollocationFinder and TrigramCollocationFinder, which do exactly what I'm looking for in those two cases.
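For reference, a minimal sketch of how those finders are typically used (assuming the `tokens` list produced above; the PMI measure and the frequency cutoff are illustrative choices, not requirements):

    import nltk

    bigram_measures = nltk.collocations.BigramAssocMeasures()
    finder = nltk.collocations.BigramCollocationFinder.from_words(tokens)
    finder.apply_freq_filter(3)   # ignore bigrams seen fewer than 3 times
    print(finder.nbest(bigram_measures.pmi, 10))  # ten highest-scoring bigrams

The trigram case is analogous with `TrigramCollocationFinder` and `TrigramAssocMeasures`.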

The Question

I need advice on n-grams of higher order, on improving the kinds of results that come from the bigram and trigram collocation finders, and on the best way to identify the most distinctive individual keywords.

Many thanks!


As for the best way to identify the most distinctive individual keywords, tf-idf is the standard measure. To compute it efficiently and on the fly, you will need to integrate a search engine (or build a simple custom inverted index that is dynamic and keeps term frequencies and document frequencies).
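A minimal sketch of such a dynamic index, using only the standard library (the class name, the +1 smoothing in the idf term, and the overall structure are illustrative assumptions, not a reference implementation):

    import math
    from collections import Counter, defaultdict

    class InvertedIndex:
        """Toy dynamic index holding term frequencies and document frequencies."""

        def __init__(self):
            self.doc_term_freqs = []            # per-document term counts
            self.doc_freqs = defaultdict(int)   # term -> number of docs containing it

        def add_document(self, tokens):
            counts = Counter(tokens)
            self.doc_term_freqs.append(counts)
            for term in counts:
                self.doc_freqs[term] += 1

        def tfidf(self, term, doc_id):
            tf = self.doc_term_freqs[doc_id][term]
            if tf == 0:
                return 0.0
            n_docs = len(self.doc_term_freqs)
            idf = math.log(n_docs / (1 + self.doc_freqs[term]))  # +1 smoothing
            return tf * idf

Ranking a document's tokens by `tfidf` then surfaces the words that are frequent in that document but rare across the rest of the collection.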

As for your n-grams, why don't you create a simple parser using a "window" approach (where the window has length N) that identifies, say, the most frequent of them? (Just keep every n-gram as a key in a dictionary, with the value being either its frequency or a score based on the tf-idf of its individual terms.)
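A minimal sketch of that window approach, assuming the `tokens` list from the question (`collections.Counter` handles the dictionary bookkeeping):

    from collections import Counter

    def ngram_counts(tokens, n):
        """Slide a window of length n over the tokens and count each n-gram."""
        windows = (tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
        return Counter(windows)

    # e.g. the ten most frequent 5-grams
    print(ngram_counts(tokens, 5).most_common(10))

If you'd rather not roll your own, `nltk.ngrams(tokens, n)` yields the same windows for arbitrary n.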

