NLP algorithm to 'fill out' search terms

I'm trying to write an algorithm (which I'm assuming will rely on natural language processing techniques) to 'fill out' a list of search terms. There is probably a name for this kind of thing which I'm unaware of. What is this kind of problem called, and what kind of algorithm will give me the following behavior?

Input:

    docs = [
        "I bought a ticket to the Dolphin Watching cruise",
        "I enjoyed the Dolphin Watching tour",
        "The Miami Dolphins lost again!",
        "It was good going to that Miami Dolphins game",
    ]
    search_term = "Dolphin"

Output:

["Dolphin Watchin开发者_如何学Pythong", "Miami Dolphins"]

It should basically figure out that if "Dolphin" appears at all, it's virtually always either in the bigrams "Dolphin Watching" or "Miami Dolphins". Solutions in Python preferred.


> It should basically figure out that if "Dolphin" appears at all, it's virtually always either in the bigrams "Dolphin Watching" or "Miami Dolphins".

Sounds like you want to determine the collocations that Dolphin occurs in. There are various methods for collocation finding, the most popular being to compute point-wise mutual information (PMI) between terms in your corpus, then select the terms with the highest PMI for Dolphin. You might remember PMI from the sentiment analysis algorithm that I suggested earlier.

A Python implementation of various collocation finding methods is included in NLTK as nltk.collocations. The area is covered in some depth in Manning and Schütze's Foundations of Statistical Natural Language Processing (FSNLP, 1999, but still current for this topic).
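
For example, here is a minimal sketch of that idea using nltk.collocations on the corpus from the question. The whitespace tokenization, the tiny hand-rolled stopword filter, the frequency cutoff of 2, and the prefix match that lets "Dolphin" also cover "Dolphins" are all my own simplifications:

    from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder

    docs = [
        "I bought a ticket to the Dolphin Watching cruise",
        "I enjoyed the Dolphin Watching tour",
        "The Miami Dolphins lost again!",
        "It was good going to that Miami Dolphins game",
    ]
    search_term = "Dolphin"

    # Naive whitespace tokenization keeps the sketch dependency-free;
    # nltk.word_tokenize would handle punctuation like "again!" better.
    tokenized_docs = [doc.split() for doc in docs]

    # Count bigrams within each document (no bigram spans a document boundary).
    finder = BigramCollocationFinder.from_documents(tokenized_docs)

    # Drop pairs built from a few function words so that e.g. "the Dolphin"
    # doesn't crowd out the real collocations, and require each bigram to
    # occur at least twice ("virtually always" in the question's terms).
    finder.apply_word_filter(lambda w: w.lower() in {"i", "a", "to", "the", "that", "it", "was"})
    finder.apply_freq_filter(2)

    # Score the surviving bigrams by point-wise mutual information and keep
    # the ones containing the search term.
    scored = finder.score_ngrams(BigramAssocMeasures().pmi)
    hits = [(w1, w2) for (w1, w2), score in scored
            if w1.startswith(search_term) or w2.startswith(search_term)]

    print([" ".join(pair) for pair in hits])
    # Should print ['Dolphin Watching', 'Miami Dolphins'] on this toy corpus.

On a real corpus you would swap the hand-rolled stopword set for a proper stopword list and normalize word forms before matching, but the PMI scoring itself is exactly what nltk.collocations provides out of the box.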


I used the Natural Language Toolkit (NLTK) in my NLP class at university with decent success. I think it has some taggers that can help you determine which words are nouns, and it can help you parse sentences into a tree. I don't remember much, but I'd start there.
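
If you go the NLTK route, a quick tagging sketch looks like the following. The punkt and averaged_perceptron_tagger downloads are the standard tokenizer and tagger data (exact package names vary a bit between NLTK versions), and grouping adjacent proper-noun tags into a phrase is my own shortcut rather than anything this answer spells out:

    import nltk

    # One-time model downloads (exact package names differ slightly across
    # NLTK versions).
    nltk.download("punkt")
    nltk.download("averaged_perceptron_tagger")

    sentence = "The Miami Dolphins lost again!"
    tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
    print(tagged)
    # Roughly: [('The', 'DT'), ('Miami', 'NNP'), ('Dolphins', 'NNPS'),
    #           ('lost', 'VBD'), ('again', 'RB'), ('!', '.')]

    # Runs of proper-noun tags (NNP/NNPS) make decent candidate phrases.
    proper_nouns = [word for word, tag in tagged if tag.startswith("NNP")]
    print(" ".join(proper_nouns))  # e.g. "Miami Dolphins"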
