Simple NLP: How to use n-grams for word similarity?
I hear that Google uses up to 7-grams for their semantic-similarity comparison. I am interested in finding words that are similar in context (e.g. cat and dog), and I was wondering how to compute the similarity of two words on an n-gram model given that n > 2.
So basically, given a text like "hello my name is blah blah. I love cats", I generate a 3-gram set of it:
[('hello', 'my', 'name'), ('my', 'name', 'is'), ('name', 'is', 'blah'), ('is', 'blah', 'blah'), ('blah', 'blah', 'I'), ('blah', 'I', 'love'), ('I', 'love', 'cats')]
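For reference, that trigram list can be produced with a minimal sliding-window sketch in Python (tokenizing naively on whitespace and stripping the sentence-final period, since the list above crosses the sentence boundary):

```python
# Build word n-grams by sliding a window of size n over the token list.
def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

# Naive tokenization: split on whitespace and drop the trailing period.
text = "hello my name is blah blah. I love cats"
tokens = [t.rstrip('.') for t in text.split()]

trigrams = ngrams(tokens, 3)
print(trigrams)  # 7 trigrams, from ('hello', 'my', 'name') to ('I', 'love', 'cats')
```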
PLEASE DO NOT RESPOND IF YOU ARE NOT GIVING SUGGESTIONS ON HOW TO DO THIS SPECIFIC NGRAM PROBLEM
What kind of calculation could I use to find the similarity between 'cats' and 'name' (which should be 0.5)? I know how to do this with bigrams: simply divide freq(cats, name) by (freq(cats) + freq(name)). But what about n > 2?
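One direct generalization of that bigram formula is to count n-grams in which both words co-occur, divided by the total count of n-grams containing either word. This is an assumption about what you intend, not a standard measure, and on the tiny sample above it does not give 0.5 for 'cats' and 'name' (no trigram contains both):

```python
# Sketch: generalize freq(w1, w2) / (freq(w1) + freq(w2)) from bigrams to n-grams
# by treating each n-gram as a co-occurrence window.
def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def similarity(w1, w2, grams):
    both = sum(1 for g in grams if w1 in g and w2 in g)
    either = sum(1 for g in grams if w1 in g) + sum(1 for g in grams if w2 in g)
    return both / either if either else 0.0

grams = ngrams("hello my name is blah blah I love cats".split(), 3)
print(similarity('cats', 'name', grams))  # 0.0 on this sample
print(similarity('blah', 'name', grams))  # 1/7: one shared trigram out of 4 + 3
```

On a real corpus, larger n widens the co-occurrence window, so more distant word pairs pick up nonzero scores.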
I googled "similarities between trigrams" and came up with this article, which breaks words up into 3-letter segments. I know that is not exactly what you are looking for, but maybe it will help enough to get you going.
The article also compares two words based on the 3-letter approach. It seems the comparison would need to be between two search terms, like "hello my name is blah blah. I love cats" and "my name is something else. I love dogs". Of course, I don't know much about the domain, so if that is incorrect, my apologies; I was just hoping to spur some thought on your question.
I don't know how Google works, but one known method is calculating the co-occurrence of words across documents. Given that Google has practically all documents available, it is easy to compute that co-occurrence factor along with each word's frequency, and from those you can derive an association strength between words. Note this is not really a measure of semantic similarity (like cat and dog) but rather something closer to collocation.
Take a look:
https://en.wikipedia.org/wiki/Tf–idf
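To make the tf–idf idea concrete, here is a minimal sketch over the two example sentences from this thread (the unsmoothed idf variant below is an assumption; production systems usually add smoothing to avoid division by zero for unseen terms):

```python
import math

# Toy corpus: the two example sentences used earlier in the thread.
docs = [
    "hello my name is blah blah I love cats".split(),
    "my name is something else I love dogs".split(),
]

def tf_idf(term, doc, docs):
    tf = doc.count(term) / len(doc)           # term frequency within one document
    df = sum(1 for d in docs if term in d)    # document frequency across the corpus
    return tf * math.log(len(docs) / df)      # unsmoothed idf; assumes df > 0

# 'name' appears in every document, so its idf (and hence its weight) is zero;
# 'cats' is specific to the first document and gets a positive weight.
print(tf_idf('name', docs[0], docs))
print(tf_idf('cats', docs[0], docs))
```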
Another approach would be to drop internet documents and focus only on dictionary entries; there have been several attempts to parse those entries and build a "common knowledge" system. That way you could derive relationships automatically (WordNet and the like are manually crafted).
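The dictionary-parsing idea can be sketched with a toy heuristic. The glosses and the head-noun rule below are illustrative assumptions, not a real parser; actual systems need proper syntactic analysis of the definitions:

```python
# Hypothetical hand-written glosses standing in for parsed dictionary entries.
glosses = {
    "cat": "a small domesticated carnivorous animal",
    "dog": "a domesticated carnivorous animal",
}

def hypernym_candidate(word, glosses):
    # Crude heuristic: treat the last word of the gloss as the head noun
    # ("... animal" -> "animal"), i.e. a candidate is-a parent.
    return glosses[word].split()[-1]

# Two words sharing a hypernym candidate are related, WordNet-style.
print(hypernym_candidate("cat", glosses))
print(hypernym_candidate("cat", glosses) == hypernym_candidate("dog", glosses))
```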