
Adding words to nltk stoplist

I have some code that removes stop words from my data set. Since the stop list doesn't seem to remove a majority of the words I would like it to, I'm looking to add words to this stop list so that it will remove them in this case. The code I'm using to remove stop words is:

word_list2 = [w.strip() for w in word_list if w.strip() not in nltk.corpus.stopwords.words('english')]

I'm unsure of the correct syntax for adding words and can't seem to find the correct one anywhere. Any help is appreciated. Thanks.


You can simply use the append method to add words to it:

stopwords = nltk.corpus.stopwords.words('english')
stopwords.append('newWord')

or extend to append a list of words, as suggested by Charlie in the comments.

stopwords = nltk.corpus.stopwords.words('english')
newStopWords = ['stopWord1','stopWord2']
stopwords.extend(newStopWords)
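
For example, reusing the extended list above on a hypothetical word_list (the values are just for illustration):

word_list = ['this', 'is', 'a', 'stopWord1', 'sample']
filtered = [w for w in word_list if w.strip() not in stopwords]
print(filtered)   # ['sample']; 'stopWord1' is now filtered out along with the built-in stop words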


import nltk

stopwords = nltk.corpus.stopwords.words('english')
# words we want to treat as additional stop words
new_words = ('re', 'name', 'user', 'ct')
for i in new_words:
    stopwords.append(i)
print(stopwords)


The way I did it on my Ubuntu machine was: I searched for "stopwords" from the root directory. The search gave me a folder; I stepped inside it, and it contained different files. I opened "english", which had barely 128 words, added my words to it, saved, and done.


The English stop words are in a file within nltk/corpus/stopwords/english.txt (I guess it would be there; I don't have NLTK on this machine, so the best thing would be to search for 'english.txt' within the NLTK repo).

You can just add your new stop words in this file.

Also try looking at Bloom filters if your stop word list grows to a few hundred entries.
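
For example, here is a minimal pure-Python Bloom filter sketch; the class name, default sizes, and SHA-256-based hashing scheme are my own illustrative choices, not anything provided by NLTK:

import hashlib
import nltk

class BloomFilter:
    """Tiny illustrative Bloom filter: fast membership tests, may give false positives."""
    def __init__(self, size=4096, num_hashes=3):
        self.size = size
        self.num_hashes = num_hashes
        self.bits = [False] * size

    def _positions(self, word):
        # derive num_hashes bit positions from a single SHA-256 digest
        digest = hashlib.sha256(word.encode('utf-8')).hexdigest()
        for i in range(self.num_hashes):
            yield int(digest[i * 8:(i + 1) * 8], 16) % self.size

    def add(self, word):
        for pos in self._positions(word):
            self.bits[pos] = True

    def __contains__(self, word):
        # never a false negative; occasionally a false positive
        return all(self.bits[pos] for pos in self._positions(word))

bloom = BloomFilter()
for w in nltk.corpus.stopwords.words('english'):
    bloom.add(w)
print('the' in bloom)   # True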


I always do stopset = set(nltk.corpus.stopwords.words('english')) at the top of any module that needs it. Then it's easy to add more words to the set, plus membership checks are faster.
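
A minimal sketch of that pattern (the added words and sample input below are just illustrative):

import nltk

stopset = set(nltk.corpus.stopwords.words('english'))
stopset.update(['foo', 'bar'])   # add several custom stop words in one call
words = ['foo', 'this', 'example']
filtered = [w for w in words if w not in stopset]   # set membership check is O(1)
print(filtered)   # ['example']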


I was also looking for a solution to this. After some trial and error I managed to add words to the stoplist. Hope this helps.

from nltk.corpus import stopwords

def removeStopWords(str):
    # select English stopwords
    cachedStopWords = set(stopwords.words("english"))
    # add custom words
    cachedStopWords.update(('and','I','A','And','So','arnt','This','When','It','many','Many','so','cant','Yes','yes','No','no','These','these'))
    # remove stop words
    new_str = ' '.join([word for word in str.split() if word not in cachedStopWords])
    return new_str
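
A quick usage example (hypothetical input sentence):

print(removeStopWords('So this is a sentence and it has many stop words'))
# -> 'sentence stop words'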


I use this code for adding new stop words to the NLTK stop word list in Python:

from nltk.corpus import stopwords
#...#
stop_words = set(stopwords.words("english"))

#add words that aren't in the NLTK stopwords list
new_stopwords = ['apple','mango','banana']
new_stopwords_list = stop_words.union(new_stopwords)

print(new_stopwords_list)
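
For instance, applying the combined set to a hypothetical token list:

tokens = ['I', 'like', 'apple', 'and', 'mango']
print([t for t in tokens if t.lower() not in new_stopwords_list])   # ['like']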


I've found (Python 3.7, Jupyter notebook on Windows 10, corporate firewall) that creating a list of new words and passing it to the 'append' method results in that entire list being appended as a single element of the original stopwords list.

This makes 'stopwords' into a list of lists, so use 'extend' when adding multiple words at once.
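
A quick illustration with toy lists:

stopwords = ['i', 'me']
stopwords.append(['new', 'words'])    # -> ['i', 'me', ['new', 'words']]  (nested list)

stopwords = ['i', 'me']
stopwords.extend(['new', 'words'])    # -> ['i', 'me', 'new', 'words']    (flat list)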

Snijesh's answer works well, as does Jayantha's answer.


import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords

# add new words to the list
new_stopwords = ["new", "custom", "words", "add", "to", "list"]
stopwrd = nltk.corpus.stopwords.words('english')
stopwrd.extend(new_stopwords)


STOP_WORDS.add("Lol")  # add a new stop word as you wish (assumes STOP_WORDS is a set of stop words)

