tokenize in Python 3.x
I have the following code in Python 2.x:
from StringIO import StringIO
from tokenize import tokenize

class _CHAIN(object):
    def __init__(self, execution_context=None):
        self.execution_context = execution_context

    def eat(self, toktype, tokval, rowcol, line, logical_line):
        # some code and error checking
        pass

operations = _CHAIN(execution_context)
tokenize(StringIO(somevalue).readline, operations.eat)
Now the problem is that in Python 3.x this second argument does not exist; I would need to call the function operations.eat() myself for each token. One idea is to call operations.eat() directly before the 'tokenize' statement (the last line of the code), but I am not sure which arguments to pass. How can I perform the above task in Python 3.x? I'm sure there must be a better way to do it.
You're using a slightly odd legacy API where you pass the function a readline callable along with a callback that accepts the tokens. The new way is conceptually simpler, and works in both Python 2 and 3:
from tokenize import generate_tokens
from io import StringIO  # on Python 2: from StringIO import StringIO

for token in generate_tokens(StringIO(somevalue).readline):
    eat(token)
This is technically undocumented for Python 3, but unlikely to be taken away. The official tokenize function in Python 3 expects bytes rather than strings. There is a request for an official API to tokenize strings, but it seems to have stalled.
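For reference, the bytes-based entry point can be used like this (a minimal sketch; the source string here is just an illustration):

import io
import tokenize

source = b"x = 1 + 2\n"
# tokenize.tokenize() expects a readline that yields bytes and detects
# the source encoding itself (emitting an ENCODING token first).
for tok in tokenize.tokenize(io.BytesIO(source).readline):
    print(tok.type, tok.string, tok.start, tok.end)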
According to http://docs.python.org/py3k/library/tokenize.html, you should now use tokenize.tokenize(readline):
import tokenize
import io

class _CHAIN(object):
    def __init__(self, execution_context=None):
        self.execution_context = execution_context

    def eat(self, toktype, tokval, rowcol, line, logical_line):
        # some code and error checking
        print(toktype, tokval, rowcol, line, logical_line)

operations = _CHAIN(None)
readline = io.StringIO('aaaa').readline

# Python 2 way:
# tokenize.tokenize(readline, operations.eat)

# Python 3 way:
for token in tokenize.generate_tokens(readline):
    operations.eat(token[0], token[1], token[2], token[3], token[4])
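Since generate_tokens in Python 3 yields TokenInfo named tuples (which are still plain 5-tuples), the indexing above can also be written as direct unpacking; a small equivalent sketch, assuming a fresh readline:

# Each token is a 5-field tuple: (type, string, start, end, line).
for toktype, tokval, start, end, line in tokenize.generate_tokens(io.StringIO('aaaa').readline):
    operations.eat(toktype, tokval, start, end, line)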
If you are tokenizing natural-language text rather than Python source, NLTK's word_tokenize may be what you want; for example, a Russian preprocessing pipeline:

from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
import string
import pymorphy2
import re
import nltk

nltk.download('punkt')
nltk.download('stopwords')  # required for stopwords.words('russian')

reg = re.compile('[^а-яА-Я ]')     # keep only Cyrillic letters and spaces
morph = pymorphy2.MorphAnalyzer()  # morphological analyzer for Russian
stop_words = stopwords.words('russian')

def sentence(words):
    words = reg.sub('', words)                        # drop non-Cyrillic characters
    words = word_tokenize(words, language='russian')  # split into word tokens
    tokens = [i for i in words if i not in string.punctuation]
    tokens = [i for i in tokens if i not in stop_words]
    tokens = [morph.parse(word)[0].normal_form for word in tokens]  # lemmatize
    tokens = [i for i in tokens if i not in stop_words]  # re-filter after lemmatization
    return tokens

# df is assumed to be a pandas DataFrame with a 'text' column
df['text'] = df['text'].apply(str)
df['text'] = df['text'].apply(lambda x: sentence(x))
df['text'] = df['text'].apply(lambda x: " ".join(x))
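A quick usage sketch, assuming pymorphy2 and the NLTK data above are installed (the sample string is hypothetical):

# Prints the lemmatized, stopword-filtered tokens for one sentence.
print(sentence('Мама мыла раму'))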