python - regex matching a word list
I have a Python script with probably 100 or so regex lines, each line matching certain words.
The script obviously consumes up to 100% CPU each time it is run (I basically pass a sentence to it and it returns any matched words found).
I want to combine these into around 4 or 5 different "compiled" regex parsers such as:
>>> words = ('hello', 'good-bye', 'red', 'blue')
>>> pattern = re.compile('(' + '|'.join(words) + ')', re.IGNORECASE)
How many words can I safely have in this and would it make a difference? Right now if I run a loop on a thousand random sentences it processes maybe 10 a second, looking to drastically increase this speed so it does like 500 a second (if possible).
Also, is it possible to use a list like this?
>>> words = (r'\d{4,4}\.\d{2,2}\.\d{2,2}', r'\d{2,2}\s\d{2,2}\s\d{4,4}\.')
>>> pattern = re.compile('(' + '|'.join(words) + ')', re.IGNORECASE)
>>> print pattern.findall("Today is 2010 11 08")
Your algorithm here is basically O(N*M*L) per sentence (where N is the length of the sentence, M is the number of words you're looking for, and L is the length of the longest word). Using a regex won't speed this up any more than just using find. The only thing a regex gives you is the ability to match patterns like your second example.
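For that second example, the combined-pattern approach from the question does work; a quick sketch (note the raw strings so the backslashes survive, and a dotted date in the test sentence, since the space-separated sentence in the question matches neither pattern as written: the first needs dots and the second needs a four-digit year followed by a period):

```python
import re

# Combined pattern built from the question's two date regexes
words = (r'\d{4,4}\.\d{2,2}\.\d{2,2}', r'\d{2,2}\s\d{2,2}\s\d{4,4}\.')
pattern = re.compile('(' + '|'.join(words) + ')', re.IGNORECASE)

print(pattern.findall("Today is 2010.11.08"))  # → ['2010.11.08']
print(pattern.findall("Today is 2010 11 08"))  # → [] (no alternative matches)
```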
If you want to just find words, a Trie would be a much better approach. The implementation is really easy, too:
TERMINAL = 'TERMINAL'  # Marks the end of a word

def build(*words):
    trie = {TERMINAL: False}
    for word in words:
        pt = trie
        for ch in word:
            pt = pt.setdefault(ch, {TERMINAL: False})
        pt[TERMINAL] = True
    return trie
def find(input, trie):
    results = []
    for i in range(len(input)):
        pt = trie
        for j in range(i, len(input) + 1):
            if pt[TERMINAL]:
                results.append(input[i:j])
            if j >= len(input) or input[j] not in pt:
                break
            pt = pt[input[j]]
    return results
This returns all words in your sentence that are in the trie. Running time is O(N*L) per sentence, meaning that you can add as many words as you want without slowing down the algorithm.
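Putting it together, here is a minimal self-contained run of the trie approach (the definitions are repeated so the snippet runs on its own; the word list is made up for illustration):

```python
TERMINAL = 'TERMINAL'  # Marks the end of a word

def build(*words):
    # Root node also carries a TERMINAL flag so find() can test it safely
    trie = {TERMINAL: False}
    for word in words:
        pt = trie
        for ch in word:
            pt = pt.setdefault(ch, {TERMINAL: False})
        pt[TERMINAL] = True
    return trie

def find(text, trie):
    results = []
    for i in range(len(text)):          # try a match starting at every position
        pt = trie
        for j in range(i, len(text) + 1):
            if pt[TERMINAL]:            # a whole word ends here
                results.append(text[i:j])
            if j >= len(text) or text[j] not in pt:
                break
            pt = pt[text[j]]
    return results

trie = build('hello', 'red', 'blue')
print(find("hello, red and blue balloons", trie))  # → ['hello', 'red', 'blue']
```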