Word tokenization using python regular expressions
I am trying to split strings into lists of "tags" in python. The splitting should handle strings such as "HappyBirthday" and remove most punctuation, but preserve hyphens and apostrophes. My starting point is:
tags = re.findall(r"([A-Z]{2,}(?=[A-Z]|$)|[A-Z][a-z]*)|\w+-\w+|[\w']+", s)
I would want to turn this sample data:
Jeff's dog is un-American SomeTimes! BUT NOTAlways
Into:
["Jeff's", 'dog', 'is', 'un-American', 'Some', 'Times', 'BUT', 'NOT', 'Always']
P.S. I am sorry my description isn't very good. I am not sure how to explain it, and have been mostly unsuccessful with google. I hope the example illustrates it properly.
Edit: I think I needed to be more precise, so also:
- if the word is hyphenated and capitalized, like 'UN-American', it should keep it as one word, so the output would be 'UN-American';
- if the hyphen has a space on either or both sides, a la 'THIS- is' or 'This - is', it should ignore the hyphen and produce ["THIS", "is"] and ["This", "is"] respectively;
- and similarly for an apostrophe: if it's in the middle of a word, like "What'sItCalled", it should produce ["What's", "It", "Called"].
I suggest the following:
re.findall(r"[A-Z]{2,}(?![a-z])|[A-Z][a-z]+(?=[A-Z])|[\'\w\-]+", s)
This yields for your example:
["Jeff's", 'dog', 'is', 'un-American', 'Some', 'Times', 'BUT', 'NOT', 'Always']
Explanation: The RegExp is made up of 3 alternatives:
- [A-Z]{2,}(?![a-z]) matches words with all letters capital
- [A-Z][a-z]+(?=[A-Z]) matches words with a first capital letter; the lookahead (?=[A-Z]) stops the match before the next capital letter
- [\'\w\-]+ matches all the rest, i.e. words which may contain ' and -
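Put together, a minimal runnable version of the above, using the sample string from the question:

```python
import re

# The three alternatives are tried left to right, so all-caps runs
# and CamelCase prefixes win over the catch-all third alternative.
pattern = r"[A-Z]{2,}(?![a-z])|[A-Z][a-z]+(?=[A-Z])|[\'\w\-]+"

s = "Jeff's dog is un-American SomeTimes! BUT NOTAlways"
print(re.findall(pattern, s))
# → ["Jeff's", 'dog', 'is', 'un-American', 'Some', 'Times', 'BUT', 'NOT', 'Always']
```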
To handle your edited cases, I'd modify phynfo's (+1) great answer to:
>>> s = """Jeff's UN-American Un-American un-American
SomeTimes! BUT NOTAlways This- THIS-
What'sItCalled someTimes"""
>>> re.findall(r"[A-Z\-\']{2,}(?![a-z])|[A-Z\-\'][a-z\-\']+(?=[A-Z])|[\'\w\-]+", s)
["Jeff's", 'UN-', 'American', 'Un-', 'American', 'un-American',
'Some', 'Times', 'BUT', 'NOT', 'Always', 'This-', 'THIS-',
"What's", 'It', 'Called', 'someTimes']
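Note that this pattern still leaves a dangling hyphen on tokens like 'UN-', 'This-' and 'THIS-'. One possible post-processing step (my own addition, not part of the pattern itself) is to strip hyphens and apostrophes from the token edges; this fixes the 'THIS- is' case, though it also splits 'UN-American' into 'UN' and 'American', so it is only a partial fix:

```python
import re

pattern = r"[A-Z\-\']{2,}(?![a-z])|[A-Z\-\'][a-z\-\']+(?=[A-Z])|[\'\w\-]+"

def tokenize(s):
    # strip() only touches the token edges, so interior characters
    # as in "un-American" or "Jeff's" are left untouched
    return [t.strip("-'") for t in re.findall(pattern, s)]

print(tokenize("THIS- is un-American"))
# → ['THIS', 'is', 'un-American']
```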
You have to clearly define the rules for the behavior you want. Tokenization isn't a definition by itself; you need something similar to phynfo's rules. E.g., you have a rule that 'NOTAlways' should go to 'NOT' and 'Always', and that hyphens should be preserved. Thus 'UN-American' is split up, just like 'UNAmerican' would be split up. You can try defining additional rules, but you have to be clear about which rule is applied when rules overlap.
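You can see that overlap concretely with the modified pattern from the previous answer: the all-caps rule claims the 'UN' prefix in both the hyphenated and unhyphenated word, so both split the same way.

```python
import re

# the modified pattern from the answer above
pattern = r"[A-Z\-\']{2,}(?![a-z])|[A-Z\-\'][a-z\-\']+(?=[A-Z])|[\'\w\-]+"

# the all-caps alternative matches 'UN-' / 'UN' first in both words,
# leaving 'American' for the catch-all alternative
print(re.findall(pattern, "UN-American UNAmerican"))
# → ['UN-', 'American', 'UN', 'American']
```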