Python re + codecs with ä, ö (Finnish): defining what counts as a word character
Is it possible to define that a specific language's characters are considered word characters? I.e., re does not accept ä, ö as word characters when I search for them in the following way:
Ft=codecs.open('c:\\Python27\\Scripts\\finnish2\\textfields.txt','r','utf-8')
word=Ft.readlines()
word=smart_str(word, encoding='utf-8', strings_only=False, errors='replace')
word=re.sub('[^äÄöÖåÅA-Za-z0-9]',"""\[^A-Za-z0-9]*""", word) ; print 'word= ', word #works in skipping ö,ä,å characters
I would like these characters to be included in [A-Za-z]. How do I define this?
[A-Za-z0-9]
will only match the characters listed there, but the docs also mention some other special constructs:

- \w, which stands for alphanumeric characters (namely [a-zA-Z0-9_] plus all Unicode characters which are declared to be alphanumeric)
- \W, which stands for all non-alphanumeric characters ([^a-zA-Z0-9_] plus non-alphanumeric Unicode characters)
- \d, which stands for digits
- \b, which matches word boundaries (including all rules from the Unicode tables)
So, you will want to (a) use these constructs instead (which are shorter and maybe easier to read), and (b) tell re that you want to "localize" those strings with the current locale by setting the UNICODE flag, like:
re_word = re.compile(r'\w+', re.U)
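A minimal sketch of what that compiled pattern then matches; the Finnish sample text is my own, not from the question:

```python
import re

# With re.UNICODE, \w covers letters beyond ASCII, so the Finnish
# ä, ö and å are treated as word characters.
re_word = re.compile(r'\w+', re.UNICODE)

print(re_word.findall(u'Hyvää päivää, maailma!'))
# ['Hyvää', 'päivää', 'maailma']
```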
For a start, you appear to be slightly confused about the args for re.sub.
The first arg is the pattern. You have '[^äÄöÖåÅA-Za-z0-9]' which matches each character which is NOT in the Finnish alphabet nor a digit.
The second arg is the replacement. You have """\[^A-Za-z0-9]*""" ... so each of those non-Finnish-alphanumeric characters is going to be replaced by the literal string \[^A-Za-z0-9]*. It's reasonable to assume that this is not what you want.
What do you want to do?
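To see that effect concretely, here is a sketch using a visible placeholder replacement instead of the question's; the sample string is my own:

```python
# -*- coding: utf-8 -*-
import re

# Every character outside the Finnish-alphanumeric class is replaced
# by the replacement text, taken literally.
print(re.sub(u'[^äÄöÖåÅA-Za-z0-9]', u'<NON-WORD>', u'hyvää yötä!'))
# hyvää<NON-WORD>yötä<NON-WORD>
```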
You need to explain your third line; after your first 2 lines, word will be a list of unicode objects, which is A Good Thing. However, the encoding= and the errors= arguments indicate that the unknown (to us) smart_str() is converting your lovely unicode back to UTF-8. Processing data in UTF-8 bytes instead of Unicode characters is EVIL, unless you know what you are doing.

What encoding directive do you have at the top of your source file?
Advice: Get your data into unicode. Work on it in unicode. All your string constants should have the u prefix; if you consider that too much wear and tear on your typing fingers, at least put it on the non-ASCII constants, e.g. u'[^äÄöÖåÅA-Za-z0-9]'. When you have done all the processing, encode your results for display or storage using an appropriate encoding.

When working with re, consider \w, which will match any alphanumeric character (and also the underscore) instead of listing out what is alphabetic in one language. Do use the re.UNICODE flag; docs here.
Something like this might do the trick:
pattern = re.compile("(?u)pattern")
or
pattern = re.compile("pattern", re.UNICODE)
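Both spellings enable the same Unicode-aware matching; a quick sketch with a sample word of my own:

```python
import re

# The inline (?u) flag and the re.UNICODE argument are equivalent.
p1 = re.compile(u'(?u)\\w+')
p2 = re.compile(u'\\w+', re.UNICODE)

print(p1.findall(u'ääkköset'), p2.findall(u'ääkköset'))
# ['ääkköset'] ['ääkköset']
```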