translate by replacing words inside existing text
What are common approaches for translating certain words (or expressions) inside a given text, when the text must be reconstructed (with punctuation and everything)?
The translations come from a lookup table and cover words, collocations, and special expressions like L33t, CUL8R, :-), etc.
Simple string search-and-replace is not enough, since it can replace part of a longer word (replacing cat with dog should not turn caterpillar into dogerpillar).
Assume the following input:
s = "dogbert, started a dilbert dilbertion proces cat-bert :-)"
After translation, I should receive something like:
result = "anna, started a george dilbertion process cat-bert smiley"
I can't simply tokenize, since I lose the punctuation and word positions.
Regular expressions work for normal words, but they don't catch special expressions like the smiley :-):
re.sub(r'\bword\b','translation',s) ==> translation
re.sub(r'\b:-\)\b','smiley',s) ==> :-)
For now I'm using the regex above for words and a simple replace for the non-alphanumeric expressions, but it's far from bulletproof.
(P.S. I'm using Python.)
The reason your smiley example doesn't work with regex is that \b refers to a word boundary. Since there are no "word" characters in the smiley, there is no word boundary, so your expression doesn't match. You could use lookaheads/lookbehinds to check whether you are bounded by spaces, but checking against punctuation could be difficult, considering your smileys are themselves made of punctuation.
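A minimal sketch of the lookaround idea, using the :-) smiley from the question (the pattern here is illustrative, not the asker's actual code):

```python
import re

# Match ":-)" only when it sits at a whitespace or string boundary,
# instead of relying on \b (which needs word characters on one side).
pattern = re.compile(r'(?:^|(?<=\s))' + re.escape(':-)') + r'(?=\s|$)')

s = "dogbert, started a process :-)"
result = pattern.sub('smiley', s)
```

re.escape() is what keeps the smiley's punctuation from being interpreted as regex metacharacters.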
I had a similar problem replacing standard emoticons with values. Here is a list of emoticons. I keep them in a plain text file (so that I can append to or delete from it as required), separated by tabs:
:[ -1
:/ -1
:( -1
:) 1
Then read it into a dictionary
emoticons = {}
with open('data/emoticons.txt') as f:
    for line in f:
        symbol, value = line.split('\t')
        emoticons[symbol] = int(value)
Then a lookup function
import re

def mark_emoticons(t):
    for w, v in emoticons.items():
        # escape the emoticon so its punctuation is matched literally
        match = re.search(re.escape(w), t)
        if match:
            print(w, "found")
Call the function with
mark_emoticons('Hello ladies! How are you? Fantastic :) Look at your man ...')
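The function above only reports matches; a replacing variant is a small step further. A sketch, with a hypothetical two-entry table standing in for the file-loaded dictionary:

```python
import re

emoticons = {':)': 'smiley', ':(': 'frowny'}  # illustrative table

def replace_emoticons(t):
    # Substitute each emoticon in turn; re.escape keeps the
    # punctuation from being read as regex metacharacters.
    for symbol, replacement in emoticons.items():
        t = re.sub(re.escape(symbol), replacement, t)
    return t

replace_emoticons('Fantastic :) Look at your man :(')
```

Note this naive loop will also replace emoticons glued to other text; add lookarounds as discussed above if that matters.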
As for L33t-speak, I have a separate file slangs.txt, which looks like:
u you
ur you are
uw you are welcome
wb welcome back
wfm works for me
wtf what the fuck
A similar function reads it into a dictionary slangs{}, and another function replaces the slang:
def mark_slangs(t):
    for w, v in slangs.items():
        # \b anchors prevent matching inside longer words, and
        # re.escape keeps the slang from being read as a pattern
        s = r'\b' + re.escape(w) + r'\b'
        if re.search(s, t):
            t = re.sub(s, slangs[w].rstrip(), t)
    return t
From the Python documentation of re.escape():
re.escape(string) Return string with all non-alphanumerics backslashed; this is useful if you want to match an arbitrary literal string that may have regular expression metacharacters in it.
Based on your needs you might want to use re.findall()
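Another option is a single pass over the text: build one alternation from all the escaped keys and let re.sub pick each replacement via a callback. A sketch, assuming a small combined table (the entries are illustrative); word-like keys get \b anchors, punctuation-only keys (smileys) get whitespace lookarounds, since \b won't work for them:

```python
import re

table = {'dogbert': 'anna', 'dilbert': 'george', ':-)': 'smiley'}  # illustrative

def boundary(key):
    # Word-like keys: \b anchors. Punctuation keys: whitespace/edge
    # lookarounds, because \b needs \w characters to define a boundary.
    esc = re.escape(key)
    if re.match(r'\w', key) and re.search(r'\w$', key):
        return r'\b%s\b' % esc
    return r'(?:^|(?<=\s))%s(?=\s|$)' % esc

# Longest keys first, so overlapping entries prefer the longer match.
pattern = re.compile('|'.join(
    boundary(k) for k in sorted(table, key=len, reverse=True)))

def translate(t):
    return pattern.sub(lambda m: table[m.group(0)], t)

translate("dogbert, started a dilbert dilbertion proces cat-bert :-)")
```

Because all anchors are zero-width, m.group(0) is exactly the table key, so the callback lookup is safe; "dilbertion" is left alone.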
The problem is not that regexes can't match smileys (which is simply not true :P), but how your regular expression for that smiley is built.
The word boundary \b is described as follows in the Python documentation:
Matches the empty string, but only at the beginning or end of a word. A word is defined as a sequence of Unicode alphanumeric or underscore characters, so the end of a word is indicated by whitespace or a non-alphanumeric, non-underscore Unicode character. Note that formally, \b is defined as the boundary between a \w and a \W character (or vice versa).
The problem now is that symbols like :, - and ) are not word characters, so they never match \w. As a result, the position between the space and the smiley is not a word boundary at all (it lies between two non-word characters), which is why \b fails to match there.
So if you want to match smileys you can't use \b; check for whitespace (or the start/end of the string) instead.
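One compact way to do that check (a sketch, not the answerer's code) is with the (?<!\S) / (?!\S) idiom, which matches only where the smiley is not glued to a non-space character, covering both whitespace and string boundaries in one pattern:

```python
import re

s = "started a process :-)"
# (?<!\S): not preceded by a non-space char; (?!\S): not followed by one.
# Together they require a whitespace or string boundary on each side.
result = re.sub(r'(?<!\S)' + re.escape(':-)') + r'(?!\S)', 'smiley', s)
```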
If you're looking for a non-regex solution, then here is my idea. Here are the steps I would use.
Preparation:
- Create a dictionary linking the words to be replaced to their replacements.
- Create a ternary tree of the words to be replaced.
Searching and replacing:
- Split up the words by the spaces using split(). I use the term word to refer to a group of letters that doesn't contain a space.
- Iterate through all of the words
- Search for the word in the ternary tree - if a partial match is found, check that the rest of the word is punctuation (or at least not stuff that would make it not be a match).
- Replace the word using the dictionary look-up if it was found in the ternary tree
You can read about ternary search trees here. There are ternary search tree Python implementations, but you can make your own pretty simply. The main problem with this approach is punctuation before the word (like an opening "), but that can be dealt with easily.
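The steps above can be sketched roughly as follows; this is a minimal, assumed implementation (a bare-bones ternary search tree plus a space-splitting replacer that tolerates trailing punctuation), not a production version:

```python
class TSTNode:
    def __init__(self, ch):
        self.ch = ch
        self.lo = self.eq = self.hi = None
        self.word = None  # set to the full key when a word ends here

def tst_insert(node, key, i=0):
    ch = key[i]
    if node is None:
        node = TSTNode(ch)
    if ch < node.ch:
        node.lo = tst_insert(node.lo, key, i)
    elif ch > node.ch:
        node.hi = tst_insert(node.hi, key, i)
    elif i + 1 < len(key):
        node.eq = tst_insert(node.eq, key, i + 1)
    else:
        node.word = key
    return node

def tst_search(node, key):
    i = 0
    while node is not None and i < len(key):
        ch = key[i]
        if ch < node.ch:
            node = node.lo
        elif ch > node.ch:
            node = node.hi
        else:
            if i + 1 == len(key):
                return node.word
            node = node.eq
            i += 1
    return None

table = {'dogbert': 'anna', ':-)': 'smiley'}  # illustrative
root = None
for k in table:
    root = tst_insert(root, k)

def translate(text):
    out = []
    for token in text.split(' '):
        # Strip trailing punctuation so "dogbert," still matches "dogbert",
        # as the partial-match step above suggests.
        core = token.rstrip('.,!?;')
        tail = token[len(core):]
        if core and tst_search(root, core) is not None:
            token = table[core] + tail
        out.append(token)
    return ' '.join(out)
```

Joining on single spaces preserves the original positions and punctuation, which was the questioner's main constraint.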