Converting a string that contains both UTF-8 encoded bytestrings and codepoints to a UTF-8 encoded string
I'm getting a JSON response from an API that looks like this:
{"excerpt":"...where we\u00e2\u0080\u0099ll just have to wait and see, I\u00e2\u0080\u0099m sure official announcements with start flowing in the coming months \u2013special\u2013..."}
This is the raw JSON response returned by the API call. As you can see, the document uses codepoint escapes, which is what you'd expect when transferring Unicode data. But the API is returning the wrong codepoints: the 'excerpt' starts as "...where we’ll..." at the original source this excerpt was taken from, and the \u00e2\u0080\u0099 sequence is used to represent the ’ (right single quotation mark) character, whose codepoint is actually \u2019; \xe2\x80\x99 is its UTF-8 encoded bytestring. So the API is escaping the individual UTF-8 bytes instead of the codepoint. The other problem is that the same response also contains correct codepoints, such as \u2013 (the en dash), which makes it hard for my code to handle both cases.
Eventually I have to fetch some fields from this response (probably using json.loads, which converts \u00e2\u0080\u0099 to \xe2\x80\x99 but does nothing to \u2013), concatenate those fields and send the result to another library, which eventually uses urllib.urlencode to encode the result as a valid UTF-8 URL parameter for sending it to another API.
So here is my question: is there a way to convert a string that contains both UTF-8 bytestrings and Unicode codepoints (the result of doing json.loads) into a string that contains only codepoints or only UTF-8 bytestrings, so I can use it with urllib.urlencode? Or maybe there is a solution to apply before json.loads? Note: I'm using Python 2.6.1.
I have already contacted the API owners and told them they should use valid codepoints instead of bytestrings, but I'm not sure when they will get back to me, so I'm trying to come up with a solution for the current situation.
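To make the pipeline concrete, here is a minimal sketch of what I mean (the shortened sample string and the variable/parameter names are just for illustration), on Python 2.6:
import json
import urllib
# Shortened sample of the raw response text
raw = '{"excerpt": "we\\u00e2\\u0080\\u0099ll wait \\u2013special\\u2013"}'
data = json.loads(raw)
text = data['excerpt']  # u'we\xe2\x80\x99ll wait \u2013special\u2013'
# urlencode needs bytes, so the text has to be encoded first; the mojibake
# characters end up double-encoded (%C3%A2%C2%80%C2%99) instead of %E2%80%99
print urllib.urlencode({'q': text.encode('utf-8')})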
Any help will be appreciated.
You could use a regular expression to identify "UTF-8-like" sequences of characters in the U+0080–U+00FF range and convert them into the correct Unicode character. Each such run is really a UTF-8 byte sequence that was decoded one byte at a time, so encoding it back to Latin-1 (which maps codepoints U+0000–U+00FF one-to-one to bytes) and decoding the result as UTF-8 recovers the intended character:
import re
# Raw text still containing the literal \uXXXX escapes from the JSON response
D = {"excerpt":"...where we\u00e2\u0080\u0099ll just have to wait and see, I\u00e2\u0080\u0099m sure official announcements with start flowing in the coming months \u2013special\u2013..."}
s = D['excerpt']
print s
# Turn the \uXXXX escapes into actual Unicode characters
s = s.decode('unicode-escape')
print s
# Re-encode each mis-decoded UTF-8 run back to bytes via latin1, then decode it as UTF-8
print re.sub(ur'[\xc2-\xf4][\x80-\xbf]+', lambda m: m.group(0).encode('latin1').decode('utf8'), s)
Output:
...where we\u00e2\u0080\u0099ll just have to wait and see, I\u00e2\u0080\u0099m sure official announcements with start flowing in the coming months \u2013special\u2013...
...where weâll just have to wait and see, Iâm sure official announcements with start flowing in the coming months –special–...
...where we’ll just have to wait and see, I’m sure official announcements with start flowing in the coming months –special–...
Update...
From your comment, the dictionary values are already Unicode strings, so the \u2013 characters print correctly (see the first print output below) and the decode('unicode-escape') step can be skipped. The re.sub statement still works:
import re
# The values are already Unicode, with the mojibake present as U+00E2 U+0080 U+0099 etc.
D = {u'excerpt':u'...where we\xe2\x80\x99ll just have to wait and see, I\xe2\x80\x99m sure official announcements with start flowing in the coming months \u2013special\u2013...'}
s = D[u'excerpt']
print s
# Same fix: the latin1 round trip turns the mis-decoded runs back into real characters
print re.sub(ur'[\xc2-\xf4][\x80-\xbf]+', lambda m: m.group(0).encode('latin1').decode('utf8'), s)
Output:
...where weâll just have to wait and see, Iâm sure official announcements with start flowing in the coming months –special–...
...where we’ll just have to wait and see, I’m sure official announcements with start flowing in the coming months –special–...
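Once the string has been cleaned up this way, it can be encoded to UTF-8 and passed to urllib.urlencode; a rough sketch (the helper name and the parameter name are just examples):
import re
import urllib

def fix_mojibake(s):
    # Re-interpret runs of Latin-1-decoded UTF-8 as the characters they were meant to be
    return re.sub(ur'[\xc2-\xf4][\x80-\xbf]+',
                  lambda m: m.group(0).encode('latin1').decode('utf8'), s)

cleaned = fix_mojibake(D[u'excerpt'])
print urllib.urlencode({'excerpt': cleaned.encode('utf-8')})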