How can encode('ascii', 'ignore') throw a UnicodeDecodeError?

This line

data = get_url_contents(r[0]).encode('ascii', 'ignore')

produces this error

UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 11450: ordinal not in range(128)

Why? I assumed that, because I'm using 'ignore', it should be impossible to get decode errors when saving the output to a string variable.


Due to a quirk of Python 2, you can call encode on a byte string (i.e. text that's already encoded). In that case, Python first tries to convert it to a unicode object by decoding it with ascii. So, if get_url_contents is returning a byte string, your line effectively does this:

get_url_contents(r[0]).decode('ascii').encode('ascii', 'ignore')

In Python 3, byte strings don't have an encode method, so the same problem would just cause an AttributeError.
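The difference can be reproduced in Python 3 with a small sketch. Here get_url_contents is simulated by a UTF-8 byte string containing a non-ASCII character (the 0xc3 byte in your traceback is the first byte of a UTF-8 multi-byte sequence); the fix in either Python version is the same, decode with the real encoding before encoding to ASCII:

```python
# Simulate get_url_contents returning raw UTF-8 bytes (an assumption).
raw = "café".encode("utf-8")  # b'caf\xc3\xa9' — note the 0xc3 byte

# Calling .encode on bytes fails in Python 3 with AttributeError,
# instead of Python 2's implicit ascii-decode-then-encode.
try:
    raw.encode("ascii", "ignore")
except AttributeError as exc:
    print(exc)

# The fix: decode with the actual encoding first, then encode to ASCII.
text = raw.decode("utf-8")
ascii_only = text.encode("ascii", "ignore")
print(ascii_only)  # b'caf' — the é is dropped by 'ignore'
```

If the real encoding of the page isn't UTF-8, substitute whatever the HTTP headers or meta tags declare; 'ignore' then only applies to characters that genuinely have no ASCII equivalent.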

(Of course, I don't know that this is the problem - it could be related to the get_url_contents function. But what I've described above is my best guess.)
