
json.dumps throws UnicodeDecodeError

I had text of unknown encoding in an MS ACCESS data table, because the text inputs originated from people copying and pasting from Word documents.

So when I attempted:

final_data_to_write = json.dumps(list_of_text_lines) 

The error output was:

 "UnicodeDecodeError: 'utf8' codec can't decode byte 0xe1 in position 5: unexpected end of data"


You need to find out which character encoding is used in your database, and then tell the JSON encoder to use that encoding to interpret your byte strings.

final_data_to_write = json.dumps(myDict, encoding="XXX")

The default encoding assumed by the json module is UTF-8. (The encoding argument exists in Python 2 only; it was removed in Python 3.)
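A minimal sketch, assuming the table text turns out to be cp1252 (the encoding and the sample strings are assumptions; substitute your own):

import json

# Byte strings as they might come out of the Access table (cp1252).
list_of_text_lines = ['Malag\xe1', 'na\xefve caf\xe9']

# Tell the Python 2 json module which codec the byte strings use.
final_data_to_write = json.dumps(list_of_text_lines, encoding='cp1252')
print final_data_to_write  # ["Malag\u00e1", "na\u00efve caf\u00e9"]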


The conversion from Access to Excel should have preserved your data as Unicode. If all of the Unicode text is encodable in your "ANSI code page" (probably cp1252, but don't guess), then Excel's save-as-CSV won't have mangled anything; if not, you'd get ? characters replacing the non-encodable characters, and those wouldn't cause your current problem.

Things to do:

(1) Find out what your "ANSI code page" is:

On my machine:

command_prompt>\python27\python -c"import locale;print locale.setlocale(locale.LC_ALL,'')"
English_Australia.1252

So mine is cp1252.
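Equivalently, from within Python (a sketch using the standard locale module):

# Python 2: report the code page used for byte strings on Windows.
import locale
print locale.getpreferredencoding()  # e.g. 'cp1252'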

(2) Try json.dumps(myDict, encoding='cpXXXX')

(3) If that fails, you need to look at your data and your CSV-to-JSON code to see whether you are mangling something somewhere. Insert some debugging code to output the line numbers of any lines that contain non-ASCII characters (the test for that is any(c >= '\x80' for c in line); see the sketch after this list), then look at those lines in a text editor and check whether they make sense in your environment.
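A minimal version of that debugging code, assuming the CSV has already been read into a list of byte strings named lines (the variable name is hypothetical):

# Print the line number and repr of every line containing a byte
# outside the ASCII range, so the suspect rows can be inspected.
for lineno, line in enumerate(lines, 1):
    if any(c >= '\x80' for c in line):
        print '%d: %r' % (lineno, line)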


Alternatively, loop over the text lines and decode each one as follows:

row1 = unicode(list_of_text_lines[j], errors='ignore')
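As a runnable sketch of that loop: with no encoding argument, unicode() assumes ASCII, so errors='ignore' silently drops every non-ASCII byte; this rescues the dump at the cost of losing those characters.

# Decode each byte string, silently discarding any byte that is
# not valid ASCII, then dump the cleaned list.
cleaned_lines = [unicode(line, errors='ignore')
                 for line in list_of_text_lines]
final_data_to_write = json.dumps(cleaned_lines)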
