python: csv.reader & unicode (and postgres)
I have a CSV containing a Unicode character (the Spanish ñ) that I'm trying to import into a UTF-8-encoded Postgres table. The following code:
reader = csv.reader(open(filename, 'r'), delimiter=',')
for row in reader:
    values = [None if x == '' else x for x in row]
    query = 'INSERT INTO %s.rosters VALUES(%s)' % (self.schema, ','.join(['%s'] * len(values)))
    self.executequery(query, values)
yields ERROR: invalid byte sequence for encoding "UTF8": 0xf1616461. So, changing it to:
reader = csv.reader(open(filename, 'r'), delimiter=',')
for row in reader:
    values = [None if x == '' else unicode(x, 'utf-8') for x in row]
    query = 'INSERT INTO %s.rosters VALUES(%s)' % (self.schema, ','.join(['%s'] * len(values)))
    self.executequery(query, values)
yields 'utf8' codec can't decode bytes in position 21-24: invalid data.
Is there any way to resolve this?
Update: The file was not UTF-8; it was Windows-1252. Changing the assignment of the values list to:
values = [None if x == '' else unicode(x, 'cp1252') for x in row]
Fixes the issue!
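The two error messages make sense together: in Windows-1252 the ñ is the single byte 0xF1, but in UTF-8 that same byte announces the start of a multi-byte sequence, so the bytes after it are rejected as invalid continuation bytes. A minimal sketch of the failure and the fix (the sample bytes here are illustrative, not from the original file):

# "peña" as a Windows-1252 editor would save it: ñ is the one byte 0xF1
raw = b'pe\xf1a'

try:
    raw.decode('utf-8')       # fails: 0xF1 starts a multi-byte UTF-8
except UnicodeDecodeError:    # sequence, and "a" is not a valid
    pass                      # continuation byte

text = raw.decode('cp1252')   # succeeds: every byte maps to one character

This is the same reason Postgres rejected the raw bytes (0xf1616461 is ñ followed by "ada") and why decoding as cp1252 before inserting fixes it.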
Do you know that the CSV file is encoded in UTF-8? If it is, you'd see something like this:
$ file foo.txt
foo.txt: UTF-8 Unicode text
If it doesn't say UTF-8, then you probably have to decode it with a different codec, such as ISO-8859-1 or Windows-1252.
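If files arrive in varying encodings, one option is to try codecs in order of strictness. This is a hedged sketch, not from the original post; decode_cell is a made-up helper name. UTF-8 must come first, since single-byte codecs like cp1252 accept almost any input and would otherwise mask genuine UTF-8 data:

def decode_cell(cell, codecs=('utf-8', 'cp1252', 'latin-1')):
    # Return the first successful decode; latin-1 never fails,
    # so it acts as the final fallback.
    for codec in codecs:
        try:
            return cell.decode(codec)
        except UnicodeDecodeError:
            continue
    raise ValueError('none of %r could decode %r' % (codecs, cell))

decode_cell(b'pe\xc3\xb1a')   # UTF-8 input decodes on the first try
decode_cell(b'pe\xf1a')       # Windows-1252 input falls through to cp1252

Note that guessing this way can misidentify an encoding; when you control the data source, pinning down the real codec (as the update above did) is more reliable.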