How to know the encoding of a file in Python? [duplicate]
Does anybody know how to get the encoding of a file in Python? I know you can use the codecs module to open a file with a specific encoding, but you have to know it in advance.
import codecs
f = codecs.open("file.txt", "r", "utf-8")
Is there a way to detect automatically which encoding is used for a file?
Thanks in advance
Edit: Thanks everybody for the very interesting answers. You may also be interested in http://whatismyencoding.com/ which is based on chardet (moreover, the site is powered by the Bottle Python framework).
Unfortunately, there is no 'correct' way to determine the encoding of a file by looking at the file itself. This is a universal problem, not limited to Python or any particular file system.
If you're reading an XML file, the first line in the file might give you a hint of what the encoding is.
Otherwise, you will have to use a heuristics-based approach like chardet (one of the solutions given in other answers), which tries to guess the encoding by examining the raw bytes of the file. If you're on Windows, I believe the Windows API also exposes methods to try to guess the encoding based on the data in the file.
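A minimal sketch of the chardet approach (the file name and its contents are made up for illustration; chardet is third-party, `pip install chardet`):

```python
import chardet  # third-party: pip install chardet

# Write a small Latin-1 sample file so the snippet is self-contained.
with open("sample.txt", "wb") as f:
    f.write("Déjà vu, señor: más café".encode("latin-1"))

# Open in binary mode: chardet works on raw bytes, not decoded text.
with open("sample.txt", "rb") as f:
    raw = f.read()

guess = chardet.detect(raw)  # e.g. {'encoding': 'ISO-8859-1', 'confidence': 0.73, ...}
encoding = guess["encoding"] or "latin1"  # fall back if chardet gives up
text = raw.decode(encoding)
```

Note that `detect` returns a guess with a confidence score, not a certainty; on short inputs the guess can be wrong, which is why a fallback is worth having.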
You may use the BOM (http://en.wikipedia.org/wiki/Byte_order_mark) to detect the encoding, or try this library:
https://github.com/chardet/chardet
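For the BOM route, here is a stdlib-only sketch (the helper name `detect_bom` is my own) that maps the well-known byte-order marks to codec names:

```python
import codecs

# Longer BOMs must be checked first: the UTF-32-LE BOM (ff fe 00 00)
# starts with the UTF-16-LE BOM (ff fe).
BOMS = [
    (codecs.BOM_UTF8, "utf-8-sig"),
    (codecs.BOM_UTF32_LE, "utf-32-le"),
    (codecs.BOM_UTF32_BE, "utf-32-be"),
    (codecs.BOM_UTF16_LE, "utf-16-le"),
    (codecs.BOM_UTF16_BE, "utf-16-be"),
]

def detect_bom(raw: bytes):
    """Return the encoding implied by a leading BOM, or None if there is none."""
    for bom, name in BOMS:
        if raw.startswith(bom):
            return name
    return None
```

Keep in mind that a BOM is optional for UTF-8 and most files don't have one, so `None` just means "no BOM", not "not Unicode".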
Here is a small snippet to help you guess the encoding. It distinguishes latin1 from utf8 quite well, and converts a byte string to a Unicode string.
# Attention: the order of encoding_guess_list is important. "latin1" can decode
# any byte sequence, so it must come last or it would always succeed.
encoding_guess_list = ['utf8', 'latin1']

def try_unicode(data, errors='strict'):
    """Convert a byte string to str, guessing between utf8 and latin1."""
    if isinstance(data, str):
        return data
    assert isinstance(data, bytes), repr(data)
    for enc in encoding_guess_list:
        try:
            return data.decode(enc, errors)
        except UnicodeError:
            continue
    raise UnicodeError('Failed to convert %r' % data)
def test_try_unicode():
    for start, should in [
        (b'\xfc', 'ü'),
        (b'\xc3\xbc', 'ü'),
        (b'\xbb', '\xbb'),  # postgres/psycopg2 latin1: RIGHT-POINTING DOUBLE ANGLE QUOTATION MARK
    ]:
        result = try_unicode(start, errors='strict')
        if result != should:
            raise Exception('Error: start=%r should=%r result=%r' % (
                start, should, result))
There is Unicode Dammit from Beautiful Soup, which uses chardet but adds a couple of extra features.
It tries to read the encoding from inside XML or HTML files. Then it tries looking for a BOM or something like that at the start of the file. If it can't do that, it makes use of chardet.
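A minimal sketch of using it (the sample markup is made up; requires beautifulsoup4, `pip install beautifulsoup4`):

```python
from bs4 import UnicodeDammit  # third-party: pip install beautifulsoup4

# Made-up HTML fragment that declares its own encoding in a meta tag.
data = "<html><head><meta charset='iso-8859-1'></head>¡Hola!</html>".encode("iso-8859-1")

dammit = UnicodeDammit(data)
print(dammit.original_encoding)  # the encoding it settled on
print(dammit.unicode_markup)     # the decoded text
```

Because the encoding is declared inside the markup, Unicode Dammit can pick it up without any statistical guessing; chardet only comes into play when no declaration or BOM is found.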
#!/usr/bin/python
"""
Detect the encoding of each input line and convert it to UTF-8.
Useful for looking at logs with mixed encodings (e.g. from mail systems).
"""
import sys
import chardet

for raw in sys.stdin.buffer:  # read raw bytes, line by line
    guess = chardet.detect(raw)
    text = None
    if guess['confidence'] > 0.3 and guess['encoding']:
        try:
            text = raw.decode(guess['encoding'])
        except (UnicodeError, LookupError):
            pass
    if text is not None:
        sys.stdout.write(text)
    else:
        sys.stdout.write(raw.decode('latin1'))  # latin1 accepts any byte