Is there a way to force lxml to parse Unicode strings that specify an encoding in a tag?
I have an XML file that specifies an encoding, and I use UnicodeDammit to convert it to unicode (for reasons of storage, I can't store it as a byte string). I later pass it to lxml, but it refuses to ignore the encoding specified in the file and parse it as Unicode, and it raises an exception.
How can I force lxml to parse the document? This behaviour seems too restrictive.
You cannot parse from unicode strings AND have an encoding declaration in the string.
So either you give the parser an encoded byte string (since you apparently can't store it that way, you will have to re-encode it just before parsing), or you serialize the tree to unicode with lxml yourself via etree.tostring(tree, encoding=unicode), which comes WITHOUT an xml declaration. You can easily parse that result again with etree.fromstring.
see http://lxml.de/parsing.html#python-unicode-strings
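For illustration, a minimal sketch of that round trip (the element names here are made up; Python 2 spelling as in the rest of this answer): serializing with encoding=unicode drops the xml declaration, so the result can be fed straight back to etree.fromstring:

from lxml import etree

tree = etree.fromstring('<root><child/></root>')
unicode_xml = etree.tostring(tree, encoding=unicode)   # no xml declaration in the output
reparsed = etree.fromstring(unicode_xml)               # fine: nothing declares an encoding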
Edit: Apparently you already have the unicode string and can't control how it was produced. In that case you'll have to encode it again yourself, and tell the parser which encoding you used:
from lxml import etree

utf8_parser = etree.XMLParser(encoding='utf-8')

def parse_from_unicode(unicode_str):
    # re-encode to bytes; the explicit parser encoding then overrides the declaration
    s = unicode_str.encode('utf-8')
    return etree.fromstring(s, parser=utf8_parser)
This makes sure that whatever is inside the xml declaration gets ignored, because the parser will always use utf-8.
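A quick, hypothetical check of that behaviour (the document text is invented): the declaration claims latin-1, but since the bytes handed over are utf-8 and the parser is pinned to utf-8, the text comes through intact:

doc = u'<?xml version="1.0" encoding="latin-1"?><root>caf\xe9</root>'
root = parse_from_unicode(doc)
assert root.text == u'caf\xe9'   # the declared latin-1 was ignored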
Basically, the solution is to do:
if isinstance(mystring, unicode):
    mystring = mystring.encode("utf-8")
Seriously. Good job, lxml.
EDIT: It turns out that, in this instance, lxml autodetects the encoding incorrectly. It appears that I will have to manually search for and remove "charset" and "encoding" from the page.
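A rough sketch of what that manual cleanup could look like (the patterns below are my own guesses, not something lxml provides; adjust them to the pages at hand):

import re

def strip_encoding_hints(text):
    # drop the encoding pseudo-attribute from the xml declaration, if present
    text = re.sub(r'(<\?xml[^>]*?)\s+encoding\s*=\s*["\'][^"\']*["\']', r'\1', text, count=1)
    # drop <meta ... charset=...> hints that can mislead the parser
    text = re.sub(r'<meta[^>]*charset[^>]*>', '', text, flags=re.IGNORECASE)
    return text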
The solution is NOT re-encoding the string. The encoding declaration inside the string can say something other than UTF-8; don't blindly re-encode to UTF-8 and expect it to work all the time.
The solution is to just strip the encoding declaration. You already have a unicode string at hand; the declaration isn't needed anymore!
import re

# this regex is adapted from lxml/apihelpers.pxi
RE_XML_ENCODING = re.compile(
    ur'^(<\?xml[^>]+)\s+encoding\s*=\s*["\'][^"\']*["\'](\s*\?>|)', re.U)

# keep the rest of the declaration, drop only the encoding attribute
fixed_xml_string = RE_XML_ENCODING.sub(ur'\g<1>\g<2>', broken_xml_string, count=1)
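For example, with a hypothetical input whose declaration claims latin-1 (reusing RE_XML_ENCODING from above), the substitution leaves a declaration without the encoding attribute, which lxml will then parse from unicode without complaint:

from lxml import etree

broken_xml_string = u'<?xml version="1.0" encoding="latin-1"?><root/>'
fixed_xml_string = RE_XML_ENCODING.sub(ur'\g<1>\g<2>', broken_xml_string, count=1)
# fixed_xml_string == u'<?xml version="1.0"?><root/>'
root = etree.fromstring(fixed_xml_string)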
The worst case (where no xml encoding declaration is found) time complexity here is O(n), which is quite bad (but still better than blindly re-encoding the whole string to bytes), so I'm open to suggestions here.
PS: Some interesting analyses of the XML encoding problem:
default encoding for XML is UTF-8 or UTF-16?
How default is the default encoding (UTF-8) in the XML Declaration?
I had an existing implementation and I needed to have the tree. I also had an &nbsp; issue in a meta tag. Setting resolve_entities to False fixed that issue.
import urllib.request
from io import StringIO
from lxml import etree as ET   # resolve_entities is an lxml XMLParser option

opener = urllib.request.build_opener()
response = opener.open(url['url'])
raw_page = response.read()
response.close()

# strip the encoding declaration, then hand the parser a text stream
parsed_page = raw_page.replace(b'encoding="UTF-8"', b'')
parsed_page = StringIO(parsed_page.decode('ASCII'))

parser = ET.XMLParser(resolve_entities=False, encoding="ASCII")
tree = ET.parse(parsed_page, parser)
root = tree.getroot()