
Encoding in python with lxml - complex solution

I need to download and parse a webpage with lxml and build UTF-8 XML output. I think a schema in pseudocode is more illustrative:

import urllib2
from lxml import etree

webfile = urllib2.urlopen(url)
# parse the response as HTML, recovering from broken markup
root = etree.parse(webfile, parser=etree.HTMLParser(recover=True))

# xpath() returns a list, so take the first match; tostring() needs a quoted encoding name
txt = my_process_text(etree.tostring(root.xpath('/html/body')[0], encoding='utf-8'))

output = etree.Element("out")
output.text = txt

outputfile.write(etree.tostring(output, encoding='utf-8'))

So webfile can be in any encoding (lxml should handle this). The output file has to be in UTF-8. I'm not sure where to specify the encoding. Is this schema OK? (I can't find a good tutorial about lxml and encoding, but I can find many problems with it...) I need a robust solution.

Edit:

So for sending UTF-8 to lxml I use:

from BeautifulSoup import UnicodeDammit  # BeautifulSoup 3 style (isHTML / .unicode)

# webfile holds the raw bytes of the page; this runs inside a per-page loop (hence the continue)
converted = UnicodeDammit(webfile, isHTML=True)
if not converted.unicode:
    print "ERR. UnicodeDammit failed to detect encoding, tried [%s]" % \
        ', '.join(converted.triedEncodings)
    continue
webfile = converted.unicode.encode('utf-8')


lxml can be a little wonky about input encodings. It is best to send UTF-8 in and get UTF-8 out.

You might want to use the chardet module or UnicodeDammit to decode the actual data.

You'd want to do something vaguely like:

import urllib2

import chardet
from lxml import html

content = urllib2.urlopen(url).read()
# guess the encoding from the raw bytes and normalize everything to UTF-8
encoding = chardet.detect(content)['encoding']
if encoding != 'utf-8':
    content = content.decode(encoding, 'replace').encode('utf-8')
doc = html.fromstring(content, base_url=url)
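
To get UTF-8 back out on the other end, a minimal sketch of the serialization step (assuming the doc parsed above; the output filename is just a placeholder):

# lxml serializes to UTF-8 bytes when asked explicitly
xml_bytes = html.tostring(doc, encoding='utf-8')
with open('output.xml', 'wb') as outputfile:  # hypothetical output path
    outputfile.write(xml_bytes)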

I'm not sure why you are moving between lxml and etree, unless you are interacting with another library that already uses etree?


lxml's encoding detection is weak.

However, note that the most common problem with web pages is the lack of (or the existence of incorrect) encoding declarations. It is therefore often sufficient to only use the encoding detection of BeautifulSoup, called UnicodeDammit, and to leave the rest to lxml's own HTML parser, which is several times faster.

I recommend detecting the encoding with UnicodeDammit and parsing with lxml. You can also use the Content-Type HTTP header (you need to extract charset=ENCODING_NAME from it) to detect the encoding more precisely.
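
For example, a minimal sketch of pulling charset=ENCODING_NAME out of the Content-Type header with urllib2 (http_charset is the variable used in the snippet below; an empty string means the header carried no charset):

import urllib2

response = urllib2.urlopen(url)
content = response.read()
# Content-Type typically looks like "text/html; charset=utf-8"
content_type = response.headers.get('content-type', '')
http_charset = ""
if 'charset=' in content_type:
    http_charset = content_type.split('charset=')[-1].strip()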

For this example I'm using BeautifulSoup 4 (you also have to install chardet for better autodetection, because UnicodeDammit uses chardet internally):

import lxml.html
from bs4 import UnicodeDammit

if http_charset == "":
    ud = UnicodeDammit(content, is_html=True)
else:
    ud = UnicodeDammit(content, override_encodings=[http_charset], is_html=True)
root = lxml.html.fromstring(ud.unicode_markup)

Or, to make the previous answer more complete, you can modify it to:

if ud.original_encoding != 'utf-8':
    content = content.decode(ud.original_encoding, 'replace').encode('utf-8')

Why is this better than simply using chardet?

  1. You do not ignore the Content-Type HTTP header:

    Content-Type:text/html; charset=utf-8

  2. You do not ignore the http-equiv meta tag. Example:

    ... http-equiv="Content-Type" content="text/html; charset=UTF-8" ...

  3. On top of this, you are using the power of chardet, the cjkcodecs and iconvcodec codecs, and many more.
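
Putting it together with the original goal (UTF-8 XML on the output side), a minimal end-to-end sketch; it assumes the content and http_charset variables from above, plus a hypothetical my_process_text function and output path:

import lxml.html
from lxml import etree
from bs4 import UnicodeDammit

# prefer the HTTP charset if present, otherwise let UnicodeDammit/chardet decide
ud = UnicodeDammit(content, override_encodings=[http_charset] if http_charset else [],
                   is_html=True)
root = lxml.html.fromstring(ud.unicode_markup)

# work in unicode inside the program...
body_text = root.xpath('/html/body')[0].text_content()
txt = my_process_text(body_text)  # hypothetical processing step

# ...and encode only when serializing the output
output = etree.Element("out")
output.text = txt
with open('out.xml', 'wb') as f:  # hypothetical output path
    f.write(etree.tostring(output, encoding='utf-8', xml_declaration=True))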
