
What is the best way to determine/convert the encoding of an external HTML file?

I am parsing HTML from ~100 different domains. I could check which encoding each domain uses and handle each one individually, but that seems dumb.

Usually the encoding is in the header tags, yeah? But not always, I gather. So I may need to run some regex, or use some mb_ functions, or perhaps use cURL? All the examples I've found so far are for XML, and now I've got a headache.

Also, I am using the DOMDocument class to find what I want, and that all works great, except for the encoding.


According to the W3C internationalization guidelines, you should check these sources, in priority order, to determine the encoding of an HTML/XML document:

  • Content-Type header (from the HTTP response)
  • XML or XHTML declaration, e.g.: <?xml version="1.0" encoding="utf-8" ?>
  • meta tag with http-equiv="Content-Type" (in the HTML <head>)

In my experience, when all of that fails, you can assume the encoding is most probably ISO-8859-1 or CP1252. You can then convert the content to UTF-8 with the iconv library, e.g.: iconv("ISO-8859-1", "UTF-8", $content).

If you use the cURL library to fetch the URLs, you can get the Content-Type header with curl_getinfo($ch, CURLINFO_CONTENT_TYPE). The other declarations can be extracted with an XML/HTML parser or a regex.
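For example, here is a rough sketch of that priority order, assuming the page is fetched with cURL. The function name, the regexes, and the ISO-8859-1 fallback are illustrative choices, not a standard API:

<?php
// Walk the priority list: HTTP header, XML declaration, meta tag, fallback.
function detect_html_encoding($url)
{
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    $content = curl_exec($ch);

    // 1. Content-Type header from the HTTP response
    $contentType = curl_getinfo($ch, CURLINFO_CONTENT_TYPE);
    curl_close($ch);
    if ($contentType && preg_match('/charset=([\w-]+)/i', $contentType, $m)) {
        return array($content, strtoupper($m[1]));
    }

    // 2. XML/XHTML declaration at the very start of the document
    if (preg_match('/^<\?xml[^>]*encoding=["\']([\w-]+)["\']/i', $content, $m)) {
        return array($content, strtoupper($m[1]));
    }

    // 3. meta tag (http-equiv="Content-Type" or the HTML5 charset attribute)
    if (preg_match('/<meta[^>]+charset=["\']?([\w-]+)/i', $content, $m)) {
        return array($content, strtoupper($m[1]));
    }

    // 4. Nothing declared: fall back to the guess discussed above
    return array($content, 'ISO-8859-1');
}

// Usage: normalize everything to UTF-8 before feeding it to DOMDocument
list($html, $encoding) = detect_html_encoding('http://example.com/');
if ($encoding !== 'UTF-8') {
    $html = iconv($encoding, 'UTF-8', $html);
}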


You can parse a meta tag which any responsible programmer should have included in the <head> element.

<meta http-equiv="content-type" content="text/html;charset=utf-8" />

You can also choose to reject any HTML that does not declare a charset, either in the HTTP header or in a meta tag.
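Since the question already uses DOMDocument, here is a minimal sketch of reading the declared charset that way (the URL and variable names are placeholders):

<?php
$html = file_get_contents('http://example.com/');

$doc = new DOMDocument();
libxml_use_internal_errors(true);   // real-world HTML is rarely well formed
$doc->loadHTML($html);
libxml_clear_errors();

$charset = null;
foreach ($doc->getElementsByTagName('meta') as $meta) {
    // Old style: <meta http-equiv="content-type" content="text/html;charset=utf-8" />
    if (strtolower($meta->getAttribute('http-equiv')) === 'content-type'
        && preg_match('/charset=([\w-]+)/i', $meta->getAttribute('content'), $m)) {
        $charset = $m[1];
        break;
    }
    // HTML5 style: <meta charset="utf-8">
    if ($meta->getAttribute('charset') !== '') {
        $charset = $meta->getAttribute('charset');
        break;
    }
}

if ($charset === null) {
    // No declaration at all: reject or skip the document
    echo "No charset declared\n";
} else {
    echo "Declared charset: $charset\n";
}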
