
Is it possible to detect which of two possible text file encodings was used?

I read How can I detect the encoding/codepage of a text file, and I understand it's not possible to detect an encoding in general. However, is it possible to detect whether the encoding is one of two allowed ones?

For example, I allow users to use Unicode UTF-8 and ISO-8859-2 for their CSV files. Is it possible to detect whether a file is the former or the latter?


For example, I allow users to use Unicode UTF-8 and ISO-8859-2 for their CSV files. Is it possible to detect whether a file is the former or the latter?

It's not possible with 100% accuracy because, for example, the bytes C3 B1 are just as valid a representation of "Ăą" in ISO-8859-2 as they are of "ñ" in UTF-8. In fact, because ISO-8859-2 assigns a character to all 256 possible bytes, every UTF-8 string is also a valid ISO-8859-2 string (representing different characters where non-ASCII bytes occur).
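You can see this ambiguity directly: the same two bytes decode successfully under both encodings, just to different characters. A quick Python illustration:

```python
# The two-byte sequence C3 B1 is valid in both encodings,
# so validity alone cannot distinguish them for this input.
data = bytes([0xC3, 0xB1])

as_utf8 = data.decode("utf-8")         # one code point: "ñ"
as_latin2 = data.decode("iso-8859-2")  # two code points: "Ăą"

print(repr(as_utf8), repr(as_latin2))
```

Neither decode raises an error, which is exactly why a file containing only such "coincidentally valid" sequences cannot be classified with certainty.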

However, the converse is not true. UTF-8 has strict rules about what sequences are valid. More than 99% of possible 8-octet sequences are not valid UTF-8. And your CSV files are probably much longer than that. Because of this, you can get good accuracy if you:

  1. Perform a UTF-8 validity check. If it passes, assume the data is UTF-8.
  2. Otherwise, assume it's ISO-8859-2.
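The two-step heuristic above can be sketched in a few lines of Python; `guess_encoding` is a hypothetical helper name, not part of any library:

```python
def guess_encoding(data: bytes) -> str:
    """Assume UTF-8 if the bytes validate as UTF-8; otherwise fall back
    to ISO-8859-2, in which every byte sequence is valid."""
    try:
        data.decode("utf-8")  # strict validity check
        return "utf-8"
    except UnicodeDecodeError:
        return "iso-8859-2"

print(guess_encoding("zażółć".encode("utf-8")))       # utf-8
print(guess_encoding("zażółć".encode("iso-8859-2")))  # iso-8859-2
```

Note that pure-ASCII input passes the UTF-8 check, which is harmless here since ASCII bytes mean the same thing in both encodings.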

However is it possible to detect whether encoding is one of two allowed?

UTF-32 (either byte order), UTF-8, and CESU-8 can be reliably detected by validation. UTF-16 can be detected by the presence of a BOM (but not by validation, since the only way for an even-length byte sequence to be invalid UTF-16 is to contain unpaired surrogates).
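BOM detection itself is a simple prefix check. One subtlety, shown in this sketch (the function name `sniff_bom` is my own), is that the UTF-32-LE BOM begins with the same two bytes as the UTF-16-LE BOM, so the longer signatures must be tried first:

```python
def sniff_bom(data: bytes):
    """Return an encoding name if the data starts with a known BOM, else None."""
    # Order matters: the UTF-32-LE BOM (FF FE 00 00) starts with
    # the UTF-16-LE BOM (FF FE), so check 4-byte signatures first.
    signatures = [
        (b"\xef\xbb\xbf", "utf-8-sig"),
        (b"\xff\xfe\x00\x00", "utf-32-le"),
        (b"\x00\x00\xfe\xff", "utf-32-be"),
        (b"\xff\xfe", "utf-16-le"),
        (b"\xfe\xff", "utf-16-be"),
    ]
    for bom, name in signatures:
        if data.startswith(bom):
            return name
    return None
```

Keep in mind a BOM is optional in UTF-8 and UTF-16 files, so `None` means only "no BOM found", not "not Unicode".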

If you have at least one "detectable" encoding, then you can check for the detectable encoding, and use the undetectable encoding as a fallback.

If both encodings are "undetectable", like ISO-8859-1 and ISO-8859-2, then it's more difficult. You could try a statistical approach like chardet uses.


Since it is impossible to detect the encoding in general, you still cannot detect it with certainty even when you limit it to two possible encodings.

The only thing I can think of is that you could try decoding it as one of the two possible encodings, but then you would have to check whether the result came out right. This would involve parsing the text, and even then you would not be 100% certain it was right.


Both of those encodings share the same meaning for all octets <128.

So you would need to look at octets >= 128 to make the determination. Since in UTF-8 octets >= 128 always occur in groups (sequences of two or more such octets encoding a single code point), a three-octet pattern {<128, >=128, <128} is an indication of ISO-8859-2.

If the file contains no, or very few, octets outside ASCII (i.e. <128), then determination will be impossible or limited. Of course, if the file starts with a UTF-8 encoded BOM (quite likely if it comes from Windows), then you know it is UTF-8.
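The isolated-high-octet pattern described above is easy to scan for. A minimal sketch (the helper name is hypothetical, and it deliberately ignores edge cases such as a high octet at the very start or end of the data):

```python
def has_isolated_high_octet(data: bytes) -> bool:
    """Return True if some octet >= 0x80 stands alone between two octets
    < 0x80 -- a pattern impossible in valid UTF-8, where high octets
    always appear in runs of two or more, and thus a hint of ISO-8859-2."""
    for i in range(1, len(data) - 1):
        if data[i] >= 0x80 and data[i - 1] < 0x80 and data[i + 1] < 0x80:
            return True
    return False
```

For example, the ISO-8859-2 bytes for "zażółć" contain high octets surrounded by ASCII, while the UTF-8 bytes for "señor" keep the two high octets of "ñ" adjacent.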

It is generally more reliable to use some metadata (as XML does with its declaration) than to rely on a heuristic, because it is always possible someone has sent you ISO-8859-3.


If you use a StreamReader, there is an overload which will detect the encoding if possible (from a BOM) but defaults to UTF-8 if detection fails.

I would suggest you offer two options (UTF-8 or Current) and, if the user selects Current, use

var encoding = Encoding.GetEncoding(
      CultureInfo.CurrentCulture.TextInfo.OEMCodePage);
// StreamReader has no constructor taking only an Encoding;
// pass the file path (or a Stream) together with the encoding.
var reader = new StreamReader(path, encoding);

which will hopefully be the right encoding.


See my (recent) answer to the linked question: How can I detect the encoding/codepage of a text file

This class will check whether it is possible that the file is UTF-8, and then attempt to guess whether that is probable.
