Character encoding issues when generating MD5 hash cross-platform

This is a general question about character encoding when using MD5 libraries in various languages. My concern is: suppose I generate an MD5 hash using a native Python string object, like this:

from hashlib import md5

message = "hello world"
m = md5()
m.update(message)  # works in Python 2; Python 3 raises TypeError because update() requires bytes

Then I take a hex version of that MD5 hash using:

m.hexdigest()

and send the message and its MD5 hash over the network, say as a JMS message or an HTTP request.

Now I get this message in a Java program in the form of a native Java string, along with the checksum. Then I generate an MD5 hash using Java, like this (using the Commons Codec library):

String md5 = org.apache.commons.codec.digest.DigestUtils.md5Hex(s);

My feeling is that this is wrong, because I have not specified a character encoding at either end. The original hash will be based on the bytes of the Python representation of the string, while the Java one will be based on the bytes of the Java representation, and these two byte sequences will often not be the same - is that right? So really I need to specify "UTF-8" (or whatever) at both ends, right?
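That intuition can be checked directly. The sketch below (Python 3, standard library only) hashes the same string under two different encodings; the byte sequences differ, so the digests differ:

```python
import hashlib

# Non-ASCII characters make the encoding mismatch visible;
# for pure ASCII, UTF-8 and Latin-1 happen to produce identical bytes.
message = "héllo wörld"

utf8_digest = hashlib.md5(message.encode("utf-8")).hexdigest()
utf16_digest = hashlib.md5(message.encode("utf-16-le")).hexdigest()

print(utf8_digest == utf16_digest)  # False: different bytes, different hashes
```

Note that for ASCII-only strings many common encodings coincide, which is one way this kind of bug stays intermittent: the checksum only fails for messages containing non-ASCII characters.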

(I am actually getting an intermittent error in my code where the MD5 checksum fails, and I suspect this is the reason - but because it's intermittent, it's difficult to say whether changing this fixes it or not.)

Thank you!


Yes, you must be explicit: an MD5 checksum is computed over a sequence of bytes, not characters, so you need a predictable translation of characters to bytes.


Yes, it is better to hash the same encoding at both ends. Decode the Python string to a unicode object before encoding it, though.
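Concretely, the fix is to pick one encoding (UTF-8 is the usual choice) and encode to it explicitly on the sending side before hashing. A minimal Python 3 sketch, which should match a Java side that hashes `s.getBytes(StandardCharsets.UTF_8)` via `DigestUtils.md5Hex`:

```python
import hashlib

def md5_hex(message: str) -> str:
    # Explicitly encode to UTF-8 so the hashed bytes are the same
    # on every platform, regardless of the runtime's default encoding.
    return hashlib.md5(message.encode("utf-8")).hexdigest()

print(md5_hex("hello world"))  # 5eb63bbbe01eeed093cb22bb8f5acdc3
```

On the Java side, the key is to avoid the `String` overload of `md5Hex` (whose behavior depends on the library version's default charset handling) and pass explicitly encoded bytes instead.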
