
Why is Python's Hashlib not strongly typed?

Python is supposed to be strongly typed.

For instance: 'abc'['1'] won't work, because you're expected to provide an integer there, not a string. An error will be raised and you can go on and correct it.
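A quick illustration of that check (the exact wording of the error message varies between Python versions, but the type of error does not):

'abc'[1]    # works: returns 'b'
'abc'['1']  # fails immediately with a TypeError about string indices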

But that's not the case with hashlib. Indeed, try the following:

import hashlib
hashlib.md5('abc') #Works OK        

hashlib.md5(1) 
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: md5() argument 1 must be string or read-only buffer, not int

hashlib.md5(u'abc') #Works, but shouldn't: this is unicode, not str.

hashlib.md5(u'é')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeEncodeError: 'ascii' codec can't encode character u'\xe9' in position 0: ordinal not in range(128)

Of course, it does not fail with a TypeError but with a UnicodeEncodeError, which is the error raised when an attempt to encode a unicode object into a byte string fails.

I think I'm not far from the truth in guessing that hashlib silently attempted to encode the unicode object into a byte string.
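You can reproduce the failure outside hashlib; it is simply what an implicit encode with the default (ASCII) codec looks like in Python 2:

u'abc'.encode('ascii')  # works: pure ASCII, produces the same bytes as 'abc'
u'é'.encode('ascii')    # raises the very same UnicodeEncodeError as above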

Now, I agree that hashlib says the argument to hashlib.md5() should be a string or a read-only buffer, which a unicode string arguably is. But this goes to show that in practice it is not: hashlib.md5() will work properly with byte strings, and that's about it.

Of course, the main problem this causes is that you will get an exception for some unicode strings but not for others.

Which leads me to my questions. First, do you have an explanation as to why hashlib implements this behavior? Second, is it considered an issue? Third, is there a way to fix this without changing the module itself?

hashlib is only one example; several other modules behave the same way when given unicode strings, which puts you in the uncomfortable situation where your program works with ASCII input but fails completely on accented characters.


It's not just hashlib - Python 2 handles Unicode in a number of places by trying to encode it as ascii. This was one of the big changes made for Python 3.

In Python 3, strings are unicode, and they behave as you expect: there's no automatic conversion to bytes, and you have to encode them if you want to use bytes (e.g. for MD5 hashing). I believe there are hacks using sys.setdefaultencoding that enable this behaviour in Python 2, but I'd advise against using them in production, because they'll affect any code running in that Python instance.
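As a rough sketch, this is what the same calls look like in Python 3 (the exact wording of the TypeError differs between 3.x releases):

import hashlib

hashlib.md5('é')                  # raises a TypeError; no automatic encoding in Python 3
hashlib.md5('é'.encode('utf-8'))  # works: the encoding is chosen explicitly
hashlib.md5(b'abc')               # works: bytes are always accepted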


This is a result of the Python 2.x C API making it convenient to pass Unicode objects in to C APIs expecting a string.

See the PyArg_ParseTuple* call in _hashopenssl.c.

It will attempt to encode a Unicode object into a byte string when parsing it for the 's*' argument. If it cannot be encoded, that UnicodeEncodeError is raised. The correct thing to do is to always call .encode('utf-8') or whatever other codec your application demands before attempting to use anything Unicode in a context where only a raw byte stream makes sense.
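As a minimal sketch of that pattern (md5_hex is a hypothetical helper, not part of hashlib, and should behave the same under Python 2 and 3):

import hashlib

def md5_hex(text, encoding='utf-8'):
    # Encode explicitly so the bytes fed to md5 are well defined,
    # whether `text` arrives as unicode/str or as raw bytes.
    data = text if isinstance(text, bytes) else text.encode(encoding)
    return hashlib.md5(data).hexdigest()

md5_hex(u'é')  # consistent result, unlike passing u'é' to md5() directly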

Python 3.x fixes this. Instead of attempting any automatic encoding, you will always get a friendly:

TypeError: Unicode-objects must be encoded before hashing
