type enforcement on _ssl.sslwrap function params
The _ssl.sslwrap function appears to check whether the sock passed in is a subclass of _socket.socket. I am passing in a class that implements the interface of _socket.socket, and it gets mad because my socket isn't an actual subclass. Is this something I should fix on my side, or something I should raise with the python-dev guys?
Here is the code from ssl.SSLSocket.__init__ that is giving me grief:
    self._sslobj = _ssl.sslwrap(self._sock, server_side,
                                keyfile, certfile,
                                cert_reqs, ssl_version, ca_certs,
                                ciphers)
In my case, self._sock is an instance of my custom socket class.
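Roughly, the setup looks like this (FakeSocket and its methods are a simplified, hypothetical stand-in for my real class):

    import socket

    class FakeSocket(object):
        # Implements the pieces of the socket interface I need,
        # without subclassing _socket.socket.
        def __init__(self):
            self._real = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        def send(self, data, flags=0):
            return self._real.send(data, flags)
        def recv(self, bufsize, flags=0):
            return self._real.recv(bufsize, flags)
        def fileno(self):
            return self._real.fileno()
        def close(self):
            self._real.close()

    # Handing an instance of this to ssl.SSLSocket (as self._sock) fails
    # inside _ssl.sslwrap, since FakeSocket is not a _socket.socket subclass.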
UPDATE:
I am going to look into how Twisted does some of this stuff. My fake socket is getting too complicated. However, I am still curious why the _ssl module enforces the socket type the way it does.
I agree that explicitly enforcing the type hierarchy seems un-Pythonic and that you might want to ask the developers about that.
OTOH, I wonder if it has to do with _ssl and _socket being the implementation modules for ssl and socket. I haven't used ssl, and I've barely used socket, but do you routinely need to touch _ssl or _socket directly when using them?
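For instance (this is my understanding of Python 2.x, where the socket.socket wrapper keeps its C-level _socket.socket in a _sock attribute):

    import socket
    import _socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # The Python-level wrapper delegates to a C-level _socket.socket;
    # that inner object is what ssl.SSLSocket passes to _ssl.sslwrap.
    print(isinstance(s, _socket.socket))        # False: s is the wrapper
    print(isinstance(s._sock, _socket.socket))  # True: the C-level object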
Anyway, in the meantime, one workaround might be a proxy object that (a) inherits from _socket.socket (and thus does claim to be an isinstance of _socket.socket), yet (b) passes all its messages on to your actual "socket interface" compliant object.
I haven't tested this code, so I hope it is not an ignoble contribution:
    import _socket

    def socketify(socket_protocol_compliant_object):
        class proxy(_socket.socket):
            def __init__(self):
                # Deliberately skip _socket.socket.__init__ so the proxy
                # never allocates a real OS-level socket of its own.
                pass
            def __getattribute__(self, attr_name):
                # Forward every attribute lookup to the wrapped object.
                return getattr(socket_protocol_compliant_object, attr_name)
            def __setattr__(self, attr_name, new_value):
                # Forward attribute assignment as well.
                setattr(socket_protocol_compliant_object, attr_name, new_value)
        return proxy()
    self._sslobj = _ssl.sslwrap(socketify(self._sock), server_side, keyfile, ...)
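If it works, usage would look something like this (DuckSocket is just a hypothetical interface-compliant object for demonstration):

    import _socket

    class DuckSocket(object):
        # Any object implementing the socket interface; fileno() here
        # is just a placeholder.
        def fileno(self):
            return -1

    wrapped = socketify(DuckSocket())
    print(isinstance(wrapped, _socket.socket))  # True: passes the type check
    print(wrapped.fileno())                     # -1: forwarded to DuckSocket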
What do other Pythonists think? Is this a good idea?