Python urllib over TOR? [duplicate]
Sample code:
#!/usr/bin/python
import socks
import socket
import urllib2
socks.setdefaultproxy(socks.PROXY_TYPE_SOCKS4, "127.0.0.1", 9050, True)
socket.socket = socks.socksocket
print urllib2.urlopen("http://almien.co.uk/m/tools/net/ip/").read()
TOR is running a SOCKS proxy on port 9050 (its default). The request goes through TOR, surfacing at an IP address other than my own. However, the TOR console gives this warning:
"Feb 28 22:44:26.233 [warn] Your application (using socks4 to port 80) is giving Tor only an IP address. Applications that do DNS resolves themselves may leak information. Consider using Socks4A (e.g. via privoxy or socat) instead. For more information, please see https://wiki.torproject.org/TheOnionRouter/TorFAQ#SOCKSAndDNS."
i.e. DNS lookups aren't going through the proxy. But that's what the 4th parameter to setdefaultproxy is supposed to do, right?
From http://socksipy.sourceforge.net/readme.txt:
setproxy(proxytype, addr[, port[, rdns[, username[, password]]]])
rdns - This is a boolean flag that modifies the behavior regarding DNS resolving. If it is set to True, DNS resolving will be performed remotely, on the server.
Same effect with both PROXY_TYPE_SOCKS4 and PROXY_TYPE_SOCKS5 selected.
It can't be a local DNS cache (if urllib2 even supports that) because it happens when I change the URL to a domain that this computer has never visited before.
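A minimal way to check where the lookup happens (a sketch, assuming Tor's SOCKS port is 9050; the host is just an example): connect with SocksiPy directly and pass the hostname rather than an IP. If Tor stops warning, the resolution is happening before the socket is used, i.e. inside urllib2/httplib rather than in SocksiPy.
# Sketch: hand the hostname straight to SocksiPy. If Tor no longer warns,
# the leak comes from the resolver that runs before the socket is used.
import socks

s = socks.socksocket()
s.setproxy(socks.PROXY_TYPE_SOCKS5, "127.0.0.1", 9050, True)  # rdns=True
s.connect(("check.torproject.org", 80))  # hostname, not an IP
s.send("GET / HTTP/1.0\r\nHost: check.torproject.org\r\n\r\n")
print s.recv(1024)
s.close()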
The problem is that httplib.HTTPConnection uses the socket module's create_connection helper function, which does the DNS request via the usual getaddrinfo call before connecting the socket.
The solution is to make your own create_connection function and monkey-patch it into the socket module before importing urllib2, just like we do with the socket class.
import socks
import socket

def create_connection(address, timeout=None, source_address=None):
    # Pass the (host, port) pair straight to SocksiPy so the hostname
    # is resolved remotely by the proxy instead of via getaddrinfo.
    sock = socks.socksocket()
    sock.connect(address)
    return sock

socks.setdefaultproxy(socks.PROXY_TYPE_SOCKS5, "127.0.0.1", 9050)

# patch the socket module
socket.socket = socks.socksocket
socket.create_connection = create_connection

import urllib2

# Now you can go ahead and scrape those shady darknet .onion sites
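With both patches in place, urllib2 hands the hostname to the proxy instead of resolving it locally. A quick check (a sketch; the URL is just an example):
print urllib2.urlopen("http://check.torproject.org/").read()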
The problem is that you are importing urllib2 before you set up the SOCKS connection. Try this instead:
import socks
import socket
socks.setdefaultproxy(socks.PROXY_TYPE_SOCKS4, '127.0.0.1', 9050, True)
socket.socket = socks.socksocket
import urllib2
print urllib2.urlopen("http://almien.co.uk/m/tools/net/ip/").read()
Manual request example:
import socks
import urlparse

SOCKS_HOST = 'localhost'
SOCKS_PORT = 9050
SOCKS_TYPE = socks.PROXY_TYPE_SOCKS5

url = 'http://www.whatismyip.com/automation/n09230945.asp'
parsed = urlparse.urlparse(url)

socket = socks.socksocket()
socket.setproxy(SOCKS_TYPE, SOCKS_HOST, SOCKS_PORT)
socket.connect((parsed.netloc, 80))

socket.send('''GET %(uri)s HTTP/1.1
host: %(host)s
connection: close

''' % dict(
    uri=parsed.path,
    host=parsed.netloc,
))

print socket.recv(1024)
socket.close()
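Note that a single recv(1024) only returns the first kilobyte or so of the response. Since the request above sends "connection: close", one way to read the whole reply is to loop until the server closes the socket (a small sketch in the same style):
# Read until EOF; the server closes the connection because of "connection: close".
response = ''
while True:
    chunk = socket.recv(4096)
    if not chunk:
        break
    response += chunk
print response
socket.close()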
I've published an article with complete source code showing how to use urllib2 + SOCKS + Tor at http://blog.databigbang.com/distributed-scraping-with-multiple-tor-circuits/
Hope it solves your issues.