
python urllib2: connection reset by peer

I have a Perl program that retrieves data from the database of my university library, and it works well. Now I want to rewrite it in Python, but I keep running into the error <urlopen error [Errno 104] Connection reset by peer>.

The Perl code is:

    my $ua = LWP::UserAgent->new;
    $ua->cookie_jar( HTTP::Cookies->new() );
    $ua->timeout(30);
    $ua->env_proxy;
    my $response = $ua->get($url); 

The Python code I wrote is:

    import urllib2
    from cookielib import CookieJar

    cj = CookieJar()
    opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
    urllib2.install_opener(opener)  # install_opener() returns None, so don't assign its result
    request = urllib2.Request(url)  # url: target web page
    data = urllib2.urlopen(request)

I use a VPN (virtual private network) to log in to my university's library from home, and I tried both the Perl and the Python code. The Perl code works as I expected, but the Python code always fails with the "urlopen error".

I googled the problem, and it seems that urllib2 fails to pick up the proxy from the environment. But according to the urllib2 documentation, urlopen() works transparently with proxies that do not require authentication. Now I feel quite confused. Can anybody help me with this problem?
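As a sanity check (this is not part of my original code): urllib2's default ProxyHandler is built from whatever urllib.getproxies() returns, so printing that shows which environment proxies it will actually use:

    import urllib

    # urllib2 builds its default ProxyHandler from urllib.getproxies(),
    # so this shows which environment proxies it will pick up.
    print urllib.getproxies()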


I tried faking the User-Agent header as Uku Loskit and Mikko Ohtamaa suggested, and that solved my problem. The code is as follows:

    proxy = "YOUR_PROXY_GOES_HERE"
    proxies = {"http":"http://%s" % proxy}
    headers={'User-agent' : 'Mozilla/5.0'}
    proxy_support = urllib2.ProxyHandler(proxies)
    opener = urllib2.build_opener(proxy_support, urllib2.HTTPHandler(debuglevel=1))
    urllib2.install_opener(opener)

    req = urllib2.Request(url, None, headers)
    html = urllib2.urlopen(req).read()
    print html

Hope it is useful for someone else!


Firstly, as Steve said, you need response.read(), but that's not your problem:

    import urllib2
    response = urllib2.urlopen('http://python.org/')
    html = response.read()

Can you give details of the error? You can get it like this:

    try:
        urllib2.urlopen(req)
    except urllib2.HTTPError, e:
        print e.code      # HTTP status code returned by the server
        print e.read()    # body of the error page
    except urllib2.URLError, e:
        print e.reason    # network-level failure, e.g. connection reset by peer

Source: http://www.voidspace.org.uk/python/articles/urllib2.shtml

(I put this in a comment but it ate my newlines)


You might find that the requests module is a much easier-to-use replacement for urllib2.
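For example, something like this (a rough sketch, assuming url is your target page and requests is installed):

    import requests

    # A rough equivalent of the Perl LWP setup above. A Session keeps cookies
    # across requests, and requests honours the http_proxy/https_proxy
    # environment variables by default, much like LWP's env_proxy.
    session = requests.Session()
    session.headers.update({'User-Agent': 'Mozilla/5.0'})
    response = session.get(url, timeout=30)
    html = response.text
    print html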


Did you try specifying the proxy manually?

    import urllib2

    proxy = urllib2.ProxyHandler({'http': 'your_proxy_ip'})
    opener = urllib2.build_opener(proxy)
    urllib2.install_opener(opener)
    urllib2.urlopen('http://www.uni-database.com')

If it still fails, try faking your User-Agent header so the request appears to come from a real browser.
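For example (a minimal sketch; url is the page you are fetching):

    headers = {'User-Agent': 'Mozilla/5.0'}  # pretend to be a browser
    req = urllib2.Request(url, None, headers)
    html = urllib2.urlopen(req).read()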
