
How to keep an HTTP/1.1 connection alive using Python urllib

For now I am doing this (Python 3, urllib):

import urllib.request

url = 'someurl'
headers = (('Host', 'somehost'),
           ('Connection', 'keep-alive'),
           ('Accept-Encoding', 'gzip,deflate'))
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor())
for h in headers:
    opener.addheaders.append(h)
data = b'some login data'  # username, pw etc. (POST body must be bytes in Python 3)
opener.open('somesite/login.php', data)

res = opener.open(someurl)
data = res.read()
... some stuff here...
res1 = opener.open(someurl2)
data = res1.read()
etc.

What is happening is this:

I keep getting gzipped responses from the server and I stay logged in (I am fetching some content which is not available if I were not logged in), but I think the connection is dropping between every opener.open request.

I think that because connecting is very slow and it seems like there is a new connection every time. Two questions:

a) How do I test whether the connection is in fact staying alive or dying? (One way to check is sketched after these questions.)

b) How do I make it stay alive between requests for other URLs?
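
For (a), here is a minimal sketch of one way to check, using http.client directly rather than the opener (example.com is just a placeholder host). The idea is that the socket's local (address, port) pair only stays the same if the same TCP connection is reused; urllib's opener, by contrast, builds a fresh connection for every open() call.

import http.client

# Open one HTTP/1.1 connection and issue two requests on it.
conn = http.client.HTTPConnection('example.com')  # placeholder host

conn.request('GET', '/')
first_addr = conn.sock.getsockname()   # local (ip, port) of the socket
resp = conn.getresponse()
resp.read()                            # read the body fully before reusing the connection

conn.request('GET', '/')               # reconnects automatically if the server closed it
second_addr = conn.sock.getsockname()
conn.getresponse().read()

# Same local address/port pair => the same TCP connection was kept alive.
print('kept alive' if first_addr == second_addr else 'new connection')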

Take care :)


This will be a very delayed answer, but:

You should look at urllib3. It is for Python 2.x, but you'll get the idea when you see their README document.

And yes, urllib doesn't keep connections alive by default. I'm now porting urllib3 to Python 3 so it stays in my toolbag :)
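
For reference, a minimal sketch of what that looks like with urllib3 (the host and paths are placeholders): a PoolManager keeps a pool of open connections per host and reuses them across requests, which is exactly the keep-alive behaviour urllib is missing.

import urllib3

# One PoolManager keeps open connections per host and reuses them.
http = urllib3.PoolManager()

r1 = http.request('GET', 'http://somehost/page1')
data1 = r1.data  # response body (gzip is decoded automatically)

# A second request to the same host goes over the already-open connection.
r2 = http.request('GET', 'http://somehost/page2')
data2 = r2.data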


In case you didn't know yet, python-requests offers a keep-alive feature, thanks to urllib3.
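
A hedged sketch with requests (URLs and form fields are placeholders): a Session keeps urllib3's connection pool around between calls, so requests to the same host reuse one connection, and it also handles cookies and gzip for you.

import requests

s = requests.Session()  # reuses TCP connections via urllib3 pooling, stores cookies

# Log in; any cookies the server sets are kept on the session.
s.post('http://somesite/login.php', data={'username': 'me', 'password': 'secret'})

# Subsequent requests to the same host go over the kept-alive connection.
r = s.get('http://somesite/members_only.php')
print(r.status_code, len(r.text))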
