
Multiprocessing useless with urllib2?

I recently tried to speed up a little tool (which uses urllib2 to send requests to the (unofficial) twitter-button-count URL (> 2000 urls) and parses the results) with the multiprocessing module (and its worker pools). I read several discussions here about multithreading (which slowed the whole thing down compared to a standard, non-threaded version) and multiprocessing, but I couldn't find an answer to a (probably very simple) question:

Can you speed up URL calls with multiprocessing, or isn't the bottleneck something like the network adapter? I don't see which part of, for example, the urllib2 open method could be parallelized and how that should work...

EDIT: This is the request I want to speed up and the current multiprocessing setup:

 urls=["www.foo.bar", "www.bar.foo",...]
 tw_url='http://urls.api.twitter.com/1/urls/count.json?url=%s'

 def getTweets(self,urls):
    for i in urls:
        try:
            self.tw_que=urllib2.urlopen(tw_url %(i))
            self.jsons=json.loads(self.tw_que.read())
            self.tweets.append({'url':i,'date':today,'tweets':self.jsons['count']})
        except ValueError:
            print ....
            continue
    return self.tweets 

 if __name__ == '__main__':
    pool = multiprocessing.Pool(processes=4)            
    result = [pool.apply_async(getTweets(i,)) for i in urls]
    [i.get() for i in result]


Ah, here comes yet another discussion about the GIL. Well, here's the thing: fetching content with urllib2 is going to be mostly IO-bound. Native threading AND multiprocessing will both have the same performance when the task is IO-bound (threading only becomes a problem when it's CPU-bound). Yes, you can speed it up; I've done it myself using Python threads and something like 10 downloader threads.

Basically you use a producer-consumer model with one thread (or process) putting urls to download onto a queue, and N threads (or processes) consuming from that queue and making requests to the server.

Here's some pseudo-code:

# Make sure that the queue is thread-safe!!
# (Queue.Queue from the standard library already is.)

def producer(self):
    # Only need one producer, although you could have multiple
    with open('urllist.txt', 'r') as fh:
        for line in fh:
            self.queue.put(line.strip())

def consumer(self):
    # Fire up N of these babies for some speed
    while True:
        url = self.queue.get()
        dh = urllib2.urlopen(url)
        with open('/dev/null', 'w') as fh:  # gotta put it somewhere
            fh.write(dh.read())
        self.queue.task_done()
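
To actually run this you still need to create the queue and start the threads. Here is one self-contained way to wire it up, just a sketch assuming Python 2's Queue and threading modules; the Downloader class name and the thread count of 10 are illustrative, not part of the answer above:

# Wiring for the producer-consumer sketch: one producer, N daemon consumers.
import threading
import urllib2
import Queue  # Queue.Queue is thread-safe out of the box

class Downloader(object):
    def __init__(self, n_consumers=10):
        self.queue = Queue.Queue()
        self.n_consumers = n_consumers

    def producer(self):
        with open('urllist.txt', 'r') as fh:
            for line in fh:
                self.queue.put(line.strip())

    def consumer(self):
        while True:
            url = self.queue.get()
            try:
                urllib2.urlopen(url).read()  # fetch and discard the body
            finally:
                self.queue.task_done()

    def run(self):
        workers = [threading.Thread(target=self.consumer)
                   for _ in range(self.n_consumers)]
        for w in workers:
            w.setDaemon(True)   # don't keep the process alive for idle consumers
            w.start()
        self.producer()         # fill the queue
        self.queue.join()       # block until every url has been processed

if __name__ == '__main__':
    Downloader().run()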

Now if you're downloading very large chunks of data (hundreds of MB) and a single request completely saturates the bandwidth, then yes running multiple downloads is pointless. The reason you run multiple downloads (generally) is because requests are small and have a relatively high latency / overhead.


Take a look at gevent and specifically at this example: concurrent_download.py. It will be reasonably faster than multiprocessing and multithreading, plus it can handle thousands of connections easily.
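
The gist of that example is something like this (a sketch, not a verbatim copy of the file; the exact monkey-patching call can differ between gevent versions, and the urls are placeholders):

# Spawn one greenlet per url; gevent's monkey-patched sockets make urllib2 cooperative.
from gevent import monkey; monkey.patch_all()
import gevent
import urllib2

urls = ['http://www.google.com', 'http://www.yandex.ru', 'http://www.python.org']

def fetch(url):
    data = urllib2.urlopen(url).read()
    print url, len(data)

jobs = [gevent.spawn(fetch, url) for url in urls]
gevent.joinall(jobs, timeout=10)  # all downloads run concurrently in one thread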


It depends! Are you contacting different servers, are the transferred files small or big, do you lose much of the time waiting for the server to reply or by transferring data, ...?

Generally, multiprocessing involves some overhead and as such you want to be sure that the speedup gained by parallelizing the work is larger than the overhead itself.

Another point: network and thus I/O bound applications work – and scale – better with asynchronous I/O and an event driven architecture instead of threading or multiprocessing, as in such applications much of the time is spent waiting on I/O and not doing any computation.

For your specific problem, I would try to implement a solution by using Twisted, gevent, Tornado or any other networking framework which does not use threads to parallelize connections.
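
As a rough illustration of the event-driven approach, here is a sketch using Twisted's getPage; the urls and the callback name are placeholders, not a definitive implementation:

# Fire off all requests at once and let the reactor multiplex the sockets;
# no threads or extra processes involved.
from twisted.internet import reactor, defer
from twisted.web.client import getPage

urls = ['http://www.example.com', 'http://www.python.org']  # placeholders

def report(results):
    # DeferredList fires with a list of (success, value) tuples
    for success, value in results:
        print success, (len(value) if success else value)
    reactor.stop()

deferreds = [getPage(url) for url in urls]
defer.DeferredList(deferreds, consumeErrors=True).addCallback(report)
reactor.run()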


What you do when you split web requests over several processes is to parallelize the network latencies (i.e. the waiting for responses). So you should normally get a good speedup, since most of the processes should sleep most of the time, waiting for an event.

Or use Twisted. ;)


Nothing is useful if your code is broken: f() (with parentheses) calls a function immediately in Python; you should pass just f (no parentheses) so that the pool can execute it instead. Your code from the question:

#XXX BROKEN, DO NOT USE
result = [pool.apply_async(getTweets(i,)) for i in urls]
[i.get() for i in result]

Notice the parentheses after getTweets: they mean getTweets is called right there, so all the work is executed serially in the main process before anything ever reaches the pool.

Delegate the call to the pool instead:

all_tweets = pool.map(getTweets, urls)

Also, you don't need separate processes here unless json.loads() is expensive (CPU-wise) in your case. You could use threads: replace multiprocessing.Pool with multiprocessing.pool.ThreadPool -- the rest is identical. The GIL is released during I/O in CPython, and therefore threads should speed up your code if most of the time is spent in urlopen().read().

Here's a complete code example.
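
A minimal sketch of such a script, using the ThreadPool approach described above (the helper name get_count and the pool size of 20 are illustrative choices, not from the original answer):

# Fetch all tweet counts with a thread pool; threads only, no extra processes.
import json
import urllib2
from multiprocessing.pool import ThreadPool

tw_url = 'http://urls.api.twitter.com/1/urls/count.json?url=%s'

def get_count(url):
    try:
        data = json.loads(urllib2.urlopen(tw_url % url).read())
        return url, data['count']
    except (urllib2.URLError, ValueError, KeyError):
        return url, None  # request or parsing failed

if __name__ == '__main__':
    urls = ["www.foo.bar", "www.bar.foo"]  # your >2000 urls go here
    pool = ThreadPool(20)  # 20 concurrent requests; tune to taste
    for url, count in pool.map(get_count, urls):
        print url, count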
