
Does my code leak memory (Python)?

    # Url is a Django model; hash_url, getSource and char.getLinks are helpers.
    links_list = char.getLinks(words)
    for source_url in links_list:
        try:
            print 'Downloading URL: ' + source_url
            urldict = hash_url(source_url)
            source_url_short = urldict['url_short']
            source_url_hash = urldict['url_short_hash']
            if Url.objects.filter(source_url_short = source_url_short).count() == 0:
                try:
                    htmlSource = getSource(source_url)
                except:
                    htmlSource = '-'
                    print '\thtmlSource got an error...'
                new_u = Url(source_url = source_url, source_url_short = source_url_short, source_url_hash = source_url_hash, html = htmlSource)
                new_u.save()
                time.sleep(3)
            else:
                print '\tAlready in database'
        except:
            print '\tError with downloading URL..'
            time.sleep(3)


    def getSource(theurl, unicode = 1, moved = 0):
        if moved == 1:
            # Follow the redirect first so we fetch from the final URL
            theurl = urllib2.urlopen(theurl).geturl()
        urlReq = urllib2.Request(theurl)
        urlReq.add_header('User-Agent', random.choice(agents))
        urlResponse = urllib2.urlopen(urlReq)
        htmlSource = urlResponse.read()
        # Decode/encode round trip checks that the page is valid UTF-8
        htmlSource = htmlSource.decode('utf-8').encode('utf-8')
        return htmlSource

Basically, this code takes a list of URLs, downloads each one, and saves the HTML to a database. That's all.


Maybe your process uses too much memory and the server (perhaps a shared host) kills it because you exhaust your memory quota.

Here you use a call that may eat up a lot of memory:

    links_list = char.getLinks(words)
    for source_url in links_list:
        ...

It looks like you might be building a whole list in memory and then working with the items. Instead, it might be better to use an iterator, where objects are retrieved one at a time. But this is a guess, because it's hard to tell from your code what char.getLinks does.
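
For example, if char.getLinks can be rewritten as a generator, only one URL has to be held in memory at a time. This is just a sketch, since char.getLinks is not shown; lookup_url_for is a hypothetical stand-in for whatever it does per word:

    # Hypothetical sketch: a generator version of getLinks that yields URLs
    # one at a time instead of building the whole list up front.
    def get_links_iter(words):
        for word in words:
            url = lookup_url_for(word)  # stand-in for the real per-word lookup
            if url is not None:
                yield url

    # The download loop stays the same, but each URL can now be garbage
    # collected as soon as the iteration moves on:
    for source_url in get_links_iter(words):
        print 'Downloading URL: ' + source_url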

If you are using Django in debug mode, memory usage will also go up, as Mark suggests.


If you are doing this in Django, make sure DEBUG is False; otherwise Django will keep a record of every query in memory, which grows without bound in a long-running loop.

See the Django FAQ entry "Why is Django leaking memory?"
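
To illustrate: with DEBUG = True, every executed query is appended to django.db.connection.queries. A sketch of the two fixes (the loop placement is illustrative):

    # In settings.py: the safest fix is simply to turn debug mode off.
    DEBUG = False

    # Alternatively, if DEBUG must stay True, clear the accumulated query
    # log periodically inside the download loop:
    from django import db

    for source_url in links_list:
        # ... download and save as before ...
        db.reset_queries()  # drop the stored queries so they don't pile up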


The easiest way to check is to open the task manager (on Windows, or its equivalent on other platforms) and watch the memory usage of the Python process. If it stays constant, there is no memory leak. If it keeps growing, you have a memory leak somewhere and will need to debug it.
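
You can also watch memory from inside the script itself. A small sketch using the standard-library resource module (Unix only; on Linux, ru_maxrss is reported in kilobytes):

    import resource

    def report_memory(tag):
        # Peak resident set size of this process so far
        peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
        print '%s: peak memory %d kB' % (tag, peak)

    # Call it inside the loop; if the number climbs on every iteration,
    # something is being retained between URLs.
    report_memory('after URL')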


Perhaps you should get a job server such as beanstalkd and think about processing just one URL at a time.

The job server will requeue the ones that fail, allowing the rest to complete. You can also run more than one client concurrently should you need to (even on more than one machine).

Simpler design, easier to understand and test, more fault tolerant, retryable, more scalable, etc...
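
As a sketch of that split, using the beanstalkc client library (an assumption; any beanstalkd client would do, and download_and_save is a hypothetical wrapper around the code above):

    import beanstalkc

    beanstalk = beanstalkc.Connection(host='localhost', port=11300)

    # Producer: enqueue each URL as its own small job.
    for source_url in links_list:
        beanstalk.put(source_url)

    # Worker (can run on another machine, or several at once):
    while True:
        job = beanstalk.reserve()        # blocks until a job is available
        try:
            download_and_save(job.body)  # hypothetical wrapper for the code above
            job.delete()                 # done; remove it from the queue
        except Exception:
            job.release(delay=60)        # requeue the failed job for a retry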

