Caching options in Python or speeding up urlopen
Hey all, I have a site that looks up info for the end user, is written in Python, and requires several urlopen commands. As a result it takes a while for a page to load. I was wondering if there was a way to make it faster? Is there an easy way to cache in Python, or a way to make the urlopen scripts run faster?
The urlopens access the Amazon API to get prices, so the site needs to be somewhat up to date. The only option I can think of is to make a script that builds a MySQL database and run it every now and then, but that would be a nuisance.
Thanks!
httplib2 understands HTTP request caching, abstracts away some of urllib/urllib2's messiness, and has other goodies, like gzip support.
http://code.google.com/p/httplib2/
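For example, pointing httplib2 at a cache directory is enough to get transparent HTTP caching (the directory name and URL below are placeholders):

```python
import httplib2

# Responses are cached on disk under ".cache"; a repeat request that is
# still fresh according to the HTTP caching headers never hits the network.
h = httplib2.Http('.cache')
response, content = h.request('http://example.com/some/amazon/endpoint')
```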
But besides using that to get the data, if the dataset is not very big, I would also implement some kind of function caching / memoizing. Example: http://wiki.python.org/moin/PythonDecoratorLibrary#Memoize
It wouldn't be too hard to modify that decorator to allow for time-based expiry, e.g. only cache the result for 15 minutes.
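A minimal sketch of such a modification, assuming a 15-minute lifetime and a made-up price-fetching endpoint:

```python
import functools
import time
import urllib2

def memoize_with_expiry(ttl):
    """Cache results per argument tuple, recomputing once older than ttl seconds."""
    def decorator(func):
        cache = {}
        @functools.wraps(func)
        def wrapper(*args):
            now = time.time()
            if args in cache:
                value, stamp = cache[args]
                if now - stamp < ttl:
                    return value
            value = func(*args)
            cache[args] = (value, now)
            return value
        return wrapper
    return decorator

@memoize_with_expiry(15 * 60)  # keep each price for 15 minutes
def get_price(asin):
    # Stand-in for the real Amazon API call.
    return urllib2.urlopen('http://example.com/price/%s' % asin).read()
```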
If the results are bigger, you need to start looking into memcached/redis.
There are several things you can do.
- The urllib caching mechanism is temporarily disabled, but you could easily roll your own by storing the data you get from Amazon in memory or in a file somewhere.
- Similarly to the above, you could have a separate script that refreshes the prices every so often, and cron it to run every half an hour (say). These could be stored wherever.
- You could run the URL fetching in a new thread/process, since it is mostly waiting anyway (a thread-based sketch follows this list).
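For the last option, a minimal thread-based sketch (the URLs are made up). Each request waits on the network in its own thread, so the total wait is roughly that of the slowest request rather than the sum of all of them:

```python
import threading
import urllib2

def fetch(url, results, key):
    # Runs in its own thread; each call blocks on I/O independently.
    results[key] = urllib2.urlopen(url).read()

urls = {'item1': 'http://example.com/price/1',   # placeholder URLs
        'item2': 'http://example.com/price/2'}
results = {}
threads = [threading.Thread(target=fetch, args=(url, results, key))
           for key, url in urls.items()]
for t in threads:
    t.start()
for t in threads:
    t.join()
# results now maps each key to the fetched response body
```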
How often do the prices change? If they're pretty stable (changing, say, once a day or once an hour), just go ahead and write a cron script (or equivalent) that retrieves the values and stores them in a database or text file or whatever it is you need.
I don't know if you can check the timestamp data from the Amazon API - if they report that sort of thing.
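A rough sketch of such a cron script, assuming a hypothetical price endpoint and a JSON file as the store (the page code then reads the file instead of calling urlopen):

```python
# refresh_prices.py -- run from cron, e.g. every half hour:
#   */30 * * * * python /path/to/refresh_prices.py
import json
import time
import urllib2

ASINS = ['B000000001', 'B000000002']       # placeholder product IDs
PRICE_URL = 'http://example.com/price/%s'  # stand-in for the real Amazon API call

def main():
    prices = dict((asin, urllib2.urlopen(PRICE_URL % asin).read())
                  for asin in ASINS)
    prices['_updated'] = time.time()
    with open('/tmp/prices.json', 'w') as f:
        json.dump(prices, f)

if __name__ == '__main__':
    main()
```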
You could use memcached. It is designed for exactly this, and it lets you easily share the cache between different programs/scripts. And it is really easy to use from Python; check:
Good examples of python-memcache (memcached) being used in Python?
Then you update memcached whenever a key is missing (and also from some cron script), and you're ready to go.
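A sketch of that read-through pattern with the python-memcache client (the key scheme, 30-minute expiry, and fetch URL are all assumptions):

```python
import memcache
import urllib2

mc = memcache.Client(['127.0.0.1:11211'])

def get_price(asin):
    key = 'price:%s' % asin
    price = mc.get(key)
    if price is None:
        # Cache miss: fetch from the (placeholder) endpoint and keep the
        # value for 30 minutes; every script talking to the same memcached
        # server now sees it too.
        price = urllib2.urlopen('http://example.com/price/%s' % asin).read()
        mc.set(key, price, time=30 * 60)
    return price
```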
Another, simpler, option would be to cook up your own cache, probably storing the data in a dictionary and/or using cPickle to serialize it to disk (if you want the data to be shared between different runs).
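A home-cooked version along those lines might look like this (the file path and freshness window are arbitrary):

```python
import cPickle
import os
import time

CACHE_FILE = '/tmp/price_cache.pkl'  # arbitrary location
TTL = 15 * 60                        # freshness window, in seconds

def load_cache():
    if os.path.exists(CACHE_FILE):
        with open(CACHE_FILE, 'rb') as f:
            return cPickle.load(f)
    return {}

def cached_get(cache, key, fetch):
    """Return cache[key] if fresh, otherwise call fetch(key) and persist."""
    value, stamp = cache.get(key, (None, 0))
    if time.time() - stamp > TTL:
        value = fetch(key)
        cache[key] = (value, time.time())
        with open(CACHE_FILE, 'wb') as f:
            cPickle.dump(cache, f, cPickle.HIGHEST_PROTOCOL)
    return value
```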
If you need to grab from multiple sites at once, you might try asyncore: http://docs.python.org/library/asyncore.html
This way you can easily load multiple pages at once.
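A bare-bones dispatcher in the spirit of the asyncore docs (hosts and paths are placeholders); each instance handles one request, and a single asyncore.loop() call drives them all concurrently:

```python
import asyncore
import socket

class HttpClient(asyncore.dispatcher):
    """Fire one HTTP/1.0 GET and collect the raw response."""
    def __init__(self, host, path):
        asyncore.dispatcher.__init__(self)
        self.request = 'GET %s HTTP/1.0\r\nHost: %s\r\n\r\n' % (path, host)
        self.response = ''
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.connect((host, 80))

    def handle_connect(self):
        pass

    def writable(self):
        # Keep asking to write until the whole request has been sent.
        return len(self.request) > 0

    def handle_write(self):
        sent = self.send(self.request)
        self.request = self.request[sent:]

    def handle_read(self):
        self.response += self.recv(8192)

    def handle_close(self):
        self.close()

# Start several fetches, then let one event loop service all the sockets.
clients = [HttpClient('example.com', '/'), HttpClient('example.org', '/')]
asyncore.loop()
```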