Nested memcache lookups in Python, O(n) good/bad?
Is something like this bad with memcache?
1. GET LIST OF KEYS
2. FOR EACH KEY IN LIST OF KEYS
- GET DATA
I'm expecting the list of keys to be around ~1000 long.
If this is bad, I'm wondering if there is a better way to do this? I figured memcache might be fast enough where such an O(n) query might not be so important. I would never do this in MySQL, for example.
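The pattern above, as a minimal sketch. The client here is a dict-backed stub standing in for a real `memcache.Client` (key names and the stub are illustrative, not from the question), so the shape of the per-key loop is clear:

```python
# Naive per-key lookup: one network round trip per key with a real server.
# StubClient is a stand-in for memcache.Client so the sketch is self-contained.
class StubClient:
    def __init__(self, store):
        self._store = store

    def get(self, key):
        # memcache.Client.get returns None for a missing key.
        return self._store.get(key)


client = StubClient({"item:%d" % i: i * i for i in range(1000)})

# 1. GET LIST OF KEYS
keys = ["item:%d" % i for i in range(1000)]

# 2. FOR EACH KEY: GET DATA (1000 sequential requests)
data = {}
for key in keys:
    value = client.get(key)
    if value is not None:
        data[key] = value
```

With a real client, each iteration of that loop blocks on a full network round trip, which is what makes the O(n) structure costly here.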
Thanks.
This will be slower than it needs to be, because each request waits for the previous one to complete before being sent. If there's any latency at all to the memcache server, it adds up quickly: at just 100 µs of latency (a typical Ethernet round-trip time), these 1000 lookups will take a tenth of a second, which is a long time in many applications.
The correct way of doing this is making batch requests: sending many requests to the server simultaneously, then receiving all of the responses back, so you don't take a latency penalty repeatedly.
The python-memcache module has the get_multi method to do this for you.
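A sketch of the batched version. With the real library you would build the client with memcache.Client(['127.0.0.1:11211']); here the same dict-backed stub (illustrative, not from the answer) mimics the get_multi interface:

```python
# Batched lookup: one request carries all keys, so the per-request
# latency is paid once rather than 1000 times.
# StubClient mimics python-memcache's get_multi for illustration.
class StubClient:
    def __init__(self, store):
        self._store = store

    def get_multi(self, keys):
        # Like the real client, missing keys are simply absent
        # from the returned dict.
        return {k: self._store[k] for k in keys if k in self._store}


client = StubClient({"item:%d" % i: i * i for i in range(1000)})
keys = ["item:%d" % i for i in range(1000)]

data = client.get_multi(keys)  # single batched round trip
```

Note that get_multi returns a dict mapping each found key to its value, so the caller should handle keys that are missing from the result rather than assuming every requested key is present.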