Redis performance issues?
I was putting some heavy load on my Redis instance for testing purposes, to find its upper limits. First I loaded it with 50,000 and then 100,000 keys of 32 characters, with values of around 32 characters. That took no more than 8-15 seconds for either key count. Now I am trying to store 4 KB of data as the value for each key. The first 10,000 keys take 800 milliseconds to set, but from that point it slows down gradually, and setting all 50,000 keys takes around 40 minutes. I am loading the database using Node.js with node_redis (Mranney). Am I making a mistake somewhere, or is Redis just that slow with values of 4 KB?
One more thing I have found: when I run a second client in parallel with the first one and update the same keys, the second client finishes loading the 50,000 keys with 4 KB values within 8 seconds, while the first client keeps grinding away indefinitely. Is this a bug in Node or in the redis library? This is alarming and not acceptable for production.
You'll need some kind of back pressure when doing bulk writes from Node into Redis. By default, Node queues all writes and does not enforce an upper bound on the outgoing queue size.
node_redis has a "drain" event that you can listen for to implement some rudimentary back pressure.
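For example, something along these lines should keep the queue bounded (an untested sketch; it assumes your version of node_redis exposes a should_buffer flag alongside the "drain" event, otherwise pause after a fixed chunk of commands instead):

    var redis = require("redis");
    var client = redis.createClient();

    var total = 50000;
    var value = new Array(4096 + 1).join("x"); // ~4 KB payload
    var i = 0;

    function load() {
        // Issue writes until the client reports its outgoing buffer is full,
        // then wait for the "drain" event before continuing.
        while (i < total) {
            client.set("key:" + i, value);
            i++;
            if (client.should_buffer) { // assumption: flag exposed by node_redis
                client.once("drain", load);
                return;
            }
        }
        client.quit();
    }

    load();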
The default Redis configuration is not optimized for that sort of usage. I suspect you have it swapping to disk with a page size of 32 bytes, which means each 4 KB value added has to find 128 contiguous free pages, and Redis may end up using system VM or needing to expand the swap file a lot.
When you update a key, the space is already allocated so you don't see any performance issues.
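If that is what is happening, the virtual-memory settings in redis.conf are the ones to look at (a sketch, assuming a 2.x-era config with VM enabled; the exact directives depend on your version):

    # Either turn VM off entirely for the bulk-load test...
    vm-enabled no

    # ...or make the page size match the value size, so a 4 KB value
    # needs one page instead of 128 contiguous 32-byte pages:
    # vm-enabled yes
    # vm-page-size 4096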
Since I was doing a lot of set(key, value) calls from Node.js, which are issued asynchronously, a lot of writes were pending on the socket at the same time. The Node.js socket write buffer was probably overloaded, and the GC may have come in and fiddled with the node process.
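In essence, the loading code was just firing off sets in a loop with nothing to slow it down (simplified sketch): every set() returns immediately, so all 50,000 commands pile up in the client's queue before the socket can flush any of them.

    var client = require("redis").createClient();
    var value = new Array(4096 + 1).join("x"); // ~4 KB payload

    for (var i = 0; i < 50000; i++) {
        // Nothing here applies back pressure, so the pending commands
        // and their 4 KB values accumulate in memory until the loop ends.
        client.set("key:" + i, value);
    }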
PS: I changed the Redis memory configuration as Tom suggested, but performance was still the same.