Setting Celery concurrency to one worker per queue
I am essentially using RabbitMQ queues in Celery as a poor man's synchronisation mechanism. For example, when certain objects are updated (and the update is expensive), I round-robin them to a set of 10 queues based on their object IDs. First, is this a common pattern, or is there a better way?
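For concreteness, the dispatch looks roughly like this (a sketch only; the task and queue names are made up, and it assumes apply_async accepts a queue option, as it has since Celery 2.x):

    from celery.task import task

    NUM_QUEUES = 10

    @task
    def update_object(object_id):
        pass  # the expensive update goes here

    def enqueue_update(object_id):
        # The same object ID always maps to the same queue, so updates to
        # one object stay ordered -- provided each queue has exactly one worker.
        queue = 'object_updates.%d' % (object_id % NUM_QUEUES)
        update_object.apply_async(args=[object_id], queue=queue)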
Secondly, with celeryd it seems that the concurrency option (CELERYD_CONCURRENCY) sets the number of worker processes across all the queues. This rather defeats the purpose of using the queues for synchronization: a single queue can be serviced by multiple workers, which means potential race conditions when performing different actions on the same object.
Is there a way to set the concurrency level (or worker pool options) so that we have one worker per N queues?
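For illustration, what I am after is the equivalent of running one dedicated worker per queue, each with a pool of exactly one process (queue names as in the sketch above):

    celeryd -Q object_updates.0 --concurrency=1
    celeryd -Q object_updates.1 --concurrency=1
    # ... one celeryd per queue, ten in total

but without having to manage ten separate daemons.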
Thanks, Sri
Why don't you simply implement a global task lock, using memcached or a NoSQL DB? That way you avoid any race condition.
Here's an example: http://ask.github.com/celery/cookbook/tasks.html#ensuring-a-task-is-only-executed-one-at-a-time
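The recipe boils down to using memcached's atomic add as a mutex. A minimal sketch, assuming Django's cache API with a memcached backend (the lock key scheme and do_expensive_update are illustrative):

    from django.core.cache import cache
    from celery.task import task

    LOCK_EXPIRE = 60 * 5  # seconds; the lock expires on its own if a worker dies

    @task
    def update_object(object_id):
        lock_id = 'update-object-lock-%d' % object_id
        # cache.add only succeeds if the key does not exist yet, and memcached
        # makes that check-and-set atomic, so at most one worker gets the lock.
        if cache.add(lock_id, 'true', LOCK_EXPIRE):
            try:
                do_expensive_update(object_id)  # hypothetical helper
            finally:
                cache.delete(lock_id)
        else:
            # Someone else is updating this object; try again shortly.
            update_object.retry(args=[object_id], countdown=10)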
Related to the first part of your question, I've asked and answered a similar question here: Route to worker depending on result in Celery?
Essentially you can route directly to a worker depending on a key, which in your case is the object ID. That avoids the need for a single locking point. Hopefully it's useful, even though this question is two years old :)
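For example, a minimal router along those lines, using the old-style Celery router interface (the task name and queue scheme are illustrative):

    # celeryconfig.py
    NUM_QUEUES = 10

    class ObjectIdRouter(object):
        # Send every update for a given object ID to the same queue, so the
        # single worker consuming that queue serializes work per object.
        def route_for_task(self, task, args=None, kwargs=None):
            if task == 'tasks.update_object' and args:
                return {'queue': 'object_updates.%d' % (args[0] % NUM_QUEUES)}
            return None  # let other tasks fall through to the default routing

    CELERY_ROUTES = (ObjectIdRouter(),)

Each queue is then consumed by exactly one worker (started with -Q object_updates.N --concurrency=1), so no two workers ever touch the same object concurrently.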