I want a task that executes every 5 minutes, but waits for the last execution to finish before it starts counting those 5 minutes. (This way I can also be sure that only one instance of the task is running at a time.)
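One common way to get this behaviour (a sketch, not necessarily what the asker ended up with) is to skip the periodic schedule entirely and have the task re-queue itself once it finishes, so the 5-minute gap is counted from the end of the previous run; do_work() is a hypothetical placeholder.

    from celery.decorators import task

    @task(ignore_result=True)
    def poll_every_five_minutes():
        do_work()  # hypothetical placeholder for the real job
        # Re-schedule only after the work is done, so runs never overlap
        # and the 5-minute countdown starts when this run finishes.
        poll_every_five_minutes.apply_async(countdown=300)

Kick it off once with poll_every_five_minutes.delay() and it keeps itself going.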
What is the right approach when writing Celery tasks that communicate with a service that has rate limits and is sometimes unavailable (not responding) for long periods of time?
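A minimal sketch of the usual ingredients, assuming a hypothetical client object and a hypothetical ServiceUnavailable exception: a rate_limit on the task, plus retry() with a delay when the service does not respond.

    from celery.decorators import task

    @task(rate_limit="10/m", max_retries=10, default_retry_delay=300)
    def call_service(payload):
        try:
            return client.send(payload)        # hypothetical client
        except ServiceUnavailable as exc:      # hypothetical exception type
            # The service is missing or throttling us: back off and retry
            # later instead of failing the task outright.
            call_service.retry(exc=exc)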
Is there a way to determine if any task was lost and retry it? I think the loss could be caused by a dispatcher bug or a worker-thread crash.
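One relevant knob (a sketch, not a complete answer) is late acknowledgement: with acks_late the broker only removes a message after the task has finished, so a task that dies mid-run with the worker is redelivered instead of silently lost. The task should be idempotent, since redelivery can mean it runs more than once.

    # celeryconfig.py / settings.py -- enable late acks globally
    CELERY_ACKS_LATE = True

    # or per task
    from celery.decorators import task

    @task(acks_late=True)
    def important(job_id):
        process(job_id)   # hypothetical work function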
I'm using Celery (2.2.4) with Redis (v.2.2.2) as my message broker. Any idea what would cause SOME (most) messages to randomly and inconsistently get lost? The only reason that seem
I've launched a lot of tasks, but some of them (763 tasks) haven't finished; they are in the PENDING state, but the system isn't processing anything...
I have a task like this:

    @task
    def test():
        time.sleep(10)
        test.update_state(state="PROGRESS")
        time.sleep(10)
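Assuming a result backend is configured (an assumption, since the question does not say), the caller can observe the custom PROGRESS state roughly like this; the tasks module name is hypothetical.

    from tasks import test   # hypothetical module containing the task

    result = test.delay()
    # The state moves PENDING -> PROGRESS -> SUCCESS as the task reports it.
    print(result.state)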
I am using RabbitMQ with Django through Celery, with the most basic setup:

    # RabbitMQ connection settings
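For context, a minimal sketch of what such a basic setup typically looks like in settings.py with django-celery and a local broker; the host and guest credentials are assumptions, not taken from the question.

    # settings.py
    import djcelery
    djcelery.setup_loader()

    # RabbitMQ connection settings
    BROKER_HOST = "localhost"
    BROKER_PORT = 5672
    BROKER_USER = "guest"
    BROKER_PASSWORD = "guest"
    BROKER_VHOST = "/"
    # Newer Celery releases express the same thing as a single URL:
    # BROKER_URL = "amqp://guest:guest@localhost:5672//"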
I'm running celeryd as a daemon, but I sometimes have trouble stopping it gracefully. When I send the TERM signal and there are items in the queue (in this case via service celeryd stop)
I am designing a distributed master-worker system which, from 10,000 feet, consists of: a web-based UI; a master component, responsible for generating jobs according to a configurable set of algorithms
I'm having problems retrying tasks. Here is what a test task looks like:

    from celery.decorators import task
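A hedged sketch of what a retrying test task usually looks like in the Celery 2.x decorator style; fail_sometimes is a hypothetical helper and this is not necessarily the asker's code.

    from celery.decorators import task

    @task(max_retries=3, default_retry_delay=5)
    def test(x):
        try:
            return fail_sometimes(x)   # hypothetical helper that may raise
        except Exception as exc:
            # Re-queue the task; once max_retries is exceeded the exception
            # is re-raised and the task is marked as failed.
            return test.retry(exc=exc)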