
Celery task seems to do everything except write to the database

I am using Django with MongoEngine, django-celery and the MongoDB backend for celery. I am queuing one task. The task involves fetching a file from GridFS (through the MongoEngine FileField), manipulating it and putting it back in the database.

The task runs as I expect without queuing. When I queue it, it converts the files, but it does not write to the database.

Here's the relevant part of my settings.py.

#These are apparently defaults that I shouldn't need
BROKER_BACKEND = 'mongodb'
BROKER_HOST = "localhost"
BROKER_PORT = 27017
BROKER_USER = ""
BROKER_PASSWORD = ""
BROKER_VHOST = ""

CELERY_RESULT_BACKEND = "mongodb"
CELERY_MONGODB_BACKEND_SETTINGS = {
    "host": "localhost",
    "port": 27017,
    "database": "svg",
    "taskmeta_collection": "taskmeta",
}

import djcelery
djcelery.setup_loader()

I'm running celery like this:

 $ ./manage.py celeryd -l info

When it runs the task, celery says this:

[2011-07-23 16:07:11,858: INFO/MainProcess] Got task from broker: graphics.tasks.queue_convert[dfdf98ad-0669-4027-866d-c64971bb6480]
[2011-07-23 16:07:15,196: INFO/MainProcess] Task graphics.tasks.queue_convert[dfdf98ad-0669-4027-866d-c64971bb6480] succeeded in 3.33006596565s

(No errors)

Here's the task.

@task()
def queue_convert(imageId):
    image = Image.objects.get(id=imageId)
    convert(image)

convert calls a bunch of other functions. Basically, it first reads from a FileField, manipulates that string, writes that string to a file, manipulates that file, writes the generated strings and files to other FileFields, and then runs image.save().
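To give a sense of the shape of convert() (the field names original/converted and the transformations here are simplified stand-ins, not the real code), it's roughly:

import tempfile

def convert(image):
    # read the original contents out of the GridFS-backed FileField
    source = image.original.read()

    # stand-in for the real string manipulation
    converted = source.replace(b"old", b"new")

    # stand-in for the file-based step: write out, reprocess, read back
    with tempfile.NamedTemporaryFile() as tmp:
        tmp.write(converted)
        tmp.flush()
        tmp.seek(0)
        result = tmp.read()

    # replace the contents of the output FileField and persist the document
    image.converted.replace(result)
    image.save()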

The mongo logs look different depending on whether I queue the task. This is what happens in the mongo logs when I use the task queue.

Sat Jul 23 16:03:26 [initandlisten] connection accepted from 127.0.0.1:39065 #801
Sat Jul 23 16:03:26 [initandlisten] connection accepted from 127.0.0.1:39066 #802
Sat Jul 23 16:03:29 [initandlisten] connection accepted from 127.0.0.1:39068 #803

This is what happens when I call convert(image) directly instead of calling queue_convert(image.id):

Sat Jul 23 16:07:13 [conn807] end connection 127.0.0.1:43630
Sat Jul 23 16:07:13 [initandlisten] connection accepted from 127.0.0.1:43633 #808
Sat Jul 23 16:07:13 [initandlisten] connection accepted from 127.0.0.1:43634 #809
Sat Jul 23 16:07:13 [conn808] end connection 127.0.0.1:43633

Any idea as to what might be going wrong?


Update: I've thought about the problem you were having a bit more, and though it sounds like you've solved it for yourself, I'll add a couple of notes in case someone has a similar problem.

MongoDB effectively extends JSON by using BSON instead, which adds binary and file types to the list of supported types. I've only seen JSON mentioned in the celery docs, so I'd guess some care is required when using MongoDB with celery while dealing with that expanded set, as it sounds like you were with images.
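As a rough illustration of the difference (a toy example using pymongo's bson module; the field names and bytes are made up): plain JSON has no binary type, while BSON round-trips raw bytes natively.

from bson import BSON, Binary

raw = b"\x89PNG\r\n\x1a\n"  # some arbitrary non-UTF-8 bytes

# json.dumps({"data": raw}) would need these bytes base64-encoded first;
# BSON has a native Binary type and stores them as-is.
doc = BSON.encode({"name": "example", "data": Binary(raw)})
decoded = BSON(doc).decode()
assert decoded["data"] == raw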

In the docs for the latest development version of IPython (11.0rc4) they discuss their distributed-work system. Though the terminology sounds similar to celery's, the backend may be quite different. I think celery is relatively flexible about backends, and it probably allows for more security, which sounds like a concern with ZeroMQ, which IPython requires. On the database side, though, the IPython system was "designed from the ground up around mongodb," according to the docs, and BSON is fully supported. So if you're not too concerned with other celery features (security, the Django-related development base, and much more, of course), you might look into it. Again, this is by no means the rigorous evaluation that celery and IPython both deserve, just a possible lead. IPython also integrates well with other scientific computing libraries, with built-in support for matplotlib and lots of scientific computing examples, which might interest you if you're doing image processing and treating your image data as numpy arrays.

Best of luck

Original answer: I agree with lazerscience - it would help to have more context here. There are so many unknowns due to the complexity of these libraries that it's probably not possible to answer with the rigor expected on this site.

That said, I think you may have run into a serialization problem. Celery requires that your task arguments be pickleable, or at least serializable by whatever serializer you choose (I know JSON is supported as well, though I'm enough of a novice not to be certain how far pickle and JSON overlap). I see your function only takes an ID parameter, which is good. But would the shift to GridFS mean you're trying to pickle an image? You can certainly manipulate images with celery, but I'm not sure, especially with everything happening behind the mysterious convert() function, whether you may be accidentally trying to serialize something other than unicode, a dictionary, an integer, a float, or whatever other few miscellaneous objects your serialization format supports. Maybe in the past you retrieved a filepath to the image and manipulated the file on disk without ever retrieving or sending more than unicode, and now you're passing around the image itself?
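If it is a serialization issue, the usual pattern is to pass only primitive values (like the id) and re-fetch the document inside the task. A minimal sketch of that, assuming celery 2.x-era imports and settings (CELERY_TASK_SERIALIZER is a real setting, but double-check the name against your version's docs):

# settings.py: use a serializer that rejects non-primitive arguments,
# so serialization mistakes fail loudly instead of silently
CELERY_TASK_SERIALIZER = "json"

# tasks.py (imports of Image and convert omitted -- same as in your module)
from celery.task import task

@task()
def queue_convert(image_id):
    # re-fetch the document inside the worker instead of passing it in
    image = Image.objects.get(id=image_id)
    convert(image)

# caller: pass the id as a plain string (ObjectId isn't JSON-serializable),
# never the Image object itself
queue_convert.delay(str(image.id))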

If I'm way off base, please cut me a little slack. I'm responding because I saw your message both here and on the mongoengine user's group and figured you were stuck and not finding a more expert opinion. You might also double check to be sure you have reasonably current versions of the backend software. I had a bunch of weird celery issues at some point and found they were mainly resolved when I updated rabbitmq. Good luck!
