Tornado process data in request handler after return

In a Tornado request handler, if I have to call a function foo() that doesn't affect what's returned to the user, it makes sense to return the result to the user first and then call foo(). Is it possible to do this easily in Tornado (or with some third-party package)?


It's extremely easy:

class Handler(tornado.web.RequestHandler):
    def get(self):
        self.write('response')
        self.finish()  # response is flushed and the connection is now closed
        foo()          # runs after the client has already received the response


Alternatively, pass foo to ioloop.add_callback; Tornado will then execute the callback on the next IOLoop iteration instead of blocking the current one.
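A minimal sketch of that variant, assuming foo is the deferred function from the question (the print inside it is just a placeholder for the real work):

```python
import tornado.ioloop
import tornado.web

def foo():
    # placeholder for the work that should not delay the response
    print("running foo after the response was sent")

class Handler(tornado.web.RequestHandler):
    def get(self):
        self.write('response')
        self.finish()  # connection is closed; the client is no longer waiting
        # schedule foo for the next IOLoop iteration instead of calling it inline
        tornado.ioloop.IOLoop.current().add_callback(foo)
```

A useful property of add_callback is that it is documented as safe to call from other threads, so it also works for handing results back to the IOLoop from background workers.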


Bad-advice warning: you can use multiprocessing.

http://docs.python.org/library/multiprocessing.html

Be careful to close all of your database connections in the spawned code, and to do whatever else Tornado would normally do when it completes a request without a subprocess. The other answers sound better. You can do this, but don't.


No, it's not "easy" out of the box. What you're describing is "fire and forget". Even if you use a thread pool to farm out the work, that pool still belongs to the main Python process running Tornado.

The best approach is a message queue, something like Carrot. Suppose you have a page users hit to start generating a huge report: you publish a job to the message queue, finish the Tornado request immediately, and then, with some AJAX magic and other tricks outside the scope of Tornado, sit back and wait until the message queue has finished its job (which could technically be happening on a distributed server in a different physical location).
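The producer/consumer flow can be illustrated without a real broker. This sketch uses a stdlib queue.Queue drained by a worker thread as a stand-in for Carrot/RabbitMQ; generate_report is a hypothetical task, and the final jobs.join() is only there so the demo can observe the result, since a real handler would never wait:

```python
import queue
import threading

# stand-in for a real broker: an in-process job queue
jobs = queue.Queue()

def worker():
    """Drain the queue, running one job at a time outside the request cycle."""
    while True:
        job = jobs.get()
        if job is None:      # sentinel to shut the worker down
            break
        job()                # the long-running task executes here
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

def generate_report(results):
    # hypothetical slow task; appends its output for the demo
    results.append('report done')

# inside a handler you would enqueue the job and return immediately:
results = []
jobs.put(lambda: generate_report(results))

jobs.join()  # demo only: block until the worker has processed the job
```

With a real broker the worker would be a separate process, possibly on another machine, and the handler would only publish the message.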
