
Live (long-polling) connection to Django, via Nginx (or Apache) – reducing the number of queries

I have a table something like this:

from django.db import models
from django.contrib.auth.models import User

class last10msg(models.Model):
    user = models.ForeignKey(User, on_delete=models.CASCADE)
    date = models.DateField()
    msg = models.CharField(max_length=254)

I want to create long polling that updates a table on the client side with the last 10 rows of this table from the database.

At first it was like this:

The user sends an AJAX request for the last 10 rows of the table. The server responds, and the JS code updates the table (with changed or identical data). Then the user sends another request.

Then I thought: check on the server whether the data is new (compare the latest date in the query with the latest date in the table that the user sent via the AJAX request). If it is new, send the response; otherwise idle for one minute, then check again: if new, send the response, else idle for one minute more, and so on. Then the user sends another request.

Finally:

I think all requests could be sent to one global running function. This function gets all rows since the last check and, if there are new ones (comparing the date with the date sent from the client, and request.user with the user field), returns the response to the related view, and the view responds without closing the connection. (Maybe it could replace the last date in the query with the date the user sent, to use in the comparison in the next check, without waiting for the client to re-send the last date in its table, which is obviously the same.) The client then stays on the global function's list for the next check, until the user logs out or closes the browser.

but I really don't know how to implement this.

EDIT 1:

I found this HTTP Push Module, but there is no example for Django, and I don't know how to implement the global function for my purpose.

EDIT 2:

I think it is better to break this question into smaller parts.

question 1:

How can I create something like this (link), but using Django instead of PHP?

github.com/slact/nginx_http_push_module/blob/master/tests/test.py

The link above is a small example in pure Python (I can't understand it).

question 2: How do I create a global function that takes the list of logged-in users and, via long polling, returns the last updated row for each logged-in user?


Huh -- OK, look: your post was a little hard to follow (so if I'm missing something, you might want to clarify your language), but I think I get the sense of what you're doing.

If you're always dealing with the last 10 messages (or really, the last N messages where in this case N = 10) you might find Redis to be a good fit. I set up long-polling to provide the status of a queue worker to an interface widget, via WebSockets -- my app was a Django app (running on gunicorn fronted by nginx) but I added a tiny little tornado class for this worker thread and its WebSocket status interface with no problem.
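For the last-N part specifically, the usual Redis pattern is LPUSH followed by LTRIM (and LRANGE to read back). As a minimal sketch, here is an in-memory stand-in using `collections.deque` that shows the same keep-only-the-newest-N behavior without a Redis server; the class and method names are mine:

```python
from collections import deque

class LastN:
    """In-memory stand-in for the Redis last-N pattern:
    LPUSH key msg; LTRIM key 0 n-1; read back with LRANGE key 0 n-1."""

    def __init__(self, n=10):
        # a bounded deque silently drops the oldest item, like LTRIM
        self.items = deque(maxlen=n)

    def push(self, msg):
        # Redis equivalent: LPUSH (then LTRIM to cap the list length)
        self.items.appendleft(msg)

    def latest(self):
        # Redis equivalent: LRANGE key 0 -1 (newest first)
        return list(self.items)
```

With real Redis you would keep one such list per user (or per session), and the long-poll handler only has to read it, never touch the relational table.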

The status function had to compare the last N values; since I was using Redis data structures for the queue, it was easier to do the comparison in the client WebSocket consumer. I had that luxury, as the behavior that hinged on the comparison was only a UI state, and not an update to data or app logic -- which I assume is the sort of thing you're worried about.

What I am getting at is that the model structure you provided will work for the task, but it isn't particularly well-suited. If Redis and Tornado sound too exotic to you, consider adding fields to group your records into per-session sets... maybe with indices derived from your session identifiers or some such.

Let me know if that is helpful.
