
My hot-deploying (toy) server: need your design inputs (homework)

from multiprocessing import Process

def worker():
    pass  # placeholder: serve requests until told to die

a = Process(target=worker)
a.start()

I am making a multiple-worker-process app (don't laugh yet) in which each worker can gracefully reload. Whenever the code is updated, new requests are served by new worker processes running the new code. This is so that

  1. a newly launched worker process contains the updated code
  2. no requests are dropped during the switchover

I already made a worker that listens:

  1. serves requests when it gets a request signal
  2. kills itself when the next signal is a control signal

I did it with ZeroMQ. The clients connect to this server using ZeroMQ; they do not interact over HTTP.
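A self-contained sketch of that worker loop, using a `multiprocessing.Queue` in place of the ZeroMQ socket so it runs without a broker (the `CONTROL_DIE` sentinel is my stand-in for your control signal):

```python
from multiprocessing import Process, Queue

CONTROL_DIE = "control:die"  # assumed sentinel standing in for the control signal

def worker(inbox: Queue, outbox: Queue) -> None:
    """Serve request messages until a control message arrives, then exit."""
    while True:
        msg = inbox.get()                 # blocks, like a zmq recv()
        if msg == CONTROL_DIE:
            break                         # next signal was a control signal: die
        outbox.put(f"served: {msg}")      # serve the request

if __name__ == "__main__":
    inbox, outbox = Queue(), Queue()
    p = Process(target=worker, args=(inbox, outbox))
    p.start()
    inbox.put("req-1")
    inbox.put(CONTROL_DIE)
    p.join()
    print(outbox.get())                   # prints "served: req-1"
```

Swapping the queues for a ZeroMQ socket only changes the transport; the serve-until-control-signal shape stays the same.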


What is a good way to reload the code? Can you explain a scheme that is simple and stupid enough to be robust?


What I have in mind / can do

Launch a thread within the main process that iterates:

  1. Signal every worker process to die
  2. Launch new worker processes

But this approach drops requests (I configured it that way) that arrive between the death of the last old worker and the spawning of the first new worker.
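The core of that loop can be sketched as one function (`signal_die` and `spawn_worker` are placeholders for however you deliver the control signal and fork a worker):

```python
def reload_once(workers, signal_die, spawn_worker):
    """One iteration of the naive scheme: kill all, then respawn all.

    Between the last signal_die() and the first spawn_worker() there is a
    window with zero live workers; any request arriving then is dropped.
    """
    for w in workers:
        signal_die(w)                          # 1. signal every worker to die
    return [spawn_worker() for _ in workers]   # 2. launch new worker processes
```

The main process would run this from a watcher thread each time it notices a code update.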


And no, I am not a college student. The "homework" just means a curiosity-driven pursuit.


Reloading code in python is a notoriously difficult problem.

Here's how I would deal with it:

  • At server startup, listen on your HTTP port (but do not start accepting connections).
  • Use multiprocessing or some such to create some worker processes; this must happen after the socket starts listening so that the subprocesses inherit the socket.
  • Each worker can then accept connections and service requests.
  • When the parent process learns that it should reload, it shuts down the listening socket.
  • When a worker tries to accept on the shut-down socket, it receives a socket.error exception and should terminate.
  • The parent process can start a new main process (as in subprocess.Popen(sys.argv)); the new process can start accepting connections immediately.
  • The old process can now wait for its child workers to finish; the children cannot accept new connections (since the listening socket is shut down). Once all child processes have finished handling their in-flight requests, the parent process can also terminate.
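Under fork-style multiprocessing on Unix, the steps above can be sketched like this. The SIGHUP trigger in `wait_for_reload_signal` is my assumption, and note it must be `shutdown()`, not `close()`: closing only drops the parent's file descriptor, while `shutdown()` affects the shared open socket, so the children's blocked `accept()` calls actually fail.

```python
import signal
import socket
import subprocess
import sys
from multiprocessing import Process

def worker(listener: socket.socket) -> None:
    # Each worker accepts on the listening socket inherited from the parent.
    while True:
        try:
            conn, _addr = listener.accept()
        except OSError:
            break  # parent shut the listener down: finish up and exit
        with conn:
            conn.sendall(b"HTTP/1.0 200 OK\r\n\r\nhello\n")

def wait_for_reload_signal() -> None:
    # Assumed trigger: block until SIGHUP says new code is deployed.
    signal.sigwait({signal.SIGHUP})

def main() -> None:
    listener = socket.socket()
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("0.0.0.0", 8080))
    listener.listen(128)                      # listen BEFORE forking workers
    workers = [Process(target=worker, args=(listener,)) for _ in range(4)]
    for p in workers:
        p.start()                             # children inherit the socket
    wait_for_reload_signal()
    listener.shutdown(socket.SHUT_RDWR)       # wakes every blocked accept()
    subprocess.Popen(sys.argv)                # new main process, new code
    for p in workers:
        p.join()                              # old workers drain in-flight work
```

Run `main()` as the entry point; this relies on the fork start method, so passing the socket to `Process` works on Linux but not with spawn-based platforms.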


The way that I did this in python is fairly simple but depends on a middleman or message broker. Basically, I receive a message, process it and ack it. If a message is not acked, then after a timeout the broker requeues it.

With this in place, you simply kill and restart the process. In my case the process traps SIGINT and does an orderly shutdown. However it dies, the supervisor notices that the worker has died and starts a new one, which continues processing messages from the queue.
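The broker side of that ack/requeue contract can be sketched with a toy in-memory queue (a real deployment would use RabbitMQ, a ZeroMQ-based broker, or similar; this only shows the timeout-then-requeue behaviour):

```python
import time
from collections import deque

class RequeueingBroker:
    """Toy broker: a message handed to a worker must be acked within
    ack_timeout seconds, otherwise it is requeued for another worker."""

    def __init__(self, ack_timeout: float = 5.0):
        self.ack_timeout = ack_timeout
        self.ready = deque()      # messages waiting for a worker
        self.pending = {}         # msg_id -> (ack deadline, message)
        self._next_id = 0

    def publish(self, message):
        self.ready.append(message)

    def fetch(self, now=None):
        """Deliver the next message; it stays pending until acked."""
        now = time.monotonic() if now is None else now
        self._requeue_expired(now)
        if not self.ready:
            return None
        self._next_id += 1
        msg = self.ready.popleft()
        self.pending[self._next_id] = (now + self.ack_timeout, msg)
        return self._next_id, msg

    def ack(self, msg_id):
        self.pending.pop(msg_id, None)

    def _requeue_expired(self, now):
        for msg_id, (deadline, msg) in list(self.pending.items()):
            if now > deadline:
                del self.pending[msg_id]
                self.ready.append(msg)   # unacked in time: requeue it
```

Because delivery is at-least-once here, workers should make their message handling idempotent.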

I was inspired by Erlang's supervision tree model where everything is designed to survive the death of a process. I even made my workers send a heartbeat to the supervisor periodically (ZeroMQ PUB to supervisor SUB) so that the supervisor can kill and restart a process if it hangs for any reason.
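The supervisor's hang-detection step reduces to a staleness check over last-seen heartbeat times. In the setup above, `last_seen` would be updated by the supervisor's SUB socket each time a worker's PUB heartbeat arrives; the 3-second timeout is an assumption.

```python
import time

HEARTBEAT_TIMEOUT = 3.0  # assumed: restart a worker silent for this long

def stale_workers(last_seen, now=None, timeout=HEARTBEAT_TIMEOUT):
    """Return pids whose last heartbeat is older than `timeout` seconds.

    last_seen maps worker pid -> timestamp of its most recent heartbeat.
    The supervisor kills and respawns every pid returned here.
    """
    if now is None:
        now = time.monotonic()
    return [pid for pid, t in last_seen.items() if now - t > timeout]
```

The timeout should be several heartbeat intervals long so a single delayed message does not trigger a restart.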
