Killing individual Apache processes in mod_python

We are having a problem with individual Apache processes using large amounts of memory, depending on the request, and never releasing it back to the system. Since these requests can happen at any time, the web server is gradually pushed into swap, rendering it unresponsive even to SSH. Worse, after the request has finished, Python fails to release the memory back into the wild, which leaves a number of 500 MB - 1 GB Apache processes lying around.

We push very few requests per second, but each request has the potential to be very heavy.

What I would like is a way to kill an individual Apache child process after it has finished serving a request if its resident memory exceeds a certain threshold. I have tried several ways of doing this from inside mod_python, but it appears that any form of process exit results in the response not completing to the client.
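For reference, this is roughly the kind of in-handler check we have tried (a sketch only; the handler body, the 500 MB threshold and the /proc lookup are illustrative), and it is exactly this sort of exit that cuts the response short:

    import os
    from mod_python import apache

    MAX_RSS_BYTES = 500 * 1024 * 1024  # illustrative threshold

    def _resident_bytes():
        # Second field of /proc/<pid>/statm is resident memory in pages.
        fields = open('/proc/%d/statm' % os.getpid()).read().split()
        return int(fields[1]) * os.sysconf('SC_PAGESIZE')

    def handler(req):
        req.content_type = 'text/plain'
        req.write('... heavy work here ...')
        if _resident_bytes() > MAX_RSS_BYTES:
            os._exit(0)  # frees the memory, but the client never sees the end of the response
        return apache.OK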

Short of doing a graceful restart of all the processes (which we really want to avoid) whenever this happens, is there any way to tell Apache to arbitrarily kill off a process after it has finished serving a request? All ideas are welcome.

As an additional caveat, due to the legacy nature of the system, we can’t upgrade to a later version of Python, so we can’t utilize the improved memory performance of 2.5. Similarly, we are stuck with our current OS.

Versions:

System: Red Hat Enterprise Linux 4
Apache: 2.0.55
Python: 2.3.5


I'd say that even if it is possible, it would be a tremendous (and unstable) hack - you should instead set up a process external to Apache that supervises the running processes and kills an individual Apache child when it goes beyond predefined memory/time limits.

Such a script can be kept running continuously with a main loop that performs its checks every few seconds, or it could even be put in crontab to run every minute.

I see no reason to try to do that from inside the serving processes themselves.
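A minimal sketch of such a watchdog, assuming the prefork MPM (so killing one child just makes the parent fork a replacement) and a Linux /proc filesystem; the 500 MB threshold and the 'httpd' process name are illustrative and may need adjusting:

    #!/usr/bin/env python
    # External watchdog: kill any httpd child whose resident memory exceeds a limit.
    # Run it from cron every minute, or wrap the main block in a sleep loop.
    import os, signal

    MAX_RSS_BYTES = 500 * 1024 * 1024              # illustrative threshold
    PAGE_SIZE = os.sysconf('SC_PAGESIZE')

    def httpd_processes():
        # Return {pid: ppid} for every process whose command name is httpd.
        procs = {}
        for entry in os.listdir('/proc'):
            if not entry.isdigit():
                continue
            try:
                fields = open('/proc/%s/stat' % entry).read().split()
                if fields[1] == '(httpd)':         # fields: pid, (comm), state, ppid, ...
                    procs[int(entry)] = int(fields[3])
            except (IOError, OSError):
                pass                               # process exited while we were looking
        return procs

    def resident_bytes(pid):
        # Second field of /proc/<pid>/statm is resident memory in pages.
        return int(open('/proc/%d/statm' % pid).read().split()[1]) * PAGE_SIZE

    if __name__ == '__main__':
        procs = httpd_processes()
        for pid, ppid in procs.items():
            # Only touch children (their parent is the main httpd), never the parent itself.
            if ppid in procs and resident_bytes(pid) > MAX_RSS_BYTES:
                os.kill(pid, signal.SIGTERM)       # the parent notices and forks a fresh child

Sending SIGTERM to a single child only affects that child; the parent Apache forks a replacement on its own, so there is no need for a full graceful restart.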
