
Memory leak when running Drupal on Linux/Apache, but the same application doesn't leak memory on Windows Server

My website currently runs Drupal 6, and Apache httpd uses the prefork MPM. When I test my web application, the memory won't release at all; usage just keeps adding up. On Windows, however, where Apache uses mpm_winnt.c, it works great, with no memory leak.

Would it help if I switched to worker.c on my Linux server?


Worker mode is not available with mod_php for Apache, because even though PHP 5 itself can be built thread-safe, that is really not the case for all PHP libraries and extensions (for example, a locale call will enforce the locale settings for all PHP threads in the Apache process).

So do not use the worker model, except perhaps if you run PHP outside of Apache (with php-fpm). On Windows you may experience the same thread-corruption problems, but officially PHP distributions on Windows are considered thread-safe, so you should be fine as long as you do not add a self-compiled external PHP component...

When you say:

the memory won't release at all

I'm not sure you fully understand what's happening. Apache forks a large number of child processes. If your Drupal 6 application runs in those children and you allow a big memory_limit for PHP, you can be fairly sure Drupal will use it: with a limit of 128M, an Apache child running PHP can grow to that size (if Drupal asks for it, and Drupal with Views is a serious RAM eater, for sure). When the request ends, the Apache child does not release the RAM, as it may need the same amount for the next request.

So if you allow Apache 100 MaxClients and a 128M memory limit, you can end up with 128M × 100 = 12.5GB of RAM. Now, on Linux the fact that available RAM gets used is not a problem in itself; you could even see it as a good thing. You have RAM available, so why not use it? Your real problem may simply be that you do not have that much RAM (12.5GB here, for Apache alone).
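As an illustration of that arithmetic, here is a sketch of the two settings involved. The values are examples only, not recommendations; size them to your own RAM:

```apache
# httpd.conf -- prefork MPM sizing (illustrative values only)
<IfModule prefork.c>
    StartServers       5
    MinSpareServers    5
    MaxSpareServers   10
    MaxClients       100   # worst case: 100 children, each up to memory_limit
</IfModule>

# php.ini
# memory_limit = 128M   -> worst case for Apache+PHP: 100 x 128M = 12.5GB
```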

You can force Apache children to die with the MaxRequestsPerChild directive: with a value of 100, for example, each Apache child is killed and re-created after handling 100 requests. But if all your requests need 128M of RAM, you will soon hit the same problem.
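A minimal sketch of that directive (100 is just the example value used above):

```apache
# httpd.conf -- recycle each prefork child after 100 requests,
# returning its accumulated RAM to the OS (0 means "never recycle")
MaxRequestsPerChild 100
```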

  • set a low value for MaxClients (less than available RAM / average RAM of one process),
  • push the MySQL server onto another machine; it is a big RAM eater as well,
  • try to find a low limit for memory_limit (this can be quite hard with Drupal, but check the Drupal profiling modules and the visibility settings of all your blocks; you'll get nice hints),
  • if your project is quite big, buy several Apache servers and build a solution with HTTP load balancing,
  • use PHP in FastCGI mode (like php-fpm), so that at least all non-PHP requests are served by a worker-MPM Apache (but you'll have the same RAM-usage problems inside your FastCGI PHP server).
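For the last point, here is a hedged sketch of handing PHP off to php-fpm, assuming a modern Apache 2.4 with mod_proxy_fcgi (on the Apache 2.2 installations typical of the Drupal 6 era you would use mod_fastcgi instead); the socket path is an assumed example:

```apache
# httpd.conf -- Apache 2.4 (event/worker MPM) delegating PHP to php-fpm.
# The socket path is an assumption; match it to your php-fpm pool config.
<FilesMatch "\.php$">
    SetHandler "proxy:unix:/run/php-fpm/www.sock|fcgi://localhost"
</FilesMatch>
```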