Why use nginx with Catalyst/Plack/Starman?

I am trying to deploy my little Catalyst web app using Plack/Starman. All the documentation seems to suggest I want to use this in combination with nginx. What are the benefits of this? Why not use Starman straight up on port 80?


It doesn't have to be nginx in particular, but you want some kind of frontend server proxying to your application server for a few reasons:

  1. So that you can run the Catalyst server on a high port, as an ordinary user, while the frontend server binds to port 80 (which requires root).

  2. To serve static files (ordinary resources like images, JS, and CSS, as well as any downloads you might want to use X-Sendfile or X-Accel-Redirect with) without tying up a Perl process for the duration of the transfer (see the config sketch after this list).

  3. It makes it easier to move later to a more complicated setup involving e.g. Edge Side Includes, having the webserver serve directly from memcached or MogileFS (both things nginx can do), or a load-balancing / HA config.
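For concreteness, here is a minimal sketch of what such a frontend config might look like in nginx, covering points 1 and 2 above. The port, paths, and hostname are illustrative assumptions, not anything specific to your app:

    # Minimal sketch; 127.0.0.1:5000, /srv/myapp, and example.com are
    # assumptions. Substitute your own values.
    server {
        listen 80;
        server_name example.com;

        # Point 2: serve static assets directly (via the kernel's
        # sendfile) so no Perl worker is tied up for the transfer.
        location /static/ {
            root /srv/myapp/root;
            sendfile on;
            expires 30d;
        }

        # X-Accel-Redirect target: the app emits the header and nginx
        # streams the file itself from this internal-only location.
        location /downloads/ {
            internal;
            alias /srv/myapp/files/;
        }

        # Point 1: everything else is proxied to Starman on a high
        # port, running as an ordinary user.
        location / {
            proxy_pass http://127.0.0.1:5000;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }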


I asked this question on #plack and got the following response from @nothingmuch (I added formatting):

With nginx you can set up load-balancing/failover type stuff. If the site is small/simple, it might be overkill.

I don't know of any real disadvantages to Starman on its own. Perhaps if you get many hits on static files, nginx would use less CPU/memory to serve them, but that's unlikely to be significant in a typical web app. Big downloads might tie up Starman workers, though. (Perhaps not, with sendfile.) That's about all I can think of.

...A failover setup can be nice if you want to do upgrades with no downtime. ("Fail" the old version.)
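To make the failover idea concrete, here is a sketch in nginx (the two ports are assumptions): point the proxy at an upstream pool rather than a single backend, then take instances in and out of the pool during upgrades.

    # Hypothetical two-instance pool; ports are assumptions.
    upstream starman_pool {
        server 127.0.0.1:5000;
        server 127.0.0.1:5001 backup;  # used only if the primary is down
    }

    server {
        listen 80;
        location / {
            # nginx retries the backup on connection errors, so one
            # instance can be restarted while the other keeps serving.
            proxy_pass http://starman_pool;
        }
    }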


Another reason is that a lightweight frontend server (even Apache is fine) consumes much less memory per connection than a typical Starman worker: a couple of MB versus tens or even over a hundred MB. Since a connection stays open for some time, especially if you use keep-alive, you can support far more simultaneous connections with much less RAM. Just make sure the frontend's proxy buffers are large enough to absorb a typical HTTP response from the backend in one go; the backend is then immediately free to process the next request.
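In nginx terms, that means sizing the proxy buffers so a typical response fits entirely in memory. The values below are illustrative assumptions, not tuned recommendations:

    location / {
        proxy_pass http://127.0.0.1:5000;
        proxy_buffering on;     # the default; shown here for clarity
        proxy_buffer_size 8k;   # response headers plus start of body
        proxy_buffers 16 32k;   # up to ~512k buffered per request
    }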
