
nginx, fastcgi and open sockets

I'm experimenting with FastCGI on nginx, but I've run into some problems. Nginx doesn't reuse connections: it sends 0 in the BeginRequest flags, so the application is expected to close the connection after the request has finished.
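For reference, this is roughly how I decide whether to keep the connection open (FCGI_KEEP_CONN is bit 0 of the BeginRequest flags byte in the FastCGI spec; the helper name here is just from my own code):

enum FCGI_KEEP_CONN = 1; // bit 0 of the FCGI_BEGIN_REQUEST flags byte

// When the flag is not set, the application has to close the connection
// after finishing the request.
bool shouldKeepConnection(ubyte flags)
{
    return (flags & FCGI_KEEP_CONN) != 0;
}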

I have the following code for closing:

socket.shutdown(SocketShutdown.BOTH);
socket.close();

The problem is that the connections are not actually closed. They linger in TIME_WAIT, and nginx (or something) won't keep opening new connections. My guess is that I'm doing something wrong when closing the sockets, but I don't know what. On a related note: how can I get nginx to keep connections open?

This is using nginx 1.0.6 and D 2.055

EDIT: Haven't gotten any closer, but I also checked the linger option, and it's off:

linger l;
socket.getOption(SocketOptionLevel.SOCKET, SocketOption.LINGER, l);
assert(l.on == 0); // off

getOption returns 4, though. No idea what that means; the return value is undocumented.

EDIT: I've also tried using TCP_NODELAY on the last message sent, but this didn't have any effect either:

socket.setOption(SocketOptionLevel.TCP, SocketOption.TCP_NODELAY, 1);

EDIT: nginx 1.1.4 supports keep-alive connections. This doesn't work as expected, though. It correctly reports that the server is responsible for connection lifetime management, but it still creates a new socket for each request.


NGINX proxy keepalive

Regarding nginx (v1.1) keepalive for FastCGI: the proper way to configure it is as follows:

upstream fcgi_backend {
  server localhost:9000;
  keepalive 32;
}

server {
  ...
  location ~ \.php$ {
    fastcgi_keep_conn on;
    fastcgi_pass fcgi_backend;
    ...
  }
}

TIME_WAIT

The TCP TIME_WAIT connection state has nothing to do with linger, TCP_NODELAY, timeouts, and so on. It is managed entirely by the OS kernel and can only be influenced by system-wide configuration options. Generally it is unavoidable; it is just the way the TCP protocol works. Read about it here and here.

The most radical way to avoid TIME_WAIT is to reset the TCP connection on close (send an RST packet) by setting linger=ON and linger_timeout=0. But doing it this way is not recommended for normal operation, as you might lose unsent data. Only reset the socket under error conditions (timeouts, etc.).
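In D (which the question uses), a minimal sketch of that abortive close against today's std.socket API (the Linger struct may be spelled differently in D 2.055), and again only for error paths:

import std.socket;

// Abortive close: enabling SO_LINGER with a zero timeout makes close()
// send an RST instead of going through FIN/TIME_WAIT. Unsent data is lost.
void abortiveClose(Socket sock)
{
    Linger l;
    l.on = 1;
    l.time = 0;
    sock.setOption(SocketOptionLevel.SOCKET, SocketOption.LINGER, l);
    sock.close();
}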

What I would try is the following. After you have sent all your data, call socket.shutdown(WRITE) (this sends a FIN packet to the other party) but do not close the socket yet. Then keep reading from the socket until you receive an indication that the connection has been closed by the other end (in C this is typically indicated by a 0-length read()). After receiving this indication, close the socket. Read more about it here.
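A rough sketch of that shutdown-then-drain sequence in D, assuming std.socket (receive() returning 0 plays the role of the 0-length read() in C):

import std.socket;

// Half-close, drain, then close: send FIN, keep reading until the peer
// closes its side (receive() returns 0), and only then close the socket.
void gracefulClose(Socket sock)
{
    sock.shutdown(SocketShutdown.SEND); // FIN: we are done writing

    ubyte[4096] buf;
    while (true)
    {
        auto n = sock.receive(buf[]);
        if (n == 0 || n == Socket.ERROR)
            break;
    }
    sock.close();
}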

TCP_NODELAY & TCP_CORK

If you are developing any sort of network server, you must study these options, as they do affect performance. Without them you might experience ~20 ms delays (Nagle delay) on every packet sent. Although these delays look small, they can adversely affect your requests-per-second statistics. A good read about it is here.
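On the D side, a minimal sketch of disabling Nagle on a connection. Note that in std.socket this option lives at the TCP option level; TCP_CORK is Linux-specific and, as far as I know, not wrapped by std.socket, so it would need a raw setsockopt:

import std.socket;

// Disable Nagle's algorithm so small writes (e.g. the final FastCGI
// end-of-request records) are sent immediately instead of being delayed.
void disableNagle(Socket sock)
{
    sock.setOption(SocketOptionLevel.TCP, SocketOption.TCP_NODELAY, 1);
}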

Another good reference to read about sockets is here.

About FastCGI

I would agree with the other commenters that using the FastCGI protocol to talk to your backend might not be a very good idea. If you care about performance, you should implement your own nginx module (or, if that seems too difficult, a module for some other server such as NXWEB). Otherwise use HTTP: it is easier to implement and far more versatile than FastCGI. And I would not say HTTP is much slower than FastCGI.


Hello simendsjo,

The following suggestions might be completely off target.

I use NginX myself for development purposes; however, I’m absolutely not an expert on NginX.

NginX Workers

Nevertheless, your problem reminded me of something about workers and worker processes in NginX.

Among other things, the workers and worker processes in NginX are used to decrease latency when workers are blocked on disk I/O, and to limit the number of connections per process when select()/poll() is used.

You can find more info here.

NginX Fcgiwrap and socket

Another pointer might be the following code, although this example is specific to Debian.

#!/usr/bin/perl
# Spawn one or more fcgiwrap children listening on a UNIX socket.

use strict;
use warnings FATAL => qw( all );

use IO::Socket::UNIX;

my $bin_path = '/usr/local/bin/fcgiwrap';
my $socket_path = $ARGV[0] || '/tmp/cgi.sock';
my $num_children = $ARGV[1] || 1;

# Close STDIN first so the listening socket created below ends up on
# file descriptor 0, which is where a FastCGI application expects it.
close STDIN;

unlink $socket_path;
my $socket = IO::Socket::UNIX->new(
    Local => $socket_path,
    Listen => 100,
);

die "Cannot create socket at $socket_path: $!\n" unless $socket;

# Fork the requested number of children; each child exec()s fcgiwrap,
# inheriting the listening socket on fd 0.
for (1 .. $num_children) {
    my $pid = fork;
    die "Cannot fork: $!" unless defined $pid;
    next if $pid;    # parent: continue to the next child

    exec $bin_path;
    die "Failed to exec $bin_path: $!\n";
}

You can find more information about this solution here.


Set the socket timeout as low as it can go and then close the socket. What happens if you try to write anything to the socket after that? It is possible to push unescaped binary through to signal a connection close, forcing it to end. That's how IE became known as Internet Exploder.
