Boost ASIO iostream random delays on reading
I have a client talking to a server over TCP via localhost. The server uses a Boost ASIO iostream in blocking mode. It accepts incoming connections, reads the request, sends the response and closes the socket. The problem is that the server sometimes hits a random delay of 10-200 milliseconds on the first read via getline. I've set the TCP_NODELAY flag on both the server's and the client's socket. What can be the reason for these delays? I know that I should use select before reading from the socket, but I didn't expect such a large delay via localhost.
Here is the relevant part of the server's code:
#include <boost/asio.hpp>
#include <string>
using namespace boost;        // for asio::io_service
using namespace boost::asio;  // for ip::tcp
using namespace std;

asio::io_service io_service;
ip::tcp::endpoint endpoint(bindAddress, 80);      // bindAddress defined elsewhere
ip::tcp::acceptor acceptor(io_service, endpoint);
ip::tcp::endpoint peer;                           // filled in by accept()
for(;;)
{
    ip::tcp::iostream stream;
    acceptor.accept(*stream.rdbuf(), peer);
    ip::tcp::no_delay no_delay(true);
    stream.rdbuf()->set_option(no_delay);
    string str;
    getline(stream, str); // at this line I get random delays
    //the main part of code
}
I have around 200 requests/second, and the delay happens several times per minute. netstat -m shows that there are enough buffers.
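For reference, the "select before reading" idea mentioned above could look roughly like this. This is only a minimal POSIX sketch: wait_readable is a hypothetical helper, and the descriptor accessor on the streambuf is native() in Boost versions of this era (native_handle() in newer releases).

#include <sys/select.h>

// Hypothetical helper: wait until fd becomes readable or timeout_ms expires.
bool wait_readable(int fd, int timeout_ms)
{
    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(fd, &readfds);

    timeval tv;
    tv.tv_sec  = timeout_ms / 1000;
    tv.tv_usec = (timeout_ms % 1000) * 1000;

    // select() returns the number of ready descriptors, 0 on timeout, -1 on error.
    return select(fd + 1, &readfds, NULL, NULL, &tv) == 1;
}

// Usage before the blocking getline:
// if (wait_readable(stream.rdbuf()->native(), 500))
//     getline(stream, str);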
UPDATE:
It looks like a problem with the client, not the server: Apache HttpClient random delays under high requests/second
Answering this question for the sake of closing it out.
Apache HttpClient random delays under high requests/second
Apache's ab(1) also shows "saw tooth"-like performance because it dispatches -c connections that it monitors via select(2), and only once all of those connections have returned does it dispatch the next batch of -c connections. The alternate (and better) approach would be to establish a new connection as soon as one completes and re-add its file descriptor to ab(1)'s select(2) array, so that -c connections are always actively processing.
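To illustrate the difference, here is a rough sketch of that "keep -c connections always in flight" loop. This is not ab's actual source: make_connection() and handle_response() are hypothetical helpers, and MAX_CONC / to_start stand in for ab's -c and -n options.

#include <set>
#include <sys/select.h>

int  make_connection();        // hypothetical: connect and send one request
void handle_response(int fd);  // hypothetical: read the reply, close the socket

const int MAX_CONC = 10;       // stands in for ab's -c

void run_load(int to_start)    // to_start stands in for ab's -n
{
    std::set<int> active;      // descriptors of in-flight connections
    while (to_start > 0 || !active.empty())
    {
        // Top up the pool so MAX_CONC connections are always in flight,
        // instead of waiting for a whole batch to drain.
        while ((int)active.size() < MAX_CONC && to_start > 0)
        {
            active.insert(make_connection());
            --to_start;
        }

        fd_set readfds;
        FD_ZERO(&readfds);
        int maxfd = 0;
        for (std::set<int>::iterator it = active.begin(); it != active.end(); ++it)
        {
            FD_SET(*it, &readfds);
            if (*it > maxfd) maxfd = *it;
        }

        if (select(maxfd + 1, &readfds, NULL, NULL, NULL) <= 0)
            continue;

        // Each finished connection frees a slot that is refilled on the next pass.
        for (std::set<int>::iterator it = active.begin(); it != active.end(); )
        {
            if (FD_ISSET(*it, &readfds))
            {
                handle_response(*it);
                active.erase(it++);
            }
            else
                ++it;
        }
    }
}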
I've seen ab(1) give some very misleading results because one connection out of a thousand hung (still not a good thing, but it skews results very negatively when using it through a load balancer).