Winsock local loop-back latency
I'm using TCP sockets to provide interprocess communication between two apps on Windows XP. I chose TCP sockets for various reasons. I'm seeing an average round-trip time of 2.8 ms, which is much slower than I expected. Profiling suggests the delay is between one app calling send and the other end's blocking recv returning.
I have two apps, a daemon and a client. They are structured like this pseudocode:
Daemon thread 1 (Listens for new connections):
while (1) {
    SOCKET listener_socket = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    bind(listener_socket, (SOCKADDR*)&server_info, sizeof(SOCKADDR));
    listen(listener_socket, 1);
    SOCKET client_socket = accept(listener_socket, NULL, NULL);
    closesocket(listener_socket);
    CreateThread(client_thread);
}
Daemon client_socket thread (listens for packets from client):
char cmdBuf[256];
int cmdBufAmountData = 0;
while (1)
{
    char recvBuf[128];
    int bytesTransferred = recv(m_clientSocket, recvBuf, sizeof(recvBuf), 0);
    // Copy received data into our accumulated command buffer (commands
    // may be split across packet boundaries)
    memcpy(cmdBuf + cmdBufAmountData, recvBuf, bytesTransferred);
    cmdBufAmountData += bytesTransferred;
    // See if there is one or more complete commands in cmdBuf
    // (commands are separated by '\0')
    while (commandExists(cmdBuf, cmdBufAmountData))
    {
        // do stuff with command
        send(m_clientSocket, outBuf, msgLen, 0);
        // Throw away the command we just processed by shuffling
        // the contents of the command buffer left
        for (int i = 0; i < cmdBufAmountData - cmdLen; i++)
            cmdBuf[i] = cmdBuf[i + cmdLen];
        cmdBufAmountData -= cmdLen;
    }
}
Client thread 1:
start_timer();
send(foo);
recv(barBuf);
end_timer(); // Timer shows values from 0.7ms to 17ms. Average 2.8ms.
Any ideas why the latency is so bad? I suspected Nagle's algorithm, but littering my code with:
BOOL bOptVal = TRUE;
setsockopt(socket, IPPROTO_TCP, TCP_NODELAY, (char*)&bOptVal, sizeof(BOOL));
doesn't help. Do I need to set this on both the client and daemon sockets? (I am already doing both.)
I'm on a quad core machine with almost no load, no disk activity etc.
Firstly, in your server, the while loop should be around the accept rather than the listen. You only need to bind and listen once, so, something more like...
SOCKET listener_socket = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
bind(listener_socket, (SOCKADDR*)&server_info, sizeof(SOCKADDR));
listen(listener_socket, 1);
while (1) {
    SOCKET client_socket = accept(listener_socket, NULL, NULL);
    CreateThread(client_thread);
}
Next, yes, if you want to turn off Nagle you need to do it on both the accepted server socket and the connected client socket. You can do it just after you connect/accept. So, if you're only setting it on one socket then that may be your issue.
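Something like this, as a minimal sketch (POSIX-flavored so it's self-contained; the Winsock call is the same apart from using SOCKET handles and a BOOL option value, as in your snippet):

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Disable Nagle's algorithm on a connected or accepted socket.
   Returns 0 on success, -1 on failure.
   Call this on BOTH ends of the connection: just after connect()
   on the client, and just after accept() on the server. */
static int disable_nagle(int sock)
{
    int flag = 1;
    return setsockopt(sock, IPPROTO_TCP, TCP_NODELAY,
                      (const char *)&flag, sizeof(flag));
}
```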
Given that you're using TCP, I assume you're reading until you have your complete message and not assuming that one send on one side equals one recv on the other (i.e., I assume your code is abbreviated and doesn't show the normal recv loop).
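That recv loop might look something like this, assuming the '\0'-terminated command framing from your question (again POSIX-flavored for brevity; the Winsock version is structurally identical):

```c
#include <string.h>
#include <sys/socket.h>

/* Read from `sock` until buf holds at least one complete
   '\0'-terminated command. Accumulates across recv() calls,
   because TCP is a byte stream: one send() on the peer does
   not map to one recv() here.
   `*used` is how many bytes are already buffered on entry,
   and is updated on exit.
   Returns the total bytes buffered, or -1 on error/EOF/overflow. */
static int recv_command(int sock, char *buf, int cap, int *used)
{
    while (memchr(buf, '\0', (size_t)*used) == NULL) {
        if (*used == cap)
            return -1;              /* command longer than buffer */
        int n = (int)recv(sock, buf + *used, (size_t)(cap - *used), 0);
        if (n <= 0)
            return -1;              /* error or peer closed */
        *used += n;
    }
    return *used;
}
```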
How many clients? How many threads?
And you shouldn't close the listening socket until you want to exit your server.
I would have a look at named pipes rather than sockets if you don't mind being wedded to the Windows API.