Disable TCP Delayed ACKs

I have an application that receives relatively sparse traffic over TCP with no application-level responses. I believe the TCP stack is sending delayed ACKs (based on glancing at a network packet capture). What is the recommended way to disable delayed-ACK in the network stack for a single socket? I've looked at TCP_QUICKACK, but it seems that the stack will change it under my feet anyways.

This is running on a Linux 2.6 kernel, and I am not worried about portability.


You could call setsockopt(sockfd, IPPROTO_TCP, TCP_QUICKACK, (int[]){1}, sizeof(int)) after every recv you perform. It appears that TCP_QUICKACK is only reset when data is sent or received; if you're not sending any data, it will only get reset when you receive data, in which case you can simply set it again.
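A minimal sketch of that pattern, assuming Linux headers and a connected TCP socket; the `recv_quickack` wrapper name is made up for illustration:

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Receive wrapper that re-arms TCP_QUICKACK after every recv(),
 * since the kernel may clear the flag on its own. */
static ssize_t recv_quickack(int sockfd, void *buf, size_t len)
{
    ssize_t n = recv(sockfd, buf, len, 0);
    int one = 1;
    /* Re-enable quickack mode; ignore errors for brevity. */
    setsockopt(sockfd, IPPROTO_TCP, TCP_QUICKACK, &one, sizeof(one));
    return n;
}
```

The wrapper keeps the re-arming in one place, so every receive path in the application gets the same treatment.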

You can check this in the 14th field of /proc/net/tcp; if it is not 1, ACKs should be sent immediately... if I'm reading the TCP code correctly. (I'm not an expert at this either.)


I believe you can use setsockopt() with the TCP_NODELAY option, which disables the Nagle algorithm.

Edit: Found a link: http://www.ibm.com/developerworks/linux/library/l-hisock.html

Edit 2: Tom is correct. Nagle does not affect delayed ACKs.
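For reference, a sketch of the TCP_NODELAY call this answer describes, assuming Linux headers; note (per Edit 2) that this affects when outgoing data is sent, not when ACKs go out:

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Disable the Nagle algorithm on an existing socket.
 * Returns 0 on success, -1 on error (as setsockopt does). */
static int disable_nagle(int sockfd)
{
    int one = 1;
    return setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
}
```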


The accepted answer has a bug; it should be:

int flags = 0;
socklen_t flglen = sizeof(flags);
setsockopt(sfd, SOL_TCP, TCP_QUICKACK, &flags, flglen);

(SOL_TCP, not IPPROTO_TCP)

Posting this here in case someone needs it.

As @damolp pointed out, it is not actually a bug; keeping this here for the record.
