
What happened to the TCP Nagle flush?

According to this Socket FAQ article, Nagle's algorithm is one of several mechanisms that can cause data to sit in the TCP send buffer instead of hitting the wire. The delay introduced by Nagle's algorithm can be up to 200 ms.

For some reason, Nagle's algorithm can be turned off completely, but not flushed just once. This is really puzzling to me. Why is there no way to say "just this one time, don't wait for any more data. Just act as if Nagle's 200 ms were up."?

Wouldn't that make perfect sense, and strike a good balance between no Nagle at all, Nagle all the time, and implementing one's own protocol from scratch?


Good question. I guess nobody ever really needed it, or they found a way around it. If I remember correctly, enabling TCP_NODELAY pushes the queued data out immediately; then you can simply disable TCP_NODELAY again.
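For the record, here is roughly what that toggle-based "flush" looks like. This is a minimal sketch, assuming fd is an already-connected TCP socket; the helper name tcp_flush is made up for illustration, and it relies on the behavior described above (setting TCP_NODELAY pushes out pending data):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Hypothetical helper: "flush" pending data by toggling TCP_NODELAY.
     * Setting the option disables Nagle and pushes whatever is queued;
     * clearing it afterwards restores normal Nagle batching. */
    static int tcp_flush(int fd)
    {
        int on = 1, off = 0;

        if (setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &on, sizeof(on)) < 0)
            return -1;
        return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &off, sizeof(off));
    }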

Of course, this comes at the high cost of two system calls per "flush". What you could do: send(2) on Unix implementations has a flags argument. You could implement your own flag, something like MSG_JUSTPUSHIT (okay, maybe another name), and handle it in tcp_output.


In performance-sensitive applications where the delays introduced by Nagle's algorithm are an issue, it's often easier to just disable Nagle's algorithm entirely and emulate its batching in software, e.g. by using scatter/gather I/O (writev()) or by implementing your own buffering where needed. As an added bonus, this also cuts out some system call overhead.
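As a rough sketch of that approach (the function name send_message and the header/payload split are invented for illustration; it assumes TCP_NODELAY was set on fd once at connection setup):

    #include <string.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Illustrative sketch: with Nagle disabled on fd, gather a header and
     * a payload into one writev() call so they leave in a single send,
     * instead of two tiny writes that Nagle would otherwise coalesce. */
    static ssize_t send_message(int fd, const char *hdr, const char *body)
    {
        struct iovec iov[2];

        iov[0].iov_base = (void *)hdr;
        iov[0].iov_len  = strlen(hdr);
        iov[1].iov_base = (void *)body;
        iov[1].iov_len  = strlen(body);

        return writev(fd, iov, 2);
    }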

Alternatively, you can open up two separate sockets and disable Nagling on one of them. Just keep in mind that data sent on one socket won't necessarily be synced up with the other one.
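A hypothetical setup for that pattern (open_pair, bulk_fd, and urgent_fd are invented names; error handling omitted for brevity):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Sketch: two connections to the same peer, one left with Nagle for
     * bulk traffic, one with TCP_NODELAY for latency-sensitive messages.
     * Remember: the two byte streams are not ordered relative to each other. */
    static void open_pair(const struct sockaddr_in *addr,
                          int *bulk_fd, int *urgent_fd)
    {
        int on = 1;

        *bulk_fd   = socket(AF_INET, SOCK_STREAM, 0);
        *urgent_fd = socket(AF_INET, SOCK_STREAM, 0);

        connect(*bulk_fd,   (const struct sockaddr *)addr, sizeof(*addr));
        connect(*urgent_fd, (const struct sockaddr *)addr, sizeof(*addr));

        setsockopt(*urgent_fd, IPPROTO_TCP, TCP_NODELAY, &on, sizeof(on));
    }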
