
What effects can inconsistent latency have on TCP applications?

I am testing a GNU Radio program that tunnels TCP traffic over a wireless link. We are seeing some strange results in testing, and while looking for a culprit I became curious about inconsistent latency.

How can inconsistent latency affect TCP applications? By inconsistent I mean widely different RTTs for ACKs on a connection. For a while ACKs seem to come at a normal rate, then they disappear and we see retransmissions followed by the 'delayed' ACK.

For instance, say the first several ACKs received have a similar RTT. What would happen when the next ACK isn't received within twice the RTT of the previous ACKs? Whatever the issue is, I see lots of retransmissions after a long wait for an ACK.

Now, more specifically, how can ACK RTTs that bounce between fast and slow affect a TCP connection?

Having said that, is there any way to tune the IP stack to handle this environment better?


TCP maintains a smoothed RTT estimate (SRTT) to gauge how fast the intervening network is, i.e. how fast it can transmit. If the SRTT goes up, TCP slows down; if the SRTT goes down, TCP speeds up. If the actual RTT swings up and down violently, the smoothing means TCP may not react quickly enough and may keep transmitting too fast, which causes packet loss, which in turn causes retransmission, which wastes the bandwidth used by the lost packets. RTT smoothing is done via exponential decay with a gain of, I think, 0.2, so the old SRTT value carries four times the weight of the current RTT sample when computing the new SRTT value.
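
To make the smoothing concrete, here is a minimal Python sketch of an exponentially smoothed RTT estimator in the style of RFC 6298 (which uses a gain of 1/8 for SRTT and 1/4 for the variance term; the ~0.2 recalled above is the same idea with a different constant). The alternating RTT trace is invented purely for illustration; it shows how the estimator, and the retransmission timeout (RTO) derived from it, lag behind RTTs that bounce between fast and slow.

```python
# Sketch of RFC 6298-style smoothed RTT (SRTT) and retransmission timeout
# (RTO) estimation. Constants follow RFC 6298; the RTT trace below is made
# up to illustrate the effect of oscillating latency.

ALPHA = 1.0 / 8.0   # gain for SRTT (the answer above recalls ~0.2)
BETA = 1.0 / 4.0    # gain for RTTVAR (mean deviation of RTT)
G = 0.010           # clock granularity, seconds
MIN_RTO = 1.0       # RFC 6298 lower bound on the RTO, seconds

def update(srtt, rttvar, rtt_sample):
    """Fold one RTT measurement into the smoothed estimates."""
    if srtt is None:  # first measurement initializes the estimator
        srtt = rtt_sample
        rttvar = rtt_sample / 2.0
    else:
        rttvar = (1 - BETA) * rttvar + BETA * abs(srtt - rtt_sample)
        srtt = (1 - ALPHA) * srtt + ALPHA * rtt_sample
    rto = max(MIN_RTO, srtt + max(G, 4 * rttvar))
    return srtt, rttvar, rto

# RTT bouncing between fast (50 ms) and slow (2 s), as in the question.
trace = [0.05, 0.05, 0.05, 2.0, 0.05, 2.0, 0.05, 2.0]

srtt = rttvar = None
rto = MIN_RTO
for rtt in trace:
    # The retransmit timer was armed from the *previous* estimates, so a
    # sample longer than the current RTO means the ACK arrived after the
    # timer fired: a spurious retransmission of data that was not lost.
    late = rtt > rto
    srtt, rttvar, rto = update(srtt, rttvar, rtt)
    print(f"rtt={rtt:5.2f}s  srtt={srtt:5.2f}s  rto={rto:5.2f}s"
          + ("  <- would have retransmitted" if late else ""))
```

Running it shows the first slow sample blowing past the RTO that was tuned to the fast samples, triggering a needless retransmission, after which the inflated RTO takes several fast samples to decay back down. Real stacks behave even more conservatively: Karn's algorithm discards RTT samples from retransmitted segments, so recovery of the estimate after a loss is slower still.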
