Packet timing problem

I have a client that sends a packet to a server every 8 seconds. If the server detects that packets arrive too fast, it disconnects the client. In the client I call Sleep(8000); before sending each packet. On the server side I use GetTickCount(); to calculate the time between packets. I expected this to work without any problems, but I keep getting disconnected.
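For reference, a minimal sketch of the kind of client and server logic described above. This is a hypothetical reconstruction, not the actual code: the socket variables, the packet contents and the exact 8000 ms threshold on the server are all assumptions.

    #include <winsock2.h>   /* SOCKET, send() */
    #include <windows.h>    /* Sleep(), GetTickCount() */

    /* Client side (assumed): wait, then send a small packet on an
       already-connected TCP socket. */
    void client_loop(SOCKET sock)
    {
        const char packet[] = "PING";
        for (;;) {
            Sleep(8000);                            /* wait at least 8 seconds */
            send(sock, packet, sizeof(packet), 0);
        }
    }

    /* Server side (assumed): measure the gap between packets with
       GetTickCount() and report whether the client is sending too fast. */
    int packet_timing_ok(DWORD *last_tick)
    {
        DWORD now = GetTickCount();
        if (*last_tick != 0 && now - *last_tick < 8000)
            return 0;                               /* too fast: disconnect */
        *last_tick = now;
        return 1;
    }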

I used Wireshark to check the packet times and this is what I got:

    Packet#   Time
    17        8.656064
    72        16.957240
    115       24.764741

24.764741 - 16.957240 = 7.807501 < 8, which is why I got disconnected. I don't understand this, because the client calls Sleep(8000); so it should send a packet every 8 seconds or more.

The 2nd packet is about 0.3 seconds late and the 3rd one is about 0.2 seconds early. Is there a way to send these packets on time?


The answers advising you not to rely too much on clock accuracy, and to be aware of sources of latency, are correct.

However, the fact that you are out by roughly 200 ms makes me suspect that you are using TCP and have not turned off the Nagle algorithm. For a time-sensitive protocol you should enable TCP_NODELAY on your socket.
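If that is the case, turning Nagle off is a small change. A minimal sketch using Winsock's setsockopt (the socket is assumed to be an already-created TCP socket):

    #include <winsock2.h>

    /* Disable the Nagle algorithm so small packets are sent immediately
       instead of being coalesced with later data. */
    int disable_nagle(SOCKET sock)
    {
        BOOL flag = TRUE;
        return setsockopt(sock, IPPROTO_TCP, TCP_NODELAY,
                          (const char *)&flag, sizeof(flag));
    }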

All the rest of the TCP latency warnings apply; you don't really know when things are going to happen and you need to deal with that in your protocol.


The problem is actually an age-old one; TickCount varies, even between multiple CPUs in the same computer:

http://social.msdn.microsoft.com/Forums/en-US/netfxbcl/thread/22c68353-1dbb-4718-a8d2-0679fdc0c298/

My suggestion is to set your Sleep to something higher than 8000, say 9500, and keep the same GetTickCount check on the server; that way the sleep interval should always exceed the threshold.
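A minimal sketch of that adjustment on the client; the 9500 ms figure is just the margin suggested above, and the socket and packet contents are assumptions:

    #include <winsock2.h>
    #include <windows.h>

    #define SERVER_LIMIT_MS  8000   /* the server's disconnect threshold       */
    #define SEND_INTERVAL_MS 9500   /* threshold plus a generous safety margin */

    /* Sleep noticeably longer than the server's limit so that clock
       differences and network jitter cannot push a packet under it. */
    void client_loop_with_margin(SOCKET sock)
    {
        const char packet[] = "PING";
        for (;;) {
            Sleep(SEND_INTERVAL_MS);
            send(sock, packet, sizeof(packet), 0);
        }
    }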

Another link to read is here:

http://en.wikipedia.org/wiki/Latency_(engineering)#Computer_hardware_and_operating_system_latency

With specific reference to the paragraph on Microsoft Windows.

Update:

I'm not sure why this is getting downvoted but allow me to clarify the problem, as I see it, here.

TickCount cannot be relied upon as a precise measure of time, other than when it is localised to one processing unit. The first link, to MSDN, provides citation as to why.

Secondly, Windows itself may have inaccuracies in its timer logic (a small illustration follows after these points).

Lastly, the crystal-based timers themselves may drift, as environmental conditions, temperature in particular, affect piezoelectric oscillation.

http://en.wikipedia.org/wiki/Crystal_oscillator#Temperature_effects
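As a small illustration of the Windows timer point above (an aside, not something claimed by the links): Sleep() only operates at the resolution of the system timer, typically around 10-16 ms, so even a "perfect" Sleep(8000) will not be millisecond-exact. Requesting a finer timer resolution with timeBeginPeriod narrows, but does not remove, that error:

    #include <windows.h>
    #include <mmsystem.h>                  /* timeBeginPeriod / timeEndPeriod */
    #pragma comment(lib, "winmm.lib")      /* MSVC: link the multimedia timer API */

    /* Sleep with a temporarily raised (1 ms) system timer resolution.
       This reduces Sleep()'s granularity; it does not make it exact. */
    void sleep_fine(DWORD ms)
    {
        timeBeginPeriod(1);
        Sleep(ms);
        timeEndPeriod(1);
    }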

In summary, TickCount isn't reliable, and absolute precision in sending packets is hard to achieve over consumer-grade networks.

My solution, ensuring that the Sleep interval is longer than the server's tick-count threshold, isn't elegant, but it is enough of a 'fudge' to solve the problem.

You can't guarantee that a packet will arrive within X seconds, but you can make pretty sure that it won't be sent before a certain duration has passed.


To me it seems odd that you would expect this to work. For example, your packets could conceivably be buffered and sent over the wire together despite the delay. Depending on network conditions they could arrive out of order or with arbitrary delays. What does it matter how much time is between them when they are processed server-side? The timing on the server doesn't say anything about the timing on the client.


You can't expect individual packets sent over a network to be timed that precisely. Instead of basing your disconnection condition on the time between the last two packets, average over a longer period - for example, disconnect if more than 5 packets have been received in the last 40 seconds.
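A sketch of that kind of windowed check on the server; the 5-packet / 40-second figures come from the suggestion above, and the ring-buffer layout is an assumption:

    #include <windows.h>

    #define MAX_PACKETS 5
    #define WINDOW_MS   40000

    /* Arrival times (GetTickCount values) of the most recent packets,
       kept in a small ring buffer. */
    typedef struct {
        DWORD ticks[MAX_PACKETS];
        int   count;   /* how many entries are valid          */
        int   next;    /* index that will be overwritten next */
    } PacketWindow;

    /* Record one arrival; return 0 if this packet means more than
       MAX_PACKETS have arrived within WINDOW_MS (i.e. disconnect). */
    int packet_window_ok(PacketWindow *w, DWORD now)
    {
        if (w->count == MAX_PACKETS) {
            DWORD oldest = w->ticks[w->next];   /* oldest stored arrival */
            if (now - oldest < WINDOW_MS)
                return 0;
        }
        w->ticks[w->next] = now;
        w->next = (w->next + 1) % MAX_PACKETS;
        if (w->count < MAX_PACKETS)
            w->count++;
        return 1;
    }

It would be used per client, e.g. if (!packet_window_ok(&client_window, GetTickCount())) { /* drop the client */ }.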
