
Giving Users an Option Between UDP & TCP?

After studying the TCP/UDP differences all week, I just can't decide which to use. I have to send a large amount of constant sensor data, while at the same time sending important data that can't be lost. That seemed like a perfect split for using both, but then I read a paper (http://www.isoc.org/INET97/proceedings/F3/F3_1.HTM) that says running both together causes packet/performance loss in the other. Is there any issue with letting the user choose which protocol to use (if I program both server-side) instead of choosing myself? Are there any disadvantages to this?

The only other solution I came up with is to use UDP and, if packet loss becomes too great, switch to TCP (client-side).


I'd say go with TCP, unless you can't (because you have thousands of sensors, or the sensors have very low energy budgets, or whatever). If you need reliability, you'll have to roll your own reliability layer on top of UDP.

Try it out with TCP, and measure your performance. If it's OK, and you don't anticipate serious scaling issues, then just stay with TCP.
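To make "try it out and measure" concrete, here is a minimal sketch of measuring achievable TCP throughput over a loopback connection before committing to a design. The port number and the 1 MiB test size are assumptions for illustration, not anything from the question.

```python
import socket
import threading
import time

def _sink(server, nbytes, done):
    # Accept one connection and drain nbytes from it.
    conn, _addr = server.accept()
    got = 0
    while got < nbytes:
        chunk = conn.recv(65536)
        if not chunk:
            break
        got += len(chunk)
    conn.close()
    done["received"] = got

def measure_tcp_throughput(nbytes=1_048_576, port=9002):
    """Send nbytes over loopback TCP; return (bytes received, bytes/sec)."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("127.0.0.1", port))
    server.listen(1)

    done = {}
    t = threading.Thread(target=_sink, args=(server, nbytes, done))
    t.start()

    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(("127.0.0.1", port))
    start = time.monotonic()
    client.sendall(b"\x00" * nbytes)
    client.close()
    t.join()
    elapsed = time.monotonic() - start

    server.close()
    return done["received"], nbytes / elapsed
```

In a real test you would run the sink on the actual sensor network, not loopback, and send data shaped like your real sensor payloads.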


The article you link goes into detailed analysis on some corner cases. This probably does not apply in your situation. I would ignore this unless your own performance tests start showing problems. Start with the simplest setup (I'm guessing TCP for bulk data transfer and UDP for non-reliable sensor data), test, measure, find bottlenecks, re-factor.
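A minimal sketch of that "simplest setup", serving a TCP socket and a UDP socket from one event loop so reliable commands and lossy sensor readings can arrive side by side. The port number and the idea of closing each TCP connection after one message are assumptions for illustration.

```python
import selectors
import socket

sel = selectors.DefaultSelector()

# Reliable channel: TCP listener for data that can't be lost.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
tcp.bind(("127.0.0.1", 9000))
tcp.listen()
tcp.setblocking(False)

# Lossy channel: UDP socket for high-rate sensor readings.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.bind(("127.0.0.1", 9000))
udp.setblocking(False)

sel.register(tcp, selectors.EVENT_READ, "tcp-listen")
sel.register(udp, selectors.EVENT_READ, "udp")

def serve_once(timeout=1.0):
    """Handle one batch of ready sockets; return a list of (kind, payload)."""
    out = []
    for key, _mask in sel.select(timeout):
        if key.data == "tcp-listen":
            conn, _addr = key.fileobj.accept()
            conn.settimeout(2.0)
            out.append(("tcp", conn.recv(1024)))
            conn.close()
        elif key.data == "udp":
            datagram, _addr = key.fileobj.recvfrom(1024)
            out.append(("udp", datagram))
    return out
```

Because both sockets share one port number and one loop, nothing forces the client to pick a single protocol: it can send bulk data on the TCP connection and sensor datagrams via UDP at the same time.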


The OP says:

... sending important data that can't be lost.

Therefore, TCP is, by definition, the right answer over UDP.

Remember, UDP makes no delivery guarantees. (The old joke that the "U" stands for "unreliable" is apt, though it actually stands for "User".)

Re:

The only other solution I came up with is to use UDP and, if packet loss becomes too great, switch to TCP (client-side).

Bad idea: things will tend to break at exactly the times that you don't expect them to. Your job, as an engineer, is to plan for the failure cases and mitigate them in advance. UDP will lose packets. If your data can't be lost, then don't use UDP.


I also would go with just TCP. UDP has its uses, and high-importance sensor data isn't really what comes to mind. If you can stand to lose plenty of sensor data, go with UDP, but I suspect that isn't what you want at all.


UDP is a simpler protocol than TCP, and you can still simulate features of TCP on top of UDP. If you really have custom needs, UDP is easier to tweak.

However, I'd first just use both UDP and TCP, check their behavior in a real environment, and only then decide whether to reimplement the parts of TCP you need on top of UDP. Given proper abstraction, this should not be much work.

Maybe it would be enough for you to throttle your TCP usage not to fill up the bandwidth?
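A minimal sketch of that throttling idea, using a token bucket to pace writes on a TCP socket so the bulk transfer stays under a bandwidth budget and leaves headroom for other traffic. The class name, the 1 MiB/s default budget, and the chunk size are assumptions for illustration.

```python
import time

class ThrottledSender:
    """Pace sendall() calls on a socket-like object to a byte-rate budget."""

    def __init__(self, sock, bytes_per_sec=1_048_576):
        self.sock = sock
        self.rate = bytes_per_sec
        self.allowance = float(bytes_per_sec)  # token bucket, starts full
        self.last = time.monotonic()

    def send(self, data, chunk=16_384):
        for i in range(0, len(data), chunk):
            piece = data[i:i + chunk]
            need = len(piece)
            # Refill tokens for the time elapsed since the last send.
            now = time.monotonic()
            self.allowance = min(self.rate,
                                 self.allowance + (now - self.last) * self.rate)
            self.last = now
            if self.allowance < need:
                # Sleep just long enough to earn the missing tokens.
                time.sleep((need - self.allowance) / self.rate)
                self.allowance = 0.0
                self.last = time.monotonic()
            else:
                self.allowance -= need
            self.sock.sendall(piece)
```

This only shapes your own sending rate; it doesn't reserve bandwidth on the network, so it's a mitigation rather than a guarantee.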


If you can't lose data and you use UDP, you are reinventing TCP, or at least a significant fraction of it. Whatever you gain in performance you are prone to lose to protocol design errors, because designing a reliable protocol is hard.


Constant sensor data: UDP. Important data that can't be lost: TCP.


You can implement your own mechanism to confirm the delivery of UDP packets that can't be lost.
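A minimal stop-and-wait sketch of that idea: the sender tags each datagram with a sequence number and retransmits until the receiver acknowledges it, and the receiver de-duplicates and ACKs every packet. The port number, 4-byte header, and timeout/retry values are assumptions for illustration, and real TCP does far more (ordering, windowing, congestion control).

```python
import socket
import threading

PORT = 9001
SEQ_BYTES = 4  # sequence number prefixed to every datagram

def reliable_send(sock, payload, dest, seq, timeout=0.5, retries=5):
    """Send one datagram and wait for a matching ACK, retransmitting on timeout."""
    packet = seq.to_bytes(SEQ_BYTES, "big") + payload
    sock.settimeout(timeout)
    for _ in range(retries):
        sock.sendto(packet, dest)
        try:
            ack, _addr = sock.recvfrom(SEQ_BYTES)
            if int.from_bytes(ack, "big") == seq:
                return True
        except socket.timeout:
            continue  # lost packet or lost ACK: retransmit
    return False

def ack_server(sock, results, count):
    """Receive count distinct datagrams, recording payloads and ACKing each one."""
    seen = set()
    while len(seen) < count:
        packet, addr = sock.recvfrom(2048)
        seq = int.from_bytes(packet[:SEQ_BYTES], "big")
        if seq not in seen:
            seen.add(seq)
            results.append(packet[SEQ_BYTES:])
        # ACK even duplicates, in case the previous ACK was lost.
        sock.sendto(packet[:SEQ_BYTES], addr)
```

Stop-and-wait is the simplest scheme but caps throughput at one packet per round trip, which is exactly the kind of trade-off that pushes people back toward plain TCP.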


I would say go with TCP. Also, if you're dealing with a lot of packet loss, the protocol choice is the least of your concerns. If the data is important, TCP. If the data is not important and can be supplemented later, UDP. If the data is mission-critical, TCP. UDP will be faster, but it will leave you with errors left and right from corrupt or missing packets. In the end, you'd be reinventing TCP to fix the problems.
