Unix TCP servers and UDP Servers

Why is the design of TCP servers mostly such that whenever the server accepts a connection, a new process is invoked to handle it? And why, in the case of UDP servers, is there mostly only a single process that handles all client requests?


The main difference between TCP and UDP is, as stated before, that UDP is connectionless.

A program using UDP has only one socket where it receives messages. So there's no problem if you just block and wait for a message.
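
For illustration, here is a minimal sketch of such an iterative UDP echo server in C. The port number 9000 and buffer size are arbitrary assumptions, and error handling is mostly omitted: the point is that there is one socket and one blocking loop.

    /* Minimal sketch of an iterative UDP echo server (assumed port 9000). */
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(9000);
        bind(fd, (struct sockaddr *)&addr, sizeof(addr));

        char buf[1500];
        for (;;) {
            struct sockaddr_in peer;
            socklen_t len = sizeof(peer);
            /* One socket, one loop: block until any client's datagram arrives. */
            ssize_t n = recvfrom(fd, buf, sizeof(buf), 0,
                                 (struct sockaddr *)&peer, &len);
            if (n < 0)
                continue;
            /* Reply to whichever client sent this datagram. */
            sendto(fd, buf, (size_t)n, 0, (struct sockaddr *)&peer, len);
        }
    }

Every client, new or returning, is handled by the same recvfrom()/sendto() loop, which is why a single process is usually enough.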

With TCP you get one socket for every client that connects. Then you can't just block and wait on ONE socket to receive something, because there are other sockets that must be processed at the same time.
So you have two options: either use non-blocking methods or use threads. The code is usually much simpler when you don't have one while loop that has to handle every client, so threading is often preferred. You can also save some CPU time by using blocking methods.
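
As a rough sketch of the threaded option (assumed port 9000; error handling and shutdown logic omitted), each accepted connection gets its own thread, which is then free to block on its own socket:

    /* Sketch of a thread-per-connection TCP echo server (assumed port 9000). */
    #include <netinet/in.h>
    #include <pthread.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static void *serve_client(void *arg)
    {
        int conn = *(int *)arg;
        free(arg);
        char buf[4096];
        ssize_t n;
        /* This thread blocks on its own socket without stalling the others. */
        while ((n = read(conn, buf, sizeof(buf))) > 0)
            write(conn, buf, (size_t)n);
        close(conn);
        return NULL;
    }

    int main(void)
    {
        int lfd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(9000);
        bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
        listen(lfd, 128);

        for (;;) {
            int *conn = malloc(sizeof(int));
            *conn = accept(lfd, NULL, NULL);   /* one new socket per client */
            if (*conn < 0) { free(conn); continue; }
            pthread_t tid;
            pthread_create(&tid, NULL, serve_client, conn);
            pthread_detach(tid);
        }
    }

Build with -pthread. The classic multiprocess variant has the same structure, with fork() after accept() in place of pthread_create().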


When you talk with a client over a TCP connection you maintain a TCP session. So when a new connection is established you need a separate process (or thread; it does not matter how it is implemented or which OS is used) to maintain that conversation. But when you use UDP you simply receive a datagram (and you are informed of the sender's IP and port); in the common case there is no connection to keep open for it, so one process can serve every datagram.


First of all, the classic Unix server paradigm is filter based. For example, various network services can be configured in /etc/services and a program like inetd listens on all of the TCP and UDP sockets for incoming connections and datagrams. When a connection / DG arrives it forks, redirects stdin, stdout and stderr to the socket using the dup2 system call, and then execs the server process. You can take any program which reads from stdin and writes to stdout and turn it into a network service, such as grep.
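
A sketch of that idiom might look like the following. It assumes lfd is an already bound, listening TCP socket, uses /bin/cat as a stand-in for the filter program, and trims error handling:

    /* inetd-style filter idiom: after accept(), fork and wire the connected
       socket to the child's stdin/stdout/stderr, then exec an ordinary filter. */
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <unistd.h>

    void serve_with_filter(int lfd)            /* lfd: listening TCP socket */
    {
        for (;;) {
            int conn = accept(lfd, NULL, NULL);
            if (conn < 0)
                continue;
            pid_t pid = fork();
            if (pid == 0) {                    /* child: becomes the service */
                dup2(conn, STDIN_FILENO);
                dup2(conn, STDOUT_FILENO);
                dup2(conn, STDERR_FILENO);
                close(conn);
                close(lfd);
                execl("/bin/cat", "cat", (char *)NULL);
                _exit(1);                      /* only reached if exec fails */
            }
            close(conn);                       /* parent: drop its copy and loop */
            while (waitpid(-1, NULL, WNOHANG) > 0)
                ;                              /* reap any finished children */
        }
    }

Because the child only ever sees stdin and stdout, any ordinary filter such as grep can be dropped in place of /bin/cat without the filter knowing about sockets at all.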

According to Stevens in "Unix Network Programming", there are five kinds of server I/O models (pg. 154):

  1. blocking
  2. non-blocking
  3. multiplexing (select and poll)
  4. signal-driven
  5. asynchronous (POSIX aio_ functions)

In addition the servers can be either Iterative or Concurrent.

You ask why TCP servers are typically concurrent, while UDP servers are typically iterative.

The UDP side is easier to answer. Typically, UDP apps follow a simple request-response model where a client sends a short request followed by a reply, with each pair constituting a stand-alone transaction. UDP servers are the only ones that use signal-driven I/O, and even then only rarely.

TCP is a bit more complicated. Iterative servers can use any of the I/O models above, except #4. The fastest servers on a single processor are actually iterative servers using non-blocking I/O. However, these are considered relatively complex to implement, and that, plus the Unix filter idiom, were traditionally the primary reasons for using the concurrent model with blocking I/O, whether multiprocess or multithreaded. Now, with the advent of common multicore systems, the concurrent model also has a performance advantage.
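
For comparison, here is a rough sketch of such an iterative, single-process TCP server using select() to multiplex the listening socket and all client sockets (port 9000 assumed, error handling trimmed):

    /* Iterative (single-process) TCP echo server multiplexed with select(). */
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/select.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int lfd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(9000);
        bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
        listen(lfd, 128);

        fd_set all;
        FD_ZERO(&all);
        FD_SET(lfd, &all);
        int maxfd = lfd;

        for (;;) {
            fd_set ready = all;
            select(maxfd + 1, &ready, NULL, NULL, NULL);  /* wait on every socket */

            if (FD_ISSET(lfd, &ready)) {                  /* new connection */
                int conn = accept(lfd, NULL, NULL);
                if (conn >= 0) {
                    FD_SET(conn, &all);
                    if (conn > maxfd)
                        maxfd = conn;
                }
            }
            for (int fd = lfd + 1; fd <= maxfd; fd++) {   /* existing clients */
                if (!FD_ISSET(fd, &ready))
                    continue;
                char buf[4096];
                ssize_t n = read(fd, buf, sizeof(buf));
                if (n <= 0) {                             /* client gone */
                    close(fd);
                    FD_CLR(fd, &all);
                } else {
                    write(fd, buf, (size_t)n);
                }
            }
        }
    }

One process loops over whichever descriptors select() reports as readable, so no per-client process or thread is needed.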


Your generalization is too broad. This is a pattern you might see with a Unix-based server, where process creation is inexpensive. A .NET-based service would use a thread from the thread pool instead of creating a new process.


Programs that can continue to do useful work while they are waiting for I/O will often be multithreaded. Programs that do lots of computation which can be neatly divided into separate sections can benefit from multithreading, if there are multiple processors. Programs that service lots of network requests can sometimes benefit by having a pool of available threads to service requests. GUI programs that also need to perform computation can benefit from multithreading, because it allows the main thread to continue to service GUI events.

That's why we use TCP as an internet protocol.

