
Problem with recv() on a TCP connection

I am simulating TCP communication on Windows in C. I have a sender and a receiver communicating.

The sender sends packets of a specific size to the receiver. The receiver gets them and sends an ACK back to the sender for each packet it received. If the sender didn't get an ACK for a specific packet (packets are numbered in a header), it sends that packet again to the receiver. Here is the getPacket function on the receiver side:

//get the next packet from the socket. pass packetSize == -1 for the
//first packet (the expected size is not yet known).
//return: number of bytes received, 0 if the socket has shut down
//on the sender side, or -1 on error
int getPacket(char* chunkBuff, int packetSize, SOCKET AcceptSocket)
{
    int totalChunkLen = 0;
    int bytesRecv = -1;
    bool firstTime = false;

    if(packetSize == -1)
    {
        packetSize = MAX_PACKET_LENGTH;
        firstTime = true;
    }

    int needToGet = packetSize;

    do
    {
        char* recvBuff;
        recvBuff = (char*)calloc(needToGet, sizeof(char));

        if(recvBuff == NULL)
        {
            fprintf(stderr, "Memory allocation problem\n");
            return -1;
        }

        bytesRecv = recv(AcceptSocket, recvBuff, needToGet, 0);

        if(bytesRecv == SOCKET_ERROR)
        {
            fprintf(stderr, "recv() error %d.\n", WSAGetLastError());
            return -1;
        }

        if(bytesRecv == 0)
        {
            fprintf(stderr, "recv(): socket has shutdown on sender side\n");
            return 0;
        }
        else if(bytesRecv > 0)
        {
            memcpy(chunkBuff + totalChunkLen, recvBuff, bytesRecv);
            totalChunkLen += bytesRecv;
        }

        needToGet -= bytesRecv;
    }
    while((totalChunkLen < packetSize) && (!firstTime));

    return totalChunkLen;
}

I use firstTime because the first time around the receiver doesn't know the normal packet size the sender is going to use, so I receive up to MAX_PACKET_LENGTH bytes and then set the normal packet size to the number of bytes I actually received.

My problem is the last packet. Its size is smaller than the normal packet size. Let's say the last packet is 2 bytes and the normal packet size is 4. recv() gets two bytes and continues to the while condition; totalChunkLen < packetSize because 2 < 4, so the loop iterates again and gets stuck in the blocking recv(), because the sender has nothing left to send.

On the sender side I can't close the connection because I haven't gotten the ACK back, so it's kind of a deadlock: the receiver is stuck waiting for more packets, but the sender has nothing to send.

I don't want to use a timeout for recv() or to insert a special character into the packet header to mark that it is the last one.

What can I do?


You are using TCP to communicate between your receiver and transmitter, and TCP is a stream-oriented protocol. That is, you put a stream of bytes in one end and you get the same stream out on the other end, in order and with no loss. There is no guarantee that each send() will match a recv() on the other end, as the data may be broken up for various reasons.

So if you do the following with a TCP connection:

char buffer[] = "1234567890";
send(socket, buffer, 10, 0);

And then on the receiver:

char buffer[10];
int bytes = recv(socket, buffer, 10, 0);

bytes can be anywhere between 1 and 10 when recv() returns (or 0 if the connection was closed).

TCP runs over IP, which is a datagram-oriented protocol. When the TCP implementation sends a datagram, the other end receives the entire datagram or nothing at all, possibly out of order; the stream abstraction is built on top of that. If you want to simulate that, you have at least two options:

  1. Add framing to your TCP messages so you can extract packets from it. This involves adding things like the size of the packet to a header that you send into the stream. It would be kind of meaningless to use this for simulating TCP as all your packets would always arrive, always in order and already using the underlying TCP flow control/congestion avoidance mechanisms.
  2. Use a datagram protocol such as UDP. This would be closer to the IP layer that TCP runs over.

You should probably go with option 2 but if you want to go the framing route over TCP you can e.g. (rough quick code follows):

// We do this to communicate with machines having different byte ordering
u_long packet_size = htonl(10); // 10 byte packet
send(socket, (char*)&packet_size, sizeof(packet_size), 0); // First send the frame size
send(socket, buffer, 10, 0); // Then the frame

Receiving end:

u_long packet_size;                      // Holds the size of the received packet
int bytes_to_read = sizeof(packet_size); // We sent 4 bytes on the wire for the size
int nresult;                             // Holds the result of recv()
char *psize = (char*)&packet_size;       // Point to the first byte of the size
while( bytes_to_read ) // Keep reading until we have all the bytes of the size
{
  nresult = recv(socket, psize, bytes_to_read, 0);
  if(nresult == 0)
  {
    // Deal with the connection being closed
    break;
  }
  if(nresult == SOCKET_ERROR)
  {
    // Deal with a receive error
    break;
  }
  bytes_to_read -= nresult;
  psize += nresult;
}
packet_size = ntohl(packet_size);
// Now that we know the packet size we can read the frame the same way


The concept to keep in mind with low-level socket programming is that you are exchanging a bunch of bytes with no structure imposed by the transport. It is up to you to implement a protocol that does message delineation: by putting the total length of what you consider a "message" at the start, by using a delimiter byte or sequence which you scan for in the received buffer, or by closing the connection at the end. The last option looks easiest but is not the best solution, as in a real-world program you will want to reuse the connection, since setting one up is expensive.

If this looks too complicated (and it is indeed not always easy), you can look for a library that encapsulates this work for you, for example one that lets you send and receive an object which is serialized, delineated and deserialized by the library code. But the work needs to be done somewhere; the transport layer will not do it for you.

One small remark about the code shown: you're creating a memory leak with your multiple receive-buffer allocations — recvBuff is allocated on every loop iteration and never freed.


Your receiver needs to be told by the sender that it has finished. This can be done by first sending the size of the data the receiver can expect, by always sending the same amount of data, or by sending a sentinel value to indicate there will be no more bytes. The sender could also close the connection when it is finished sending, in which case recv() will return 0 once there is nothing left to read and the closed connection is detected.


You can specify the amount of data in each packet at the beginning (e.g. the first 2 bytes can specify packet size), or pad the last packet so it's the same size as the others.

Edit: If you really want to 'simulate' TCP then you should probably be using UDP with recvfrom() and sendto(); then you receive the data in whole packets of varying sizes, and you won't have this problem.
