File descriptors and socket connections

I am trying to understand how file descriptors are related to sockets. As I understand it, you listen on one file descriptor; once a connection comes in, you accept it, which returns another file descriptor (two in all), and you use this second descriptor to send/recv data.
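That is the model I am assuming; here is a minimal sketch of it (the echo handling is just a placeholder):

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* listen_fd is the descriptor from socket()/bind()/listen().
       accept() hands back a second descriptor for this one connection;
       only that one is used for send()/recv(), and it is closed when
       the client is done, while listen_fd stays open. */
    static void handle_one_client(int listen_fd)
    {
        int conn_fd = accept(listen_fd, NULL, NULL);
        if (conn_fd == -1)
            return;

        char buf[256];
        ssize_t n = recv(conn_fd, buf, sizeof(buf), 0);
        if (n > 0)
            send(conn_fd, buf, n, 0);   /* echo back, just as an example */

        close(conn_fd);                 /* back to only the listening fd */
    }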

The strange behaviour I am observing is that after accept I have 3 file descriptors instead of two, and I am not sure why this is the case.

I am using either lsof or /proc/pid to observe the increase in the number of fds.
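A quick way to see the same information from inside the process is to read the symlinks under /proc/self/fd; a minimal sketch (note that opendir() itself opens one extra descriptor while the listing runs):

    #include <dirent.h>
    #include <limits.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Print every open descriptor of the current process and what it
       points to, by reading the symlinks under /proc/self/fd. */
    static void dump_fds(void)
    {
        DIR *d = opendir("/proc/self/fd");
        if (d == NULL)
            return;

        struct dirent *e;
        while ((e = readdir(d)) != NULL)
        {
            if (e->d_name[0] == '.')
                continue;

            char link[PATH_MAX];
            char target[PATH_MAX];
            snprintf(link, sizeof(link), "/proc/self/fd/%s", e->d_name);

            ssize_t n = readlink(link, target, sizeof(target) - 1);
            if (n >= 0)
            {
                target[n] = '\0';
                printf("fd %s -> %s\n", e->d_name, target);
            }
        }
        closedir(d);
    }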

PS: these are AF_UNIX sockets.

EDIT : CODE

Here is the code to create the socket.

    int s, s2, len;
    socklen_t t;
    struct sockaddr_un local, remote;

    if ((s = socket(AF_UNIX, SOCK_STREAM, 0)) == -1)
    {
            syslog(LOG_ERR,"Failed to create a socket");
            exit(1);
    }
    int flags =  fcntl(s, F_GETFD);
    if (flags == -1)
    {
            syslog(LOG_ERR,"Failed to get socket flags");
            exit(1);
    }

    flags |= FD_CLOEXEC;

    if (fcntl(s, F_SETFD, flags) == -1)
    {
            syslog(LOG_ERR,"Failed to set socket flags");
            exit(1);
    }

    local.sun_family = AF_UNIX;
    strcpy(local.sun_path, SOCK_PATH.c_str());

    unlink(local.sun_path);
    len = strlen(local.sun_path) + sizeof(local.sun_family);

    if (bind(s, (struct sockaddr *)&local, len) == -1)
    {
            syslog(LOG_ERR,"Failed to bind socket");
            exit(1);
    }


    if (listen(s, 5) == -1)
    {
            syslog(LOG_ERR,"Failed to listen at socket");
            exit(1);
    }

Code where the connection is accepted:

    while (1)
    {
            stat =0;
            execReturn=0;
            t = len;
            read_fds = master;
            if (select(fdmax+1, &read_fds, NULL, NULL, &tv) != -1)
            {
                    if(FD_ISSET(s,&read_fds))
                    {
                            //Accept new connection
                            //fork child -> fork grand child
                            //child will return value back

                            if ((s2 = accept(s, (struct sockaddr*)&remote, &t)) == -1)
                            {
                                    syslog(LOG_ERR,"Failed to acceptconnection  at socket");
                                    exit(1);开发者_运维问答
                            }

I am stepping through with gdb, and immediately after accept the number of fds becomes 3. The OS is Fedora Core 13.

The reason I need to verify this is that I do not want my process to hold on to fds; being a daemon, over time it may walk the system into a corner...

This did seem like odd behaviour. After closing the accepted connection I am still left with two fds, i.e. one for listen and one ghost fd... What's even stranger is that even if 10 connections are made, only one ghost fd remains after all of them are closed...

It does sound like an OS-specific implementation detail.

Cheers!


Your extra file descriptor is most likely related to syslog. Syslog has to open a socket to syslogd to report messages. Unless you explicitly call openlog, this socket is opened on the first call to syslog, and since you aren't calling syslog until you hit an error, you are most likely observing syslog's side effects.
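One way to confirm that, and to make the descriptor show up at a predictable time, is to open the syslog connection explicitly at startup. A minimal sketch (the "mydaemon" ident is just a placeholder):

    #include <syslog.h>

    int main(void)
    {
        /* Open the connection to syslogd up front, so its socket
           descriptor exists from the start instead of appearing as a
           surprise "ghost" fd on the first syslog() call. */
        openlog("mydaemon", LOG_PID, LOG_DAEMON);

        syslog(LOG_INFO, "daemon starting");

        /* ... rest of the daemon ... */

        closelog();
        return 0;
    }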


The easiest way to debug this sort of issue is to run your app under the strace(1) utility. Check what system calls are made and what the parameters and return values are, and correlate that with the file descriptors used.


More code please.

But I am guessing that you are looking at the OS implementation of the socket.
It probably uses one file descriptor for reading and the other for writing (but that is a guess).

What does it matter to you what the OS is doing? The stuff in /proc/pid is not really there for your direct usage.


You are right in that it's two. You must be confusing the third with something else.

Without more information it's hard to help.
