Why does select() always return 0 after the first timeout?
I ran into a problem with the select() function while working on a Linux socket program. select() works fine, as the man page describes, when the client connects to the server within the timeout interval configured on the server side. But once a timeout happens, select() returns 0 forever after that. At that point I debugged the client and found that it had in fact connected to the server, yet select() still returned 0. I searched for this problem but found nothing helpful. Does anyone know why select() behaves like this? My Linux version is RHEL 5.4. Thank you for your help.
The code is illustrated below.
static const int maxLog = 10000;

int main()
{
    int servSock;
    signal(SIGPIPE, SIG_IGN);
    if((servSock = socket(AF_INET, SOCK_STREAM, 0)) < 0)
    {
        printf("socket create fail\n");
        exit(-1);
    }
    int val = 1;
    if(setsockopt(servSock, SOL_SOCKET, SO_REUSEADDR, &val, sizeof(val)) < 0)
    {
        DieWithUserMessage("setsockopt error");
    }
    struct sockaddr_in serverAddr;
    memset(&serverAddr, 0, sizeof(serverAddr));
    serverAddr.sin_family = AF_INET;
    serverAddr.sin_addr.s_addr = htonl(INADDR_ANY);
    serverAddr.sin_port = htons(22000);
    if(bind(servSock, (struct sockaddr *) &serverAddr,
            sizeof(serverAddr)) < 0)
    {
        printf("socket bind fail\n");
        exit(-1);
    }
    if(listen(servSock, maxLog) < 0)
    {
        printf("listen failed\n");
        exit(-1);
    }
    fd_set read_set;
    FD_ZERO(&read_set);
    FD_SET(servSock, &read_set);
    int maxfd1 = servSock + 1;
    std::set<int> fd_readset;
    for(;;){
        struct timeval tv;
        tv.tv_sec = 5;
        tv.tv_usec = 0;
        int ret = select(maxfd1, &read_set, NULL, NULL, &tv);
        if(ret == 0)
            continue;
        if(ret < 0)
            DieWithUserMessage("select error");
        if(FD_ISSET(servSock, &read_set))
        {
            struct sockaddr_in clntAddr;
            socklen_t clntAddrlen = sizeof(clntAddr);
            int clntSock = accept(servSock, (struct sockaddr *) &clntAddr, &clntAddrlen);
            if(clntSock < 0)
            {
                printf("accept() failed\n");
                exit(-1);
            }
            maxfd1 = 1 + (servSock >= clntSock ? servSock : clntSock);
            FD_SET(clntSock, &read_set);
            fd_readset.insert(clntSock);
        }
    }
}
The select() function is frustrating to use; you have to set up its arguments each time before you call it because it modifies them. What you are seeing is a demonstration of what happens if you don't set up the fd_set(s) each time around the loop.
The same effect seems to happen if you don't reset the timeval struct before each call to select.
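As a minimal sketch of what that means for the code in the question (reusing the servSock, maxfd1 and DieWithUserMessage names from the question; not a complete program), the loop could rebuild both the fd_set and the timeout on every iteration:

for (;;) {
    // select() overwrites both the fd_set and (on Linux) the timeout,
    // so rebuild them on every pass through the loop.
    fd_set read_set;
    FD_ZERO(&read_set);
    FD_SET(servSock, &read_set);

    struct timeval tv;
    tv.tv_sec  = 5;
    tv.tv_usec = 0;

    int ret = select(maxfd1, &read_set, NULL, NULL, &tv);
    if (ret == 0)
        continue;                       // timeout, nothing ready yet
    if (ret < 0)
        DieWithUserMessage("select error");

    if (FD_ISSET(servSock, &read_set)) {
        // the listening socket is readable: a client is waiting in accept()
        // ... accept() and bookkeeping as in the original code ...
    }
}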
You have the right answer already - re-initialize the fd_sets before each call to select(2).
I would like to point you to a better alternative - Linux provides the epoll(4) facility. While it's not standard, it's much more convenient since you need to set up the events you wait for only once. The kernel manages the file descriptor event tables for you, so it's much more efficient. epoll also provides edge-triggered functionality, where only a change in state on a descriptor is signaled.
For completeness - BSDs provide kqueue(2), and Solaris has /dev/poll.
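As a rough, level-triggered illustration of the epoll approach (a sketch only, assuming the same servSock listening socket as in the question; epoll_create is used because epoll_create1 is not available on older kernels such as RHEL 5's):

#include <sys/epoll.h>

// ... servSock created, bound and listening as before ...

int epfd = epoll_create(1);                      // size hint, ignored on modern kernels
struct epoll_event ev;
ev.events  = EPOLLIN;                            // interested in "readable" (incoming connection)
ev.data.fd = servSock;
epoll_ctl(epfd, EPOLL_CTL_ADD, servSock, &ev);   // register once, not on every iteration

for (;;) {
    struct epoll_event events[64];
    int n = epoll_wait(epfd, events, 64, 5000);  // 5 second timeout, in milliseconds
    if (n == 0)
        continue;                                // timeout
    for (int i = 0; i < n; ++i) {
        if (events[i].data.fd == servSock) {
            int clntSock = accept(servSock, NULL, NULL);
            if (clntSock >= 0) {
                // watch the new client so we are told when it becomes readable
                struct epoll_event cev;
                cev.events  = EPOLLIN;
                cev.data.fd = clntSock;
                epoll_ctl(epfd, EPOLL_CTL_ADD, clntSock, &cev);
            }
        }
        // else: a client fd is readable, read()/recv() from it here
    }
}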
One more thing: your code has a well-known race condition between the client and the server. Take a look at Stevens, UNP: non-blocking accept().
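The usual remedy described there is to put the listening socket into non-blocking mode and tolerate an accept() that fails because the client disappeared between select() returning and accept() being called. A sketch under that assumption, using the question's servSock and DieWithUserMessage names:

#include <fcntl.h>
#include <errno.h>

// Make the listening socket non-blocking so accept() can never hang if the
// client that made the socket readable has already gone away.
int flags = fcntl(servSock, F_GETFL, 0);
fcntl(servSock, F_SETFL, flags | O_NONBLOCK);

// ... later, when select() reports servSock readable ...
int clntSock = accept(servSock, NULL, NULL);
if (clntSock < 0) {
    if (errno == EWOULDBLOCK || errno == ECONNABORTED || errno == EINTR)
        ;   // the connection vanished between select() and accept(): just ignore it
    else
        DieWithUserMessage("accept error");
}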
You have to fill your fd_set at each iteration. The best way to do that is to maintain a collection of your FDs somewhere and copy the ones you need for the select call into a temporary fd_set, as sketched below. If you need to handle a lot of clients, you might have to change the FD_SETSIZE macro (in /usr/include/sys/select.h).
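For instance (a sketch only, reusing the fd_readset collection and servSock from the question), the loop body could rebuild a scratch set on every pass so select() only ever clobbers the copy:

// Keep the "master" list of client descriptors in fd_readset and rebuild a
// temporary fd_set from it before each select() call.
fd_set working_set;
FD_ZERO(&working_set);
FD_SET(servSock, &working_set);
int maxfd = servSock;
for (std::set<int>::iterator it = fd_readset.begin(); it != fd_readset.end(); ++it) {
    FD_SET(*it, &working_set);
    if (*it > maxfd)
        maxfd = *it;
}

struct timeval tv = {5, 0};                 // also re-arm the timeout each time
int ret = select(maxfd + 1, &working_set, NULL, NULL, &tv);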
Happy network programming :)
I ran into the same trouble in similar code of my own. I followed the suggestion to do the initialization each time before calling select(), and it works. For the code in this question, simply moving these two lines into the loop makes it work:
FD_ZERO(&read_set);
FD_SET(servSock, &read_set);