Named pipes: Many clients. How to be prudent with thread creation? Thread Pool?
Situation:
I'm using named pipes on Windows for IPC, in C++.
The server creates a named pipe instance via CreateNamedPipe, and waits for clients to connect via ConnectNamedPipe.
Every time a client calls CreateFile to access the named pipe, the server creates a thread using CreateThread to service that client. The server then reiterates the loop, creating another pipe instance via CreateNamedPipe and listening for the next client via ConnectNamedPipe, and so on.
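For reference, a minimal sketch of that loop; the pipe name and the ClientThread body are placeholders, not the asker's actual code:

```cpp
#include <windows.h>

// Services a single client; one of these threads is created per connection.
DWORD WINAPI ClientThread(LPVOID param)
{
    HANDLE pipe = static_cast<HANDLE>(param);
    // ... ReadFile / WriteFile with the client here ...
    DisconnectNamedPipe(pipe);
    CloseHandle(pipe);
    return 0;
}

int main()
{
    for (;;)
    {
        HANDLE pipe = CreateNamedPipeW(
            L"\\\\.\\pipe\\example",                        // hypothetical pipe name
            PIPE_ACCESS_DUPLEX,
            PIPE_TYPE_BYTE | PIPE_READMODE_BYTE | PIPE_WAIT,
            PIPE_UNLIMITED_INSTANCES, 4096, 4096, 0, nullptr);
        if (pipe == INVALID_HANDLE_VALUE)
            return 1;

        // Blocks until a client calls CreateFile on the pipe name.
        if (ConnectNamedPipe(pipe, nullptr) || GetLastError() == ERROR_PIPE_CONNECTED)
        {
            // One new thread per client -- the pattern the questions below ask about.
            HANDLE t = CreateThread(nullptr, 0, ClientThread, pipe, 0, nullptr);
            if (t) CloseHandle(t); else CloseHandle(pipe);
        }
        else
        {
            CloseHandle(pipe);
        }
    }
}
```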
Problem:
Every client request triggers a CreateThread on the server. If clients come fast and furious, there would be many calls to CreateThread.
Questions:
Q1: Is it possible to reuse already created threads to service future client requests? If this is possible, how should I do this?
Q2: Would Thread Pool help in this situation?
I wrote a named pipe server today using I/O completion ports, just to see how it's done.
The basic logic flow was:
- I created the first named pipe via CreateNamedPipe
- I created the main I/O completion port object using that handle: CreateIoCompletionPort
- I created a pool of worker threads - as a rough rule of thumb, 2x the number of CPUs. Each worker thread calls GetQueuedCompletionStatus in a loop.
- Then I called ConnectNamedPipe, passing in an overlapped structure. When this pipe connects, one of the GetQueuedCompletionStatus calls will return.
- My main thread then joins the pool of workers by also calling GetQueuedCompletionStatus.
That's about it, really; a minimal sketch of that setup follows.
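In this sketch the completion port is created up front and each pipe is associated with it as it is created (the write-up above creates the port from the first pipe handle instead; either way works, as long as a pipe is associated before its overlapped I/O is issued). The pipe name, buffer size, and worker count are illustrative:

```cpp
#include <windows.h>

static HANDLE g_iocp = nullptr;

// Per-pipe state: the handle, the OVERLAPPED for the pending operation,
// a read buffer, and a flag telling connect completions apart from reads.
struct PipeState
{
    OVERLAPPED ov = {};
    HANDLE     pipe = INVALID_HANDLE_VALUE;
    bool       connected = false;
    char       buffer[4096] = {};
};

PipeState* CreateListeningPipe()
{
    PipeState* ps = new PipeState();
    ps->pipe = CreateNamedPipeW(
        L"\\\\.\\pipe\\example",                    // hypothetical pipe name
        PIPE_ACCESS_DUPLEX | FILE_FLAG_OVERLAPPED,  // overlapped is required for IOCP
        PIPE_TYPE_BYTE | PIPE_READMODE_BYTE | PIPE_WAIT,
        PIPE_UNLIMITED_INSTANCES,
        sizeof(ps->buffer), sizeof(ps->buffer), 0, nullptr);

    // Associate the pipe with the completion port; the PipeState is the key.
    CreateIoCompletionPort(ps->pipe, g_iocp, reinterpret_cast<ULONG_PTR>(ps), 0);

    // Post the "accept": it completes through the port when a client connects.
    if (!ConnectNamedPipe(ps->pipe, &ps->ov) && GetLastError() != ERROR_IO_PENDING)
    {
        // ERROR_PIPE_CONNECTED etc. -- see the edge cases discussed below.
    }
    return ps;
}

DWORD WINAPI Worker(LPVOID)
{
    for (;;)
    {
        DWORD       bytes = 0;
        ULONG_PTR   key   = 0;
        OVERLAPPED* ov    = nullptr;
        // Blocks until a connect or read completion (or a failed I/O) is queued.
        BOOL ok = GetQueuedCompletionStatus(g_iocp, &bytes, &key, &ov, INFINITE);
        if (ov == nullptr) continue;    // the wait itself failed, not an I/O
        // ... dispatch on the PipeState recovered from 'key' (see next sketch) ...
        (void)ok;
    }
}

int main()
{
    g_iocp = CreateIoCompletionPort(INVALID_HANDLE_VALUE, nullptr, 0, 0);
    CreateListeningPipe();                          // first unconnected instance

    SYSTEM_INFO si;
    GetSystemInfo(&si);
    for (DWORD i = 0; i < si.dwNumberOfProcessors * 2; ++i)
    {
        HANDLE t = CreateThread(nullptr, 0, Worker, nullptr, 0, nullptr);
        if (t) CloseHandle(t);                      // workers run for the process lifetime
    }
    Worker(nullptr);                                // main thread joins the pool
}
```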
Each time a thread returns from GetQueuedCompletionStatus, it's because the associated pipe has been connected, has read data, or has been closed. Each time a pipe is connected, I immediately create an unconnected pipe to accept the next client (there should probably be more than one waiting at a time) and call ReadFile on the current pipe, passing an overlapped structure - ensuring that as data arrives, GetQueuedCompletionStatus will tell me about it.
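A sketch of that dispatch, reusing the hypothetical g_iocp, PipeState, and CreateListeningPipe() from the previous sketch; the connected flag is an assumption used here to tell the connect completion apart from read completions:

```cpp
void OnCompletion(PipeState* ps, DWORD bytes, BOOL ok)
{
    if (!ps->connected)
    {
        // The ConnectNamedPipe overlapped completed: a client is now attached.
        ps->connected = true;
        CreateListeningPipe();                  // immediately listen for the next client
        ps->ov = {};                            // reset the OVERLAPPED before reuse
        ReadFile(ps->pipe, ps->buffer, sizeof(ps->buffer), nullptr, &ps->ov);
        return;
    }

    if (!ok || bytes == 0)
    {
        // The read failed or the client closed its end of the pipe.
        DisconnectNamedPipe(ps->pipe);
        CloseHandle(ps->pipe);
        delete ps;
        return;
    }

    // 'bytes' of data arrived in ps->buffer: process it, then post the next read.
    ps->ov = {};
    ReadFile(ps->pipe, ps->buffer, sizeof(ps->buffer), nullptr, &ps->ov);
}

// Called from the worker loop in place of the "... dispatch ..." comment:
//     OnCompletion(reinterpret_cast<PipeState*>(key), bytes, ok);
```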
There are a couple of irritating edge cases where a function returns a failure code but GetLastError() indicates success. Because the function "failed", you have to handle the success immediately, as no queued completion status was posted. Conversely (and I believe Vista adds an API to "fix" this), if data is available immediately, the overlapped functions can return success, but a queued completion status is ALSO posted - so be careful not to double-handle the data in that case.
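For ConnectNamedPipe, the "failure that is really a success" is typically ERROR_PIPE_CONNECTED, and the Vista API alluded to is most likely SetFileCompletionNotificationModes with FILE_SKIP_COMPLETION_PORT_ON_SUCCESS. A sketch of both cases, again reusing the hypothetical PipeState and g_iocp from above:

```cpp
void PostAccept(PipeState* ps)
{
    if (!ConnectNamedPipe(ps->pipe, &ps->ov))
    {
        switch (GetLastError())
        {
        case ERROR_IO_PENDING:
            break;  // normal asynchronous case: a completion will be queued later
        case ERROR_PIPE_CONNECTED:
            // The call "failed", but a client connected between CreateNamedPipe and
            // ConnectNamedPipe. No completion will be queued, so handle it now; one
            // option is to post our own packet so there is a single code path:
            PostQueuedCompletionStatus(g_iocp, 0,
                                       reinterpret_cast<ULONG_PTR>(ps), &ps->ov);
            break;
        default:
            // Real error: close the pipe and free ps.
            break;
        }
    }
}

void PostRead(PipeState* ps)
{
    ps->ov = {};
    if (ReadFile(ps->pipe, ps->buffer, sizeof(ps->buffer), nullptr, &ps->ov))
    {
        // Data was available immediately and ReadFile returned TRUE, but a queued
        // completion is STILL posted -- do not also handle the data here, or it
        // will be processed twice.
    }
    else if (GetLastError() != ERROR_IO_PENDING)
    {
        // Real error: close the pipe and free ps.
    }
}
```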
On Windows, the most efficient way to build a concurrent server is to use an asynchronous model with completion ports. But yes, you can use a thread pool with blocking I/O too, as that is a simpler programming abstraction.
Vista/Windows Server 2008 provide a thread pool abstraction.
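As an illustration of Q1/Q2 with that abstraction: instead of calling CreateThread per client, the server can hand each connected pipe to the system thread pool via TrySubmitThreadpoolCallback (Vista and later), which reuses its worker threads. The pipe name and the echo loop below are placeholders, not a definitive implementation:

```cpp
#define _WIN32_WINNT 0x0600     // TrySubmitThreadpoolCallback requires Vista or later
#include <windows.h>

// Runs on a pooled worker thread; services one client with blocking I/O.
VOID CALLBACK ServeClient(PTP_CALLBACK_INSTANCE, PVOID context)
{
    HANDLE pipe = static_cast<HANDLE>(context);
    char buffer[4096];
    DWORD read = 0;
    while (ReadFile(pipe, buffer, sizeof(buffer), &read, nullptr) && read > 0)
    {
        DWORD written = 0;
        WriteFile(pipe, buffer, read, &written, nullptr);   // echo back, as a stand-in
    }
    DisconnectNamedPipe(pipe);
    CloseHandle(pipe);
}

int main()
{
    for (;;)
    {
        HANDLE pipe = CreateNamedPipeW(
            L"\\\\.\\pipe\\example",                        // hypothetical pipe name
            PIPE_ACCESS_DUPLEX,
            PIPE_TYPE_BYTE | PIPE_READMODE_BYTE | PIPE_WAIT,
            PIPE_UNLIMITED_INSTANCES, 4096, 4096, 0, nullptr);
        if (pipe == INVALID_HANDLE_VALUE)
            return 1;

        // Blocks until a client calls CreateFile on the pipe name.
        if (ConnectNamedPipe(pipe, nullptr) || GetLastError() == ERROR_PIPE_CONNECTED)
            TrySubmitThreadpoolCallback(ServeClient, pipe, nullptr);  // pooled thread
        else
            CloseHandle(pipe);
    }
}
```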