.NET Async Sockets: any benefit of SocketAsyncEventArgs over Begin/End in this scenario?
Socket has had these async methods since .NET 3.5 for use with SocketAsyncEventArgs (e.g. Socket.SendAsync()); the benefits are that, under the hood, they use IO completion ports and avoid the need to keep allocating.
We have made a class called UdpStream with a simple interface - just StartSend and a Completed event. It allocates two SocketAsyncEventArgs, one for sending and one for receiving. StartSend simply dispatches a message using SendAsync, and is called about 10 times a second. We use the Completed event on the receive SocketAsyncEventArgs, and after each event is handled we call ReceiveAsync again so that it forms a receive loop. Again, we receive roughly 10 times per second.
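A minimal sketch of the UdpStream described above, to make the setup concrete (class shape and names are my guess at the design, not the actual code): one SocketAsyncEventArgs for sends, one for receives, with ReceiveAsync re-issued from the Completed handler to form the receive loop.

```csharp
using System;
using System.Net;
using System.Net.Sockets;

public class UdpStream
{
    private readonly Socket _socket;
    private readonly SocketAsyncEventArgs _sendArgs = new SocketAsyncEventArgs();
    private readonly SocketAsyncEventArgs _receiveArgs = new SocketAsyncEventArgs();

    public event EventHandler<SocketAsyncEventArgs> Completed;

    public UdpStream(EndPoint local, EndPoint remote)
    {
        _socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
        _socket.Bind(local);
        _socket.Connect(remote); // connected UDP: SendAsync/ReceiveAsync need no endpoint

        _receiveArgs.SetBuffer(new byte[64 * 1024], 0, 64 * 1024);
        _receiveArgs.Completed += OnReceiveCompleted;
    }

    // Called ~10 times/s; the single send SAEA is reused for every call.
    public void StartSend(byte[] message)
    {
        _sendArgs.SetBuffer(message, 0, message.Length);
        _socket.SendAsync(_sendArgs);
    }

    public void StartReceiveLoop()
    {
        if (!_socket.ReceiveAsync(_receiveArgs)) // false = completed synchronously
            OnReceiveCompleted(this, _receiveArgs);
    }

    private void OnReceiveCompleted(object sender, SocketAsyncEventArgs e)
    {
        var handler = Completed;
        if (handler != null) handler(this, e); // hand the datagram to the consumer
        if (!_socket.ReceiveAsync(e))          // re-issue: forms the receive loop
            OnReceiveCompleted(sender, e);
    }
}
```

Note the ReceiveAsync return value: false means the operation completed synchronously and Completed will not fire, so the handler has to be invoked directly.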
Our system needs to support up to 500 of these UdpStream objects. In other words our server will communicate concurrently with 500 different IP endpoints.
I notice in the MSDN SocketAsyncEventArgs examples that they allocate N x SocketAsyncEventArgs, one for each outstanding receive operation you want to handle at one time. I am not clear exactly how this relates to our scenario - it seems to me that perhaps we are not getting the benefit of SocketAsyncEventArgs because we are simply allocating one per endpoint. If we end up with 500 receive SocketAsyncEventArgs I am presuming we will get no benefit. Perhaps we still get some benefit from IO completion ports?
Does this design make correct use of SocketAsyncEventArgs when scaling to 500?
For the case where we have a single "UdpStream" in use, is there any benefit to using SocketAsyncEventArgs vs using the older Begin/End API?
it seems to me that perhaps we are not getting the benefit of SocketAsyncEventArgs because we are simply allocating one per endpoint. If we end up with 500 receive SocketAsyncEventArgs I am presuming we will get no benefit.
There is still a huge benefit.
If you use the APM pattern (Begin/End methods), each and every BeginSend and BeginReceive call allocates an IAsyncResult instance. This means a full object allocation occurs roughly 10,000 times per second (500 × 10 sends + 500 × 10 receives). That puts a huge amount of extra overhead on the system, since it adds a lot of GC pressure.
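For comparison, a sketch of what the APM receive loop looks like (the OnDatagram hook is a hypothetical stand-in for your processing code): every BeginReceive call returns a freshly allocated IAsyncResult, and the loop re-issues one on each iteration, which is where the ~10,000 allocations/second come from.

```csharp
using System;
using System.Net.Sockets;

static class ApmReceiveLoop
{
    static readonly byte[] Buffer = new byte[64 * 1024];

    // Hypothetical processing hook; stands in for "handle the datagram".
    public static Action<int> OnDatagram = _ => { };

    public static void Start(Socket socket)
    {
        // Each BeginReceive call allocates a new IAsyncResult internally.
        socket.BeginReceive(Buffer, 0, Buffer.Length, SocketFlags.None, OnReceive, socket);
    }

    static void OnReceive(IAsyncResult ar)
    {
        var socket = (Socket)ar.AsyncState;
        int bytes = socket.EndReceive(ar);
        OnDatagram(bytes);
        // Re-issue: another IAsyncResult allocation on every iteration.
        socket.BeginReceive(Buffer, 0, Buffer.Length, SocketFlags.None, OnReceive, socket);
    }
}
```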
Switching to the method suggested for high-performance networking applications, you would preallocate the SocketAsyncEventArgs instances (500) and reuse them for every method call, thereby eliminating the GC pressure created during these operations.
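A minimal sketch of that preallocation pattern (the pool class and its names are illustrative, not a framework API): build all the SocketAsyncEventArgs instances and their buffers once at startup, then check them in and out, so steady-state sends and receives allocate nothing.

```csharp
using System.Collections.Generic;
using System.Net.Sockets;

// Fixed-size pool: every SAEA (and its buffer) is allocated up front
// and reused for each operation, so the GC sees no per-call garbage.
public class SocketAsyncEventArgsPool
{
    private readonly Stack<SocketAsyncEventArgs> _pool;
    private readonly object _lock = new object();

    public SocketAsyncEventArgsPool(int capacity, int bufferSize)
    {
        _pool = new Stack<SocketAsyncEventArgs>(capacity);
        for (int i = 0; i < capacity; i++)
        {
            var args = new SocketAsyncEventArgs();
            args.SetBuffer(new byte[bufferSize], 0, bufferSize);
            _pool.Push(args);
        }
    }

    public SocketAsyncEventArgs Rent()
    {
        lock (_lock) return _pool.Pop(); // steady state: no allocation
    }

    public void Return(SocketAsyncEventArgs args)
    {
        lock (_lock) _pool.Push(args);
    }
}
```

In your scenario the pool is arguably optional - with one send and one receive SAEA pinned to each UdpStream you already get the reuse; the pool pattern matters more when operations outnumber connections.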
Socket has had these async methods since .NET 3.5 for use with SocketAsyncEventArgs (e.g. Socket.SendAsync()); the benefits are that, under the hood, they use IO completion ports and avoid the need to keep allocating.
The Begin/End methods also use IO completion ports, so you are not losing that benefit either way.
For the case where we have a single "UdpStream" in use, is there any benefit to using SocketAsyncEventArgs vs using the older Begin/End API?
IMHO you should stick with what you know, since you'll get the product up and running faster. But I would also create a strict IO handling class that takes care of the transport. That makes it easier to switch to the new model if the transport performance proves to be a bottleneck.
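One way to sketch that isolation (interface and names are illustrative): code the rest of the server against a small transport interface, so a Begin/End-based implementation can later be swapped for a SocketAsyncEventArgs-based one without touching the callers.

```csharp
using System;

// The rest of the server only sees this interface, never Socket itself,
// so the transport implementation can be replaced wholesale.
public interface IUdpTransport : IDisposable
{
    void Send(byte[] datagram);
    event Action<byte[], int> DatagramReceived; // buffer + byte count
}

// Trivial in-memory implementation: echoes sends straight back to the
// receive event. Useful in unit tests and as a template for the real
// APM- or SAEA-backed versions.
public sealed class LoopbackTransport : IUdpTransport
{
    public event Action<byte[], int> DatagramReceived;

    public void Send(byte[] datagram)
    {
        var handler = DatagramReceived;
        if (handler != null) handler(datagram, datagram.Length);
    }

    public void Dispose() { }
}
```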