
WCF client causes server to hang until connection fault

The text below is an effort to expand on and add color to this question:

How do I prevent a misbehaving client from taking down the entire service?

I have essentially this scenario: a WCF service is up and running with a client callback contract using straightforward, simple one-way communication, not very different from this one:

public interface IMyClientContract
{
  [OperationContract(IsOneWay = true)]
  void SomethingChanged(simpleObject myObj);
}

I'm calling this method potentially thousands of times a second from the service to what will eventually be about 50 concurrently connected clients, with as low a latency as possible (<15 ms would be nice). This works fine until I set a breakpoint on one of the client apps connected to the server: after maybe 2-5 seconds the service hangs, and none of the other clients receive any data for about 30 seconds or so, until the service registers a connection fault event and disconnects the offending client. After that, all the other clients continue on their merry way receiving messages.

I've done research on serviceThrottling, concurrency tweaking, setting ThreadPool minimum threads, WCF secret sauces, and the whole nine yards (roughly the kind of tuning sketched at the end of this question), but at the end of the day this article, MSDN - WCF Essentials, One-Way Calls, Callbacks and Events, describes exactly the issue I'm having without really making a recommendation:

The third solution that allows the service to safely call back to the client is to have the callback contract operations configured as one-way operations. Doing so enables the service to call back even when concurrency is set to single-threaded, because there will not be any reply message to contend for the lock.

But earlier in the article it describes the issue I'm seeing, only from the client's perspective:

When one-way calls reach the service, they may not be dispatched all at once and may be queued up on the service side to be dispatched one at a time, all according to the service configured concurrency mode behavior and session mode. How many messages (whether one-way or request-reply) the service is willing to queue up is a product of the configured channel and the reliability mode. If the number of queued messages has exceeded the queue's capacity, then the client will block, even when issuing a one-way call

I can only assume that the reverse is true: the number of messages queued up for the client has exceeded the queue's capacity, and the thread pool is now filled with threads attempting to call this client that are all blocked.

What is the right way to handle this? Should I research a way to check how many messages are queued at the service communication layer per client and abort their connections after a certain limit is reached?

It almost seems that if the WCF service itself is blocking on a queue filling up, then all the async / one-way / fire-and-forget strategies I could ever implement inside the service will still get blocked whenever one client's queue gets full.
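
For reference, the throttling and thread-pool tuning mentioned above was roughly the following (a sketch only; the HostSetup/OpenTunedHost names and the values are illustrative, and none of it stopped one slow client from stalling the rest):

using System.ServiceModel;
using System.ServiceModel.Description;
using System.Threading;

static class HostSetup
{
    public static ServiceHost OpenTunedHost()
    {
        var host = new ServiceHost(typeof(MyService));

        //raise the service throttle well above the expected ~50 clients
        host.Description.Behaviors.Add(new ServiceThrottlingBehavior
        {
            MaxConcurrentCalls = 256,
            MaxConcurrentSessions = 256,
            MaxConcurrentInstances = 256
        });

        //raise the thread pool floor so bursts don't wait on thread injection
        ThreadPool.SetMinThreads(100, 100);

        host.Open();
        return host;
    }
}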


I don't know much about the client callbacks, but this sounds similar to generic WCF code-blocking issues. I often solve these problems by spawning a BackgroundWorker and performing the client call in that thread. Meanwhile, the main thread counts how long the child thread is taking; if the child has not finished within a few milliseconds, the main thread just moves on and abandons the thread (it eventually dies by itself, so there is no memory leak). This is basically what Mr. Graves suggests with the phrase "fire-and-forget".
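
A rough sketch of that idea, using Task.Run instead of a BackgroundWorker (the 15 ms budget and the CallbackDispatcher/TrySend names are just illustrative):

using System;
using System.Threading.Tasks;

static class CallbackDispatcher
{
    public static void TrySend(IMyClientContract callback, simpleObject myObj)
    {
        var call = Task.Run(() => callback.SomethingChanged(myObj));
        try
        {
            //give the client a small time budget; if the call hasn't returned,
            //move on and let the abandoned task finish (or fault) on its own
            if (!call.Wait(TimeSpan.FromMilliseconds(15)))
            {
                Console.WriteLine("Client callback is slow; abandoning this send.");
            }
        }
        catch (AggregateException ex)
        {
            //the callback threw quickly, e.g. the channel is already faulted
            Console.WriteLine("Client callback failed: " + ex.InnerException);
        }
    }
}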


Update:

I implemented a fire-and-forget setup for calling the clients' callback channels, and the server no longer blocks once the buffer to a client fills up.

MyEvent is an event whose delegate matches one of the methods defined in the WCF client contract; when clients connect, I'm essentially adding each one's callback to the event:

MyEvent += OperationContext.Current.GetCallbackChannel<IFancyClientContract>().SomethingChanged;
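
For context, the wiring is roughly this (a sketch; it assumes the callback on IFancyClientContract takes the serialized byte[] payload, as the send loop below implies, and the Subscribe / SomethingChangedHandler names are just illustrative):

using System.ServiceModel;

public delegate void SomethingChangedHandler(byte[] data);

public class MyService
{
    public static event SomethingChangedHandler MyEvent;

    public void Subscribe()
    {
        //called when a client connects: grab its callback channel and add it to the event
        MyEvent += OperationContext.Current
            .GetCallbackChannel<IFancyClientContract>()
            .SomethingChanged;
    }
}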

etc., and then to send this data to all clients, I'm doing the following:

//serialize once using protobuf, then fan the payload out to every subscribed callback
using (var ms = new MemoryStream())
{
    ProtoBuf.Serializer.Serialize(ms, new SpecialDataTransferObject(inputData));
    byte[] data = ms.ToArray(); //ToArray, not GetBuffer: GetBuffer can include unused trailing bytes
    Parallel.ForEach(MyEvent.GetInvocationList(), p => ThreadUtil.FireAndForget(p, data));
}

In the ThreadUtil class, I made essentially the following change to the code defined in the fire-and-forget article:

static void InvokeWrappedDelegate(Delegate d, object[] args)
{
    try
    {
        d.DynamicInvoke(args);
    }
    catch (Exception ex)
    {
        //this will eventually throw once the client's WCF callback channel has filled up
        //and timed out, and it will throw once for every payload you ever tried to send
        //that client, so do some smarter logging here!
        Console.WriteLine("Error calling client, attempting to disconnect. " + ex.Message);
        try
        {
            //d.Target is the callback channel (an IContextChannel), kept in a dictionary of
            //active connections cross-referenced by hash code for exactly this occasion
            MyService.SingletonServiceController.TerminateClientChannelByHashcode(d.Target.GetHashCode());
        }
        catch (Exception ex2)
        {
            Console.WriteLine("Attempt to disconnect client failed: " + ex2.ToString());
        }
    }
}
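
The FireAndForget entry point itself isn't shown above; under the same assumptions, a minimal version can be as simple as queuing the wrapped invoke on the thread pool and returning immediately:

using System;
using System.Threading;

static class ThreadUtil
{
    public static void FireAndForget(Delegate d, params object[] args)
    {
        //hand the call to the thread pool and return right away;
        //InvokeWrappedDelegate (above) deals with any exception
        ThreadPool.QueueUserWorkItem(state => InvokeWrappedDelegate(d, args));
    }

    //...InvokeWrappedDelegate as shown above...
}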

I don't have any good ideas for how to go and kill all the pending sends the server is still waiting to see delivered. Once I get the first exception, I should in theory be able to go and terminate all the other requests sitting in a queue somewhere, but this setup is functional and meets the objectives.
