
Can I optimise this concurrency code any better?

I've recently written my first piece of multi-threading code, and I'd appreciate some comments on it.

It delivers video samples from a buffer that is filled in the background by a stream parser (outside the scope of this question). If the buffer is empty, it needs to wait until the buffer level becomes acceptable and then continue.

Code is for Silverlight 4, some error-checking removed:

// External class requests samples - can happen multiple times concurrently
protected override void GetSampleAsync()
{
    Interlocked.Add(ref getVideoSampleRequestsOutstanding, 1);                
}

// Runs on a background thread
void DoVideoPumping()
{
    do
    {
        if (getVideoSampleRequestsOutstanding > 0)
        {
            PumpNextVideoSample();

            // Decrement the counter
            Interlocked.Add(ref getVideoSampleRequestsOutstanding, -1);
        }
        else Thread.Sleep(0);  

    } while (!this.StopAllBackgroundThreads);
}       

void PumpNextVideoSample()
{
    // If the video sample buffer is empty, tell stream parser to give us more samples 
    bool MyVidBufferIsEmpty = false;
    bool parserAtEndOfStream = false;
    ParseMoreSamplesIfMyVideoBufferIsLow(ref MyVidBufferIsEmpty, ref parserAtEndOfStream);

    if (parserAtEndOfStream)  // No more data, start running down buffers
        this.RunningDownVideoBuffer = true;
    else if (MyVidBufferIsEmpty)  
    {
        // Buffer is empty, wait for samples
        WaitingOnEmptyVideoBuffer = true;
        WaitOnEmptyVideoBuffer.WaitOne();
    }

    // Buffer is OK
    var nextSample = DeQueueVideoSample(); // thread-safe, returns null if there's a problem

    // Send the sample to the external renderer
    ReportGetSampleCompleted(nextSample);

}

The code seems to work well. However, I'm told that using Thread.Sleep(...) like this is 'evil': when no samples are being requested, my code loops unnecessarily, eating up CPU time.

Can my code be further optimised? Since my class is designed for an environment where samples WILL be requested, does the potential 'pointless loop' scenario outweigh the simplicity of its current design?

Comments much appreciated.


This looks like the classic producer/consumer pattern. The normal way to solve this is with what is known as a blocking queue.

Version 4.0 of .NET introduced a set of efficient, well-designed concurrent collection classes for exactly this type of problem. I think BlockingCollection<T> will serve your present needs.

If you don't have access to .NET 4.0 then there are many websites with implementations of blocking queues. Personally, my standard reference is Joe Duffy's book, Concurrent Programming on Windows. A good start would be Marc Gravell's blocking queue presented here on Stack Overflow.

The first advantage of using a blocking queue is that you stop writing busy-wait loops, hacky calls to Sleep(), and so on. Using a blocking queue to avoid that sort of code is always a good idea.

However, I perceive a more important benefit to using a blocking queue. At the moment your code to produce work items, consume them, and manage the queue is all intermingled. If you use a blocking queue correctly, you will end up with much better-factored code that keeps the components of the algorithm separate: the queue, the producer and the consumer.
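
For illustration, here is a minimal sketch of what that could look like with BlockingCollection<T>, assuming it is available in your environment. The class name, the Stop() method and the use of a plain object as the request token are placeholders of mine, not your actual API; the two method names just mirror the question for readability:

using System.Collections.Concurrent;

class SamplePump
{
    // Each outstanding sample request is represented by one token in the queue.
    readonly BlockingCollection<object> requests = new BlockingCollection<object>();

    // Producer side: called by the external class whenever it wants a sample.
    public void GetSampleAsync()
    {
        requests.Add(new object());
    }

    // Consumer side: runs on the background thread. GetConsumingEnumerable()
    // blocks while the queue is empty, so there is no busy-wait and no Sleep().
    public void DoVideoPumping()
    {
        foreach (var request in requests.GetConsumingEnumerable())
        {
            PumpNextVideoSample();
        }
    }

    // Call on shutdown: the consuming enumeration finishes once the queue
    // drains, and the background thread exits cleanly.
    public void Stop()
    {
        requests.CompleteAdding();
    }

    void PumpNextVideoSample() { /* as in the question */ }
}

In this shape, CompleteAdding() takes over the role of the StopAllBackgroundThreads flag: the consumer loop ends on its own instead of polling a boolean.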


You have one main problem: Thread.Sleep()

It has a granularity of roughly 20 ms (it is limited by the system timer resolution), which is rather crude for video. In addition, Sleep(0) only yields the remainder of the time slice to ready threads of equal priority, so lower-priority threads can be starved.

The better approach is to wait on a WaitHandle, preferably one built into a queue.


Marc Gravell's blocking queue (linked in the answer above) is a good and simple example of one.
The key point is that the threads need to coordinate with signals, not by polling the value of a counter or the state of a data structure. Polling burns CPU, so you need signals instead (Monitor.Wait and Monitor.Pulse).
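
As a rough sketch of what that signalling looks like (not production code; the class name and members are mine), a Monitor-based blocking queue boils down to this:

using System.Collections.Generic;
using System.Threading;

public sealed class SimpleBlockingQueue<T>
{
    readonly Queue<T> queue = new Queue<T>();
    readonly object sync = new object();

    public void Enqueue(T item)
    {
        lock (sync)
        {
            queue.Enqueue(item);
            Monitor.Pulse(sync); // signal one waiting consumer
        }
    }

    public T Dequeue()
    {
        lock (sync)
        {
            // Wait releases the lock and sleeps until a Pulse arrives,
            // so an empty queue costs no CPU at all.
            while (queue.Count == 0)
                Monitor.Wait(sync);
            return queue.Dequeue();
        }
    }
}

The consumer sleeps inside Monitor.Wait instead of spinning on a counter, which is exactly the difference between signalling and polling.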


You could use an AutoResetEvent rather than Thread.Sleep in a manual loop. It's fairly simple to do:

AutoResetEvent e = new AutoResetEvent(false);
int requestsOutstanding;

void RequestSample()
{
    Interlocked.Increment(ref requestsOutstanding);
    e.Set(); // also set this when StopAllBackgroundThreads becomes true!
}

void Pump()
{
    while (!this.StopAllBackgroundThreads) {
        e.WaitOne(); // sleep until a request (or a stop) is signalled
        int leftOver = Interlocked.Decrement(ref requestsOutstanding);
        while (leftOver >= 0) {
            PumpNextVideoSample();
            leftOver = Interlocked.Decrement(ref requestsOutstanding);
        }
        // The final decrement overshot to -1; add it back so the count stays accurate.
        Interlocked.Increment(ref requestsOutstanding);
    }
}

Note that it's probably even more attractive to use a semaphore. Basically, synchronization overhead is liable to be almost nil in your scenario anyhow, and a simpler programming model is worth it. With a semaphore, you'd have something like this:

SimpleSemaphore sem = new SimpleSemaphore();
void RequestSample()
{
    sem.Release();
}

void Pump()
{
    while (true) {
        sem.WaitOne(); // on shutdown, call sem.Release() once so the pump wakes up and sees the stop flag
        if(this.StopAllBackgroundThreads) break;
        PumpNextVideoSample();
    }
}

...I'd say the simplicity is worth it!

e.g. a simple implementation of a semaphore:

public sealed class SimpleSemaphore
{
    readonly object sync = new object();
    int val; // current count; only touched while holding sync

    public void WaitOne()
    {
        lock (sync) {
            while (true) {
                if (val > 0) {
                    val--;
                    return;
                }
                // Releases the lock and sleeps until Release() pulses.
                Monitor.Wait(sync);
            }
        }
    }

    public void Release()
    {
        lock (sync) {
            if (val == int.MaxValue)
                throw new InvalidOperationException("Too many releases without waits.");
            val++;
            Monitor.Pulse(sync); // wake one waiter, if any
        }
    }
}

On one trivial benchmark this simple implementation needs ~1.7 seconds where the built-in Semaphore needs 7.5 and SemaphoreSlim needs 1.1; surprisingly reasonable, in other words.
