
Do boost::wait and boost::condition have to share the same mutex object?

boost::condition_variable cond;
boost::mutex mutex;

//thread #1
for(;;)
{
    D * d = nullptr;

    while( cb.pop(d) )  //cb is a circular buffer and manages its own mutex/lock internally
    {
        //...do something with d
    }
    boost::unique_lock<boost::mutex> lock( mutex );
    cond.wait( lock );
}

//thread #2
while(1)
{
    getchar();

    for( int i = 0 ; i < 1000 ; ++i )
    {
        cb.push(new D(i));      //one producer so no lock required

        cond.notify_one();     // no lock required here ?
    }
}

I am wondering if it is OK for my data container to have its own lock to avoid data races, while on the other hand boost's wait uses its own lock/mutex mechanism as specified by the Boost documentation?

Also, thread #1 is the consumer; since I have only one thread that consumes, the lock required by wait seems a little superfluous, doesn't it?

EDIT: I don't care about missing an update. When I receive an update I update an object with the received data. I just want the freshest update, not necessarily every update.


You can have as many locks as you want, but you'll get race conditions unless both the pop and the push are protected by the same mutex as the wait and the notify (and the lock is not released between the decision to wait and the actual wait). The standard idiom is:

//  thread #1
// ...
{
    boost::unique_lock<boost::mutex> lock( mutex );
    while ( !cb.pop( d ) )
        cond.wait( lock );
}
// process d


//  thread #2
// ...
{
    boost::unique_lock<boost::mutex> lock( mutex );
    cb.push( new D(i) );
    cond.notify_one();
}
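
As an aside, boost::condition_variable also has a predicate overload of wait that folds the while loop into the call. A minimal sketch of the consumer side, assuming a C++11 lambda is available (a functor would do otherwise):

boost::unique_lock<boost::mutex> lock( mutex );
//  Equivalent to: while ( !cb.pop( d ) ) cond.wait( lock );
//  The predicate is re-checked after every wakeup, so spurious
//  wakeups are handled for you.
cond.wait( lock, [&]{ return cb.pop( d ); } );
//  process d, preferably after releasing the lock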

Trying to loop on the pop in thread #1 is more complicated, at least if you want to free the lock during processing of d. You'd need something like:

boost::unique_lock<boost::mutex> lock( mutex );
while ( cb.pop( d ) ) {
    lock.unlock();
    //  process d
    lock.lock();
}
cond.wait( lock );

It's more complicated, and I don't see what you'd gain from it. Just use the usual pattern, which is known to work reliably.

FWIW: your code is full of race conditions. For starters: the pop fails in thread 1, there's a context switch, thread 2 does the push and the notify, then control returns to thread 1, which does the cond.wait. And waits, despite the fact that there's something in the queue.

I might add that there is almost never any justification for types like circular buffers to manage their own mutex locking. The granularity is too low. The exception is if the pop instruction actually waits until something is there, i.e. (based on std::deque):

void CircularBuffer::push( std::auto_ptr<T> in )
{
    boost::unique_lock<boost::mutex> l( myMutex );
    myQueue.push_back( in.get() );
    in.release();  // Only after the push_back has succeeded!
    myCondition.notify_all();
}

std::auto_ptr<T> CircularBuffer::pop()
{
    boost::unique_lock<boost::mutex> l( myMutex );
    while ( myQueue.empty() ) {
        myCondition.wait( l );
    }
    std::auto_ptr<T> result( myQueue.front() );
    myQueue.pop_front();
    return result;
}

(Note the use of auto_ptr in the interface. Once the provider has passed the object into the queue, it no longer has a right to access it.)
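
Used from the original two threads, such a buffer makes the external mutex and condition variable unnecessary; a rough sketch, assuming cb is a CircularBuffer<D>:

//  thread #1 (consumer): pop() blocks inside the buffer until an
//  element is available, so no external wait is needed.
for ( ;; ) {
    std::auto_ptr<D> d = cb.pop();
    //  ... do something with *d
}

//  thread #2 (producer): ownership of each new D passes into the buffer.
for ( int i = 0; i < 1000; ++i ) {
    cb.push( std::auto_ptr<D>( new D( i ) ) );
}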


The condvar must use the mutex that protects the data (well, not exactly; a more precise statement follows below), otherwise you are going to miss updates:

producer             consumer

                     while(cb.pop()) ...;
cb.push();
cond.notify_one();
                     cond.wait(); // OOPS. I missed the notification!

To avoid this, you must in the consumer:

  1. Lock mutex
  2. Verify that the condition is not satisfied
  3. cond.wait(mutex);
  4. Back to verifying the condition (the mutex is again locked)

and in producer you must:

  1. Lock mutex
  2. Make the condition true (i.e. cb.push())
  3. cond.notify_one()
  4. Now you may finally unlock mutex.

So it does not necessarily have to be the lock that protects the data, but you do have to hold a lock across the final check and the wait in the consumer, and across setting the condition and the notify in the producer.
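
Applied to the original code, where cb keeps its own internal lock, that could look roughly like this (a sketch; here mutex guards only the check-and-wait and push-and-notify pairs, not the buffer itself):

//  consumer: hold mutex across the final failed pop and the wait
{
    boost::unique_lock<boost::mutex> lock( mutex );
    while ( !cb.pop( d ) )       //  cb still uses its own internal lock
        cond.wait( lock );
}
//  ... process d outside the lock

//  producer: hold the same mutex across making the condition true
//  and notifying, so the consumer cannot slip between its check
//  and its wait
{
    boost::unique_lock<boost::mutex> lock( mutex );
    cb.push( new D( i ) );
    cond.notify_one();
}

Since both threads acquire mutex before touching cb, the lock ordering stays consistent and there is no deadlock risk.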

On a side note, it's actually possible to create a notification mechanism that does not need to cooperate with a lock. It needs separate operations for "sign me up for the signal" and "wait for the signal", where the latter returns immediately if the signal has occurred since the first (and you check the condition between them). I have, however, not seen such a mechanism in any portable threading library.

Edit: On yet another side note, semaphores may be more suitable for managing a message queue. Have a semaphore that reflects the number of items in the queue: up it for every push and down it before every pop (or just embed it in the queue itself, so pop will simply wait for something to appear if the queue is empty).
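
Since Boost.Thread has no ready-made semaphore class, a minimal counting semaphore can be sketched from the same primitives (hypothetical code, not from any of the answers):

#include <boost/thread/mutex.hpp>
#include <boost/thread/condition_variable.hpp>

class Semaphore
{
public:
    explicit Semaphore( unsigned initial = 0 ) : myCount( initial ) {}

    void up()                    //  producer: call after every push
    {
        boost::unique_lock<boost::mutex> l( myMutex );
        ++myCount;
        myCondition.notify_one();
    }

    void down()                  //  consumer: call before every pop
    {
        boost::unique_lock<boost::mutex> l( myMutex );
        while ( myCount == 0 )   //  guards against spurious wakeups
            myCondition.wait( l );
        --myCount;
    }

private:
    boost::mutex              myMutex;
    boost::condition_variable myCondition;
    unsigned                  myCount;
};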


If the push and pop functions of your ring-buffer are thread-safe, then you do not need additional synchronization.

If you had multiple readers, you could use a readers/writers lock to enable multiple threads to read at the same time.


Normally, condition variables should be used to signal a condition shared between threads, so the condition should be accessed in a thread-safe way. However, the mutex needs to be unlocked while waiting, so other threads can change the condition. See here for an example with a queue.

In your case, you already have a thread-safe container. Wouldn't it be better to put the condition variable in your container and let it use the container's mutex?
