What are the advantages of Monitor.Pulse and Monitor.Wait?
I'm kind of new to concurrent programming and am trying to understand the benefits of using Monitor.Pulse and Monitor.Wait. MSDN's example is the following:
using System;
using System.Collections;
using System.Threading;

class MonitorSample
{
    const int MAX_LOOP_TIME = 1000;
    Queue m_smplQueue;

    public MonitorSample()
    {
        m_smplQueue = new Queue();
    }

    public void FirstThread()
    {
        int counter = 0;
        lock (m_smplQueue)
        {
            while (counter < MAX_LOOP_TIME)
            {
                //Wait, if the queue is busy.
                Monitor.Wait(m_smplQueue);
                //Push one element.
                m_smplQueue.Enqueue(counter);
                //Release the waiting thread.
                Monitor.Pulse(m_smplQueue);
                counter++;
            }
        }
    }

    public void SecondThread()
    {
        lock (m_smplQueue)
        {
            //Release the waiting thread.
            Monitor.Pulse(m_smplQueue);
            //Wait in the loop while the queue is busy.
            //Exit on the time-out when the first thread stops.
            while (Monitor.Wait(m_smplQueue, 1000))
            {
                //Pop the first element.
                int counter = (int)m_smplQueue.Dequeue();
                //Print the first element.
                Console.WriteLine(counter.ToString());
                //Release the waiting thread.
                Monitor.Pulse(m_smplQueue);
            }
        }
    }

    //Return the number of queue elements.
    public int GetQueueCount()
    {
        return m_smplQueue.Count;
    }

    static void Main(string[] args)
    {
        //Create the MonitorSample object.
        MonitorSample test = new MonitorSample();
        //Create the first thread.
        Thread tFirst = new Thread(new ThreadStart(test.FirstThread));
        //Create the second thread.
        Thread tSecond = new Thread(new ThreadStart(test.SecondThread));
        //Start the threads.
        tFirst.Start();
        tSecond.Start();
        //Wait for both threads to finish.
        tFirst.Join();
        tSecond.Join();
        //Print the number of queue elements.
        Console.WriteLine("Queue Count = " + test.GetQueueCount().ToString());
    }
}
and I can't see the benefit of using Wait and Pulse instead of this:
public void FirstThreadTwo()
{
    int counter = 0;
    while (counter < MAX_LOOP_TIME)
    {
        lock (m_smplQueue)
        {
            m_smplQueue.Enqueue(counter);
            counter++;
        }
    }
}

public void SecondThreadTwo()
{
    while (true)
    {
        lock (m_smplQueue)
        {
            int counter = (int)m_smplQueue.Dequeue();
            Console.WriteLine(counter.ToString());
        }
    }
}
Any help is most appreciated. Thanks
To describe "advantages", a key question is "over what?". If you mean "in preference to a hot loop", the CPU utilization win is obvious. If you mean "in preference to a sleep/retry loop", you get much faster response (Pulse doesn't need to wait as long) and lower CPU usage (you haven't woken up 2000 times unnecessarily).
Generally, though, people mean "in preference to Mutex etc".
I tend to use these extensively, even in preference to Mutex, reset-events, etc.; reasons:

- they are simple, and cover most of the scenarios I need
- they are relatively cheap, since they don't need to go all the way to OS handles (unlike Mutex etc., which are owned by the OS)
- I'm generally already using lock to handle synchronization, so chances are good that I already have a lock when I need to wait for something
- it achieves my normal aim - allowing 2 threads to signal completion to each other in a managed way
- I rarely need the other features of Mutex etc. (such as being inter-process)
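A minimal sketch of that last point, assuming nothing beyond the BCL: two threads signaling completion to each other through a lock they already share. The class and field names (`SignalSketch`, `_gate`, `_done`) are purely illustrative.

```csharp
using System;
using System.Threading;

class SignalSketch
{
    static readonly object _gate = new object();
    static bool _done; // the condition the waiting thread checks

    static void Main()
    {
        var worker = new Thread(() =>
        {
            Thread.Sleep(100); // simulate some work
            lock (_gate)
            {
                _done = true;
                Monitor.Pulse(_gate); // wake the waiting thread
            }
        });
        worker.Start();

        lock (_gate)
        {
            // Loop guards against waking up before the condition holds.
            while (!_done)
                Monitor.Wait(_gate); // releases _gate, sleeps, reacquires it
        }
        Console.WriteLine("done");
        worker.Join();
    }
}
```

Note that the same `_gate` object serves both for mutual exclusion and for signaling, which is exactly the "I already have a lock" point above.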
There is a serious flaw in your snippet: SecondThreadTwo() will fail badly when it tries to call Dequeue() on an empty queue. You probably got it to work by having FirstThreadTwo() execute a fraction of a second before the consumer thread, probably by starting it first. That's an accident, one that will stop working after running these threads for a while or starting them under a different machine load. Code like this can accidentally run error-free for quite a while, which makes the occasional failure very hard to diagnose.
There is no way to write a locking algorithm that blocks the consumer until the queue becomes non-empty using only the lock statement. A busy loop that constantly enters and exits the lock works, but it is a very poor substitute.
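To make that concrete, here is a sketch of a minimal blocking queue built from lock, Wait, and Pulse; the `BlockingQueue<T>` name and layout are illustrative only. The consumer blocks inside Dequeue instead of spinning or throwing on an empty queue:

```csharp
using System.Collections.Generic;
using System.Threading;

// Minimal blocking queue: Dequeue waits until an item arrives.
class BlockingQueue<T>
{
    private readonly Queue<T> _items = new Queue<T>();

    public void Enqueue(T item)
    {
        lock (_items)
        {
            _items.Enqueue(item);
            Monitor.Pulse(_items); // wake one waiting consumer
        }
    }

    public T Dequeue()
    {
        lock (_items)
        {
            while (_items.Count == 0)  // re-check after every wake-up
                Monitor.Wait(_items);  // releases the lock while sleeping
            return _items.Dequeue();
        }
    }
}
```

The while loop around Wait is essential: another consumer may have emptied the queue between the Pulse and this thread reacquiring the lock.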
Writing this kind of code is best left to the threading gurus; it is very hard to prove it works in all cases. That means not just the absence of failure modes like this one, or threading races, but also the general fitness of the algorithm: avoiding deadlock, livelock, and thread convoys. In the .NET world, the gurus are Jeffrey Richter and Joe Duffy. They eat locking designs for breakfast, both in their books and in their blogs and magazine articles. Stealing their code is expected and accepted. Some of it even made its way into the .NET Framework, with the additions in the System.Collections.Concurrent namespace.
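For example, on .NET 4 and later you can sidestep hand-rolled Wait/Pulse code entirely with BlockingCollection<T> from that namespace. A minimal producer/consumer sketch (the variable names are just for illustration):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class ConcurrentSketch
{
    static void Main()
    {
        var queue = new BlockingCollection<int>();

        var producer = Task.Run(() =>
        {
            for (int i = 0; i < 5; i++)
                queue.Add(i);
            queue.CompleteAdding(); // signal: no more items are coming
        });

        // GetConsumingEnumerable blocks until items arrive and
        // ends cleanly once CompleteAdding has been called.
        foreach (int item in queue.GetConsumingEnumerable())
            Console.WriteLine(item);

        producer.Wait();
    }
}
```

All the blocking, waking, and empty-queue handling that the hand-written versions above struggle with is done inside the collection.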
It is a performance improvement to use Monitor.Pulse/Wait, as you have guessed: acquiring a lock is a relatively expensive operation. By using Monitor.Wait, your thread will sleep until some other thread wakes it up with Monitor.Pulse. With the lock-only version you'll see the difference in Task Manager, because one processor core will be pegged even while nothing is in the queue.
The advantages of Pulse and Wait are that they can be used as building blocks for all other synchronization mechanisms, including mutexes, events, barriers, etc. There are things that can be done with Pulse and Wait that cannot be done with any other synchronization mechanism in the BCL.

All of the interesting stuff happens inside the Wait method. Wait will exit the critical section and put the thread in the WaitSleepJoin state by placing it in the waiting queue. Once Pulse is called, the next thread in the waiting queue moves to the ready queue. Once the thread switches to the Running state it re-enters the critical section. This is important enough to repeat another way: Wait will release the lock and reacquire it in an atomic fashion. No other synchronization mechanism has this feature.
The best way to envision this is to try to replicate the behavior with some other strategy and then see what can go wrong. Let us try this exercise with a ManualResetEvent, since the Set and WaitOne methods seem like they may be analogous. Our first attempt might look like this.
void FirstThread()
{
    lock (mre)
    {
        // Do stuff.
        mre.Set();
        // Do stuff.
    }
}

void SecondThread()
{
    lock (mre)
    {
        // Do stuff.
        while (!CheckSomeCondition())
        {
            mre.WaitOne();
        }
        // Do stuff.
    }
}
It should be easy to see that this code can deadlock. So what happens if we try this naive fix?
void FirstThread()
{
    lock (mre)
    {
        // Do stuff.
        mre.Set();
        // Do stuff.
    }
}

void SecondThread()
{
    lock (mre)
    {
        // Do stuff.
    }
    while (!CheckSomeCondition())
    {
        mre.WaitOne();
    }
    lock (mre)
    {
        // Do stuff.
    }
}
Can you see what can go wrong here? Since we did not atomically re-enter the lock after the wait condition was checked, another thread could get in and invalidate the condition. In other words, another thread could do something that causes CheckSomeCondition to start returning false again before the following lock was reacquired. That can definitely cause a lot of weird problems if your second block of code requires that the condition be true.
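For contrast, here is a sketch of the same shape done with Monitor.Wait, where the release and reacquisition of the lock are atomic. The names (`MonitorFix`, `_gate`, `_condition`) are illustrative, and `_condition` stands in for whatever CheckSomeCondition tests.

```csharp
using System;
using System.Threading;

class MonitorFix
{
    static readonly object _gate = new object();
    static bool _condition;

    static void FirstThread()
    {
        lock (_gate)
        {
            _condition = true;
            Monitor.Pulse(_gate); // wake the waiter; it cannot resume
                                  // until this thread releases _gate
        }
    }

    static void SecondThread()
    {
        lock (_gate)
        {
            while (!_condition)
                Monitor.Wait(_gate); // release + sleep + reacquire, atomically
            // _gate is held here AND the condition holds: there is no
            // window in which another thread can invalidate it.
            Console.WriteLine("condition held under the lock");
        }
    }

    static void Main()
    {
        var t2 = new Thread(SecondThread);
        t2.Start();
        Thread.Sleep(50); // let the waiter block first (demo only)
        var t1 = new Thread(FirstThread);
        t1.Start();
        t1.Join();
        t2.Join();
    }
}
```

Because Wait returns already holding the lock, the check and the subsequent "do stuff" cannot be separated by another thread, which is exactly the guarantee the ManualResetEvent versions above fail to provide.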