How to deal with race conditions in multi-threading?
Here's an example:
if (control.InvokeRequired)
{
control.BeginInvoke(action, control);
}
else
{
action(control);
}
What if between the condition and the BeginInvoke call the control is disposed, for example?
Another example having to do with events:
var handler = MyEvent;
if (handler != null)
{
handler.BeginInvoke(null, EventArgs.Empty, null, null);
}
If MyEvent is unsubscribed between the first line and the if statement, the if statement will still execute and invoke the now-stale handler. Is that proper design? What if the unsubscription also destroys state necessary for properly handling the event? Not only does this pattern add boilerplate, but it's unintuitive and can lead to unexpected results on the client's end.
What say you, SO?
In my opinion, if any of this is an issue, both your thread management and object lifecycle management are reckless and need to be reexamined.
In the first example, the code is not symmetric: BeginInvoke will not wait for action to complete, but the direct call will; this is probably a bug already. If you expect yet another thread to potentially dispose the control you're working with, you have no choice but to catch the ObjectDisposedException -- and it may not be thrown until you're already inside action, and possibly not on the current thread, thanks to BeginInvoke.
It is improper to assume that once you have unsubscribed from an event you will no longer receive notifications on it. It doesn't even require multiple threads for this to happen -- only multiple subscribers. If the first subscriber does something while processing a notification that causes the second subscriber to unsubscribe, the notification currently "in flight" will still go to the second subscriber. You may be able to mitigate this with a guard clause at the top of your event handler routine, but you can't stop it from happening.
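To make the guard-clause mitigation concrete, here is a minimal Java sketch (the Subscriber/Publisher classes and field names are invented for this example; .NET events are simulated with a listener list). A subscriber that has logically unsubscribed flips a flag first, so any notification still "in flight" is dropped at the top of its handler:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.atomic.AtomicBoolean;

class Subscriber {
    private final AtomicBoolean active = new AtomicBoolean(true);
    int handled = 0;

    void unsubscribe(List<Subscriber> listeners) {
        active.set(false);           // mark dead first, then remove from the list
        listeners.remove(this);
    }

    void onEvent() {
        if (!active.get()) return;   // guard clause: drop in-flight notifications
        handled++;                   // normal event handling would go here
    }
}

class Publisher {
    final List<Subscriber> listeners = new CopyOnWriteArrayList<>();

    void fire() {
        // Iteration sees a snapshot of the list; a subscriber removed
        // mid-fire is still called unless its own guard rejects the call.
        for (Subscriber s : listeners) {
            s.onEvent();
        }
    }
}
```

As the answer notes, this only mitigates: the publisher may still call into the subscriber after unsubscription; the guard just makes that call a no-op.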
There are a few techniques for resolving a race condition:
- Wrap the whole thing with a mutex. Ensure there is a lock that each thread must acquire before it can even start running the racy section. That way, once you hold the lock, you know no other thread is using the resource and you can complete safely.
- Detect the race and recover from it. This can be very tricky, but it works well for some kinds of application. A typical approach is to keep a counter of the number of times a resource has changed: if you finish a task and find that the version number differs from when you started, read the new version and restart the task from the beginning (or just fail).
- Redesign the application to use only atomic actions, i.e., operations that complete in one step. This often means "compare-and-swap" operations, or fitting all of a transaction's data into a single disk block that can be written atomically.
- Redesign the application to use lock-free techniques. This option only makes sense when satisfying hard real-time constraints matters more than servicing every request, because lock-free designs inherently lose data (usually of some low-priority nature).
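The first option, in a Java sketch: the check and the act from the question's pattern both happen under one lock, so no other thread can dispose the resource in between (ResourceHolder and its methods are invented for illustration):

```java
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical resource whose "check" and "act" must happen atomically.
class ResourceHolder {
    private final ReentrantLock lock = new ReentrantLock();
    private boolean disposed = false;
    private int value = 0;

    void dispose() {
        lock.lock();
        try {
            disposed = true;
        } finally {
            lock.unlock();
        }
    }

    // Both the disposed check and the mutation run under the same lock,
    // so no other thread can dispose the resource between them.
    boolean incrementIfAlive() {
        lock.lock();
        try {
            if (disposed) return false;
            value++;
            return true;
        } finally {
            lock.unlock();
        }
    }

    int value() { return value; }
}
```

Note this only works if every thread that touches the resource, including the disposer, goes through the same lock.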
Which option is "right" depends on the application. Each option has performance trade-offs that may make the benefit of concurrency less appealing.
If this pattern is spread across multiple places in your application, it might be worth redesigning the API along these lines:
if (!control.InvokeIfRequired())
{
action(control);
}
This is the same idea as the standard JDK library's ConcurrentHashMap.putIfAbsent(...). Of course, you need to deal with synchronization inside the new control.InvokeIfRequired() method.
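For reference, ConcurrentHashMap.putIfAbsent collapses the same "check, then act" sequence into one atomic call (the register helper is made up for this demo):

```java
import java.util.concurrent.ConcurrentHashMap;

class PutIfAbsentDemo {
    static String register(ConcurrentHashMap<String, String> cache,
                           String key, String candidate) {
        // Atomic check-then-act: inserts candidate only if key is absent,
        // returning the previous mapping (null when the insert happened).
        String prior = cache.putIfAbsent(key, candidate);
        return prior == null ? candidate : prior;
    }
}
```

A separate containsKey-then-put sequence would have exactly the race the question describes; putIfAbsent moves the race inside the map, where it is resolved correctly.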