Should the double-checked locking pattern be used with ReaderWriterLockSlim?
Is the double-checked locking pattern necessary when using ReaderWriterLockSlim?
Consider this scenario: I have a dictionary. Things can be added to it, but nothing can ever be removed from it. Adding something can be a very expensive operation in terms of time (only hundreds of milliseconds, but still costly relative to the rest of the app). If I wanted to add something and it wasn't already there, would there be anything to gain by:
- first acquiring a read lock, then checking for existence,
- then entering an upgradable read lock, and checking again,
- then entering a write lock if the item is still not present in the dictionary?
Something like the following:
void populateIfNotPresent( object thing )
{
    _lock.EnterReadLock( ) ;
    bool there = _dictionary.ContainsKey( thing ) ;
    _lock.ExitReadLock( ) ;

    // Remember, the specs say nothing can be removed from this dictionary.
    if ( !there )
    {
        _lock.EnterUpgradeableReadLock( ) ;
        try
        {
            if ( !_dictionary.ContainsKey( thing ) )
            {
                _lock.EnterWriteLock( ) ;
                try
                {
                    populate( thing ) ;
                }
                finally
                {
                    _lock.ExitWriteLock( ) ;
                }
            }
        }
        finally
        {
            _lock.ExitUpgradeableReadLock( ) ;
        }
    }
}
The docs say only one thread at a time can enter an upgradable read lock, but holding one doesn't stop any other threads from entering a read lock, so it appears that there is value in the double check.
What do you think? Is this overkill?
The ReaderWriterLockSlim class (like any other reader-writer lock) is meant for a high number of reads compared to the number of writes.
What you are doing is actually triple-checking, and it's superfluous; you might as well enter an upgradable read lock straight away. If the item exists, exit the lock; otherwise, upgrade to a write lock.
Your method indicates that the initial read lock is not providing any value here, since you have a good chance of performing the write anyway. And since an upgradable read lock doesn't block other readers, it shouldn't hurt you here.
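A minimal sketch of that simpler version, wrapped in a small illustrative class (the `Cache` name, the `Contains` helper, and the trivial `populate` body are placeholders for the questioner's real members):

```csharp
using System.Collections.Generic;
using System.Threading;

class Cache
{
    private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim( ) ;
    private readonly Dictionary<object, object> _dictionary = new Dictionary<object, object>( ) ;

    // Stand-in for the expensive add described in the question.
    private void populate( object thing ) => _dictionary[thing] = thing ;

    public void populateIfNotPresent( object thing )
    {
        // One upgradable read lock replaces the up-front read lock:
        // plain readers can still enter read locks while we hold it,
        // and only the actual write blocks them.
        _lock.EnterUpgradeableReadLock( ) ;
        try
        {
            if ( !_dictionary.ContainsKey( thing ) )
            {
                _lock.EnterWriteLock( ) ;
                try
                {
                    populate( thing ) ;
                }
                finally
                {
                    _lock.ExitWriteLock( ) ;
                }
            }
        }
        finally
        {
            _lock.ExitUpgradeableReadLock( ) ;
        }
    }

    public bool Contains( object thing )
    {
        _lock.EnterReadLock( ) ;
        try   { return _dictionary.ContainsKey( thing ) ; }
        finally { _lock.ExitReadLock( ) ; }
    }
}
```

The trade-off is that upgradable read locks serialize against each other, so this helps only while reads through plain read locks dominate.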
However, if this is the only place where you do reads and writes (or where the majority of them take place), then there's a problem: your ratio of reads to writes isn't high enough to warrant a reader-writer lock, and you should look to some other synchronization method.
That said, in the end it's all about testing the performance: if you're going to optimize an implementation, you need to measure its current performance to have a baseline to compare against; otherwise, it's just premature optimization.
"so it appears that there is value in the double check lock."
How much value? Well, if you're expecting a lot of cache misses, then it might make sense to take the upgradable lock; if you're not, then you're doing needless locking. In general, I would go with the simplest solution that gets the job done. Optimizing the locks is usually not where you'll get the biggest bang for your buck; look for bigger things to optimize first.
Suggestion:
Something that might give you a lot more bang for your buck is a striped dictionary (Java's StripedMap is a pretty good place to start, and it shouldn't be very hard to understand).
The basic idea behind the StripedMap/StripedDictionary is that you have an array of locks:
object[] syncs = new object[n] ;
for( int i = 0 ; i < n ; i++ )
    syncs[i] = new object( ) ;  // each stripe gets its own lock object
You should stripe your map with a sufficiently large number of stripes to allow the number of threads you have to enter the method without collision. I don't have any data to back this up, but suppose you're expecting up to 8 threads to enter the map; then you could probably use 8 or more locks (stripes) to ensure that all 8 threads can enter the map simultaneously. If you want better insurance against collisions, create more stripes, say 32 or 64.
When you enter the populateIfNotPresent method, you lock on one of those locks depending on the hash code:
void populateIfNotPresent( object thing )
{
    // Cast to uint: GetHashCode() can return a negative value, and a
    // negative index would throw. Note that each stripe must guard its
    // own portion of the data; a single Dictionary<K,V> is not safe to
    // mutate under different stripe locks.
    lock( syncs[(uint) thing.GetHashCode( ) % (uint) syncs.Length] )
    {
        if( !dictionary.ContainsKey( thing ) )
        {
            populate( thing ) ;
        }
    }
}
Suppose you have 8 stripes: now you're allowing up to 8 threads to safely enter and do an expensive operation that would otherwise have blocked the other 7 threads. The assumption, of course, is that the hashing function is robust enough to give a low probability of collisions.
You already expect populateIfNotPresent to be expensive IF the item is not present, but with a striped dictionary you can have multiple threads working on different sectors of the dictionary without bumping into each other. This will give you a much greater benefit than shaving a couple of CPU cycles off the existence check, because the expensive operation is the one that runs when the object does not exist.
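Putting the pieces together, a minimal striped-dictionary sketch might look like the following. The class and member names are illustrative, not from any library, and each stripe guards its own sub-dictionary, since one shared Dictionary<K,V> cannot safely be mutated under different locks:

```csharp
using System.Collections.Generic;

class StripedDictionary
{
    private readonly object[] _syncs ;
    private readonly Dictionary<object, object>[] _maps ;

    public StripedDictionary( int stripes )
    {
        _syncs = new object[stripes] ;
        _maps  = new Dictionary<object, object>[stripes] ;
        for( int i = 0 ; i < stripes ; i++ )
        {
            _syncs[i] = new object( ) ;
            _maps[i]  = new Dictionary<object, object>( ) ;
        }
    }

    // Cast to uint so a negative GetHashCode() can't yield a negative index.
    private int StripeOf( object thing ) =>
        (int) ( (uint) thing.GetHashCode( ) % (uint) _syncs.Length ) ;

    public void PopulateIfNotPresent( object thing )
    {
        int i = StripeOf( thing ) ;
        lock( _syncs[i] )
        {
            if( !_maps[i].ContainsKey( thing ) )
                _maps[i][thing] = Populate( thing ) ;  // the expensive call
        }
    }

    // Stand-in for the expensive population described in the question.
    private object Populate( object thing ) => thing ;

    public bool Contains( object thing )
    {
        int i = StripeOf( thing ) ;
        lock( _syncs[i] )
            return _maps[i].ContainsKey( thing ) ;
    }
}
```

Two threads whose keys hash to different stripes can now run the expensive populate concurrently; only keys that land on the same stripe serialize.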