ReentrantReadWriteLock - many readers at a time, one writer at a time?
I'm somewhat new to multithreaded environments and I'm trying to come up with the best solution for the following situation:
I read data from a database once daily in the morning and store the data in a HashMap in a Singleton object. I have a setter method that is called only when an intra-day DB change occurs (which happens 0-2 times a day).
I also have a getter which returns an element in the map, and this method is called hundreds of times a day.
I'm worried about the case where the getter is called while I'm emptying and recreating the HashMap, thus trying to find an element in an empty/malformed map. If I make these methods synchronized, it prevents two readers from accessing the getter at the same time, which could be a performance bottleneck. I don't want to take too much of a performance hit since writes happen so infrequently. If I use a ReentrantReadWriteLock, will this force a queue on anyone calling the getter until the write lock is released? Does it allow multiple readers to access the getter at the same time? Will it enforce only one writer at a time?
Is coding this just a matter of...
private final ReentrantReadWriteLock readWriteLock = new ReentrantReadWriteLock();
private final Lock read = readWriteLock.readLock();
private final Lock write = readWriteLock.writeLock();

public HashMap getter(String a) {
    read.lock();
    try {
        return myStuff_.get(a);
    } finally {
        read.unlock();
    }
}

public void setter() {
    write.lock();
    try {
        myStuff_ = // my logic
    } finally {
        write.unlock();
    }
}
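To answer the questions directly: yes, readers share the read lock, and a writer excludes everyone. A self-contained sketch along those lines (the class name, field names, and `reload` method are invented for illustration) that also demonstrates the locking behavior with `tryLock`:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical daily cache guarded by a read-write lock.
public class DailyCache {
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
    private final Lock read = rwLock.readLock();
    private final Lock write = rwLock.writeLock();
    private Map<String, String> stuff = new HashMap<>();

    public String get(String key) {
        read.lock();
        try {
            return stuff.get(key);
        } finally {
            read.unlock();
        }
    }

    public void reload(Map<String, String> fresh) {
        write.lock();
        try {
            stuff = new HashMap<>(fresh);
        } finally {
            write.unlock();
        }
    }

    public static void main(String[] args) throws Exception {
        DailyCache cache = new DailyCache();
        Map<String, String> initial = new HashMap<>();
        initial.put("a", "1");
        cache.reload(initial);
        System.out.println(cache.get("a"));

        // Park a background thread holding the read lock.
        CountDownLatch readHeld = new CountDownLatch(1);
        CountDownLatch release = new CountDownLatch(1);
        Thread reader = new Thread(() -> {
            cache.read.lock();
            try {
                readHeld.countDown();
                release.await();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                cache.read.unlock();
            }
        });
        reader.start();
        readHeld.await();

        // A second reader acquires the read lock immediately: readers are concurrent.
        boolean gotRead = cache.read.tryLock();
        System.out.println(gotRead);
        if (gotRead) cache.read.unlock();

        // A writer cannot acquire the write lock while any read lock is held.
        boolean gotWrite = cache.write.tryLock();
        System.out.println(gotWrite);

        release.countDown();
        reader.join();
    }
}
```

The `main` method prints `1`, then `true` (second reader succeeds while a read lock is held), then `false` (writer is excluded until all readers release).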
Another way to achieve this (without any locking on the read path) is the copy-on-write pattern. It works well when you do not write often. The idea is to copy the map, mutate the copy, and then atomically replace the field itself. It may look like the following:
private volatile Map<String, HashMap> myStuff_ = new HashMap<String, HashMap>();

public HashMap getter(String a) {
    return myStuff_.get(a);
}

public synchronized void setter() {
    // create a copy of the original
    Map<String, HashMap> copy = new HashMap<String, HashMap>(myStuff_);
    // populate the copy
    // replace the original with the copy (a single volatile write)
    myStuff_ = copy;
}
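A runnable sketch of the same idea (the class name, key, and value types are made up here for illustration):

```java
import java.util.HashMap;
import java.util.Map;

// Copy-on-write holder: readers see a consistent snapshot via a volatile field,
// writers serialize on the object monitor and publish a fresh copy.
public class CowCache {
    private volatile Map<String, String> map = new HashMap<>();

    public String get(String key) {
        return map.get(key); // lock-free: just a volatile read
    }

    public synchronized void put(String key, String value) {
        Map<String, String> copy = new HashMap<>(map); // copy the current map
        copy.put(key, value);                          // mutate the copy only
        map = copy;                                    // publish atomically
    }

    public static void main(String[] args) {
        CowCache cache = new CowCache();
        cache.put("a", "1");
        System.out.println(cache.get("a")); // prints 1
        System.out.println(cache.get("b")); // prints null
    }
}
```

Readers never observe a half-built map: they either see the old snapshot or the fully populated new one, because the `volatile` write to `map` happens only after the copy is complete.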
With this, the readers are fully concurrent, and the only penalty they pay is a volatile read on myStuff_ (which costs very little). The writers are synchronized to ensure mutual exclusion.
Yes — if the write lock is held by one thread, other threads calling the getter will block, since they cannot acquire the read lock until the writer releases it. So you are fine here. For more details, read the JavaDoc of ReentrantReadWriteLock - http://download.oracle.com/javase/6/docs/api/java/util/concurrent/locks/ReentrantReadWriteLock.html
You're kicking this thing off at the start of the day, updating it 0-2 times a day, and reading it hundreds of times per day. Even if a read took a full second (a looonnnng time) in an 8-hour day (28,800 seconds), you'd still have a very low read load. Looking at the docs for ReentrantReadWriteLock, you can tweak the mode so that it is "fair", meaning the thread that's been waiting the longest gets the lock next. So if you set it to be fair, I don't think your write thread(s) are going to be starved.
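Fairness is chosen at construction time via the boolean constructor argument; a quick sketch:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class FairLockDemo {
    public static void main(String[] args) {
        // The no-arg constructor gives a non-fair lock; passing true requests
        // fair ordering, where the longest-waiting thread acquires next.
        ReentrantReadWriteLock unfair = new ReentrantReadWriteLock();
        ReentrantReadWriteLock fair = new ReentrantReadWriteLock(true);
        System.out.println(unfair.isFair()); // prints false
        System.out.println(fair.isFair());   // prints true
    }
}
```

Note that fair mode trades some throughput for the ordering guarantee, which is usually a fine trade at this read/write volume.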
References
ReentrantReadWriteLock