locking on a cache key

I've read several questions similar to this, but none of the answers provide ideas of how to clean up memory while still maintaining lock integrity. I'm estimating the number of key-value pairs at a given time to be in the tens of thousands, but the number of key-value pairs over the lifespan of the data structure is virtually infinite (realistically it probably wouldn't be more than a billion, but I'm coding to the worst case).

I have an interface:

public interface KeyLock<K extends Comparable<? super K>> {

  public void lock(K key);

  public void unlock(K key);

}

with a default implementation:

public class DefaultKeyLock<K extends Comparable<? super K>> implements KeyLock<K> {

  private final ConcurrentMap<K, Mutex> lockMap;

  public DefaultKeyLock() {
    lockMap = new ConcurrentSkipListMap<K, Mutex>();
  }

  @Override
  public void lock(K key) {
    // register a fresh mutex for this key, or fall back to the one another thread registered first
    Mutex mutex = new Mutex();
    Mutex existingMutex = lockMap.putIfAbsent(key, mutex);
    if (existingMutex != null) {
      mutex = existingMutex;
    }
    mutex.lock();
  }

  @Override
  public void unlock(K key) {
    Mutex mutex = lockMap.get(key);
    mutex.unlock();
  }

}
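
For reference, a minimal usage sketch (the key string "user:42" is just an illustrative placeholder): lock() and unlock() are paired in a try/finally so the mutex is released even if the critical section throws:

KeyLock<String> keyLock = new DefaultKeyLock<String>();

keyLock.lock("user:42");
try {
  // work on the cache entry for "user:42"; other threads calling
  // lock("user:42") will block here until unlock() runs
} finally {
  keyLock.unlock("user:42");
}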

This works nicely, but the map never gets cleaned up. What I have so far for an implementation that cleans up after itself is:

public class CleanKeyLock<K extends Comparable<? super K>> implements KeyLock<K> {

  private final ConcurrentMap<K, LockWrapper> lockMap;

  public CleanKeyLock() {
    lockMap = new ConcurrentSkipListMap<K, LockWrapper>();
  }

  @Override
  public void lock(K key) {
    LockWrapper wrapper = new LockWrapper(key);
    wrapper.addReference();
    LockWrapper existingWrapper = lockMap.putIfAbsent(key, wrapper);
    if (existingWrapper != null) {
      wrapper = existingWrapper;
      wrapper.addReference();
    }
    wrapper.lock();
  }

  @Override
  public void unlock(K key) {
    LockWrapper wrapper = lockMap.get(key);
    if (wrapper != null) {
      wrapper.unlock();
      wrapper.removeReference();
    }
  }

  private class LockWrapper {

    private final K key;

    private final ReentrantLock lock;

    private int referenceCount;

    public LockWrapper(K key) {
      this.key = key;
      lock = new ReentrantLock();
      referenceCount = 0;
    }

    public synchronized void addReference() {
      lockMap.put(key, this);
      referenceCount++;
    }

    public synchronized void removeReference() {
      referenceCount--;
      if (referenceCount == 0) {
        lockMap.remove(key);
      }
    }

    public void lock() {
      lock.lock();
    }

    public void unlock() {
      lock.unlock();
    }
  }

}

This works for two threads accessing a single key lock, but once a third thread is introduced the lock integrity is no longer guaranteed. Any ideas?


I don't buy that this works for two threads. Consider this:

  • (Thread A) calls lock(x), now holds lock x
  • thread switch
  • (Thread B) calls lock(x), putIfAbsent() returns the current wrapper for x
  • thread switch
  • (Thread A) calls unlock(x), the wrapper reference count hits 0 and it gets removed from the map
  • (Thread A) calls lock(x), putIfAbsent() inserts a new wrapper for x
  • (Thread A) locks on the new wrapper
  • thread switch
  • (Thread B) locks on the old wrapper

How about:

  • LockWrapper starts with a reference count of 1
  • addReference() returns false if the reference count is 0
  • in lock(), if existingWrapper != null, we call addReference() on it. If this returns false, it has already been removed from the map, so we loop back and try again from the putIfAbsent()
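
A minimal sketch of that approach, patched into the CleanKeyLock above, might look like the following. Only lock() and the changed LockWrapper members are shown; unlock() and the rest of the class stay as posted, and addReference() no longer re-puts the wrapper into the map, it only tries to take a reference:

  @Override
  public void lock(K key) {
    while (true) {
      LockWrapper wrapper = new LockWrapper(key); // starts with referenceCount == 1
      LockWrapper existingWrapper = lockMap.putIfAbsent(key, wrapper);
      if (existingWrapper == null) {
        wrapper.lock();
        return;
      }
      // another thread's wrapper is already mapped; try to take a reference on it
      if (existingWrapper.addReference()) {
        existingWrapper.lock();
        return;
      }
      // the existing wrapper already hit zero and is on its way out of the map;
      // retry from putIfAbsent()
    }
  }

  // inside LockWrapper:

    public LockWrapper(K key) {
      this.key = key;
      lock = new ReentrantLock();
      referenceCount = 1; // the creating thread holds the first reference
    }

    public synchronized boolean addReference() {
      if (referenceCount == 0) {
        return false; // already removed (or being removed) from the map
      }
      referenceCount++;
      return true;
    }

    public synchronized void removeReference() {
      referenceCount--;
      if (referenceCount == 0) {
        lockMap.remove(key, this); // remove the mapping only if it still points at this wrapper
      }
    }

With this, the interleaving above can no longer hand Thread B a wrapper that has already been dropped from the map: the failed addReference() forces it back to putIfAbsent(), where it either inserts its own wrapper or latches onto the live one.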


I would use a fixed array by default for a striped lock, since you can size it to the concurrency level that you expect. While there may be hash collisions, a good spreader will resolve that. If the locks are used for short critical sections, then you may be creating contention in the ConcurrentHashMap that defeats the optimization.

You're welcome to adapt my implementation, though I only implemented the dynamic version for fun; it didn't seem useful in practice, so only the fixed version was used in production. You can use the hash() function from ConcurrentHashMap to provide good spreading.

ReentrantStripedLock in http://code.google.com/p/concurrentlinkedhashmap/wiki/IndexableCache
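
For illustration, a minimal sketch of the fixed-array flavour behind the same KeyLock interface might look like this. The class name, the power-of-two sizing, and the choice of spreader (the supplemental hash from java.util.HashMap in JDK 6/7, standing in for ConcurrentHashMap's hash()) are assumptions of the sketch, not the linked implementation:

import java.util.concurrent.locks.ReentrantLock;

public class StripedKeyLock<K extends Comparable<? super K>> implements KeyLock<K> {

  private final ReentrantLock[] locks;

  public StripedKeyLock(int concurrencyLevel) {
    // round up to a power of two so a stripe can be picked with a simple mask
    int size = 1;
    while (size < concurrencyLevel) {
      size <<= 1;
    }
    locks = new ReentrantLock[size];
    for (int i = 0; i < size; i++) {
      locks[i] = new ReentrantLock();
    }
  }

  @Override
  public void lock(K key) {
    locks[indexFor(key)].lock();
  }

  @Override
  public void unlock(K key) {
    locks[indexFor(key)].unlock();
  }

  private int indexFor(K key) {
    return spread(key.hashCode()) & (locks.length - 1);
  }

  // supplemental hash from java.util.HashMap (JDK 6/7), used here as the spreader
  private static int spread(int h) {
    h ^= (h >>> 20) ^ (h >>> 12);
    return h ^ (h >>> 7) ^ (h >>> 4);
  }
}

The trade-off is that unrelated keys can share a stripe and therefore block each other, but the memory is fixed up front and no cleanup is ever needed, which is exactly what the reference-counted map is trying to achieve.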
