Design options for a C++ thread-safe object cache
I'm in the process of writing a template library for data caching in C++ where concurrent reads and concurrent writes are allowed, but not for the same key. The pattern can be explained with the following setup:
- A mutex for the cache write.
- A mutex for each key in the cache.
This way, if a thread requests a key that is not present in the cache, it can start a locked calculation for that unique key. In the meantime, other threads can retrieve or calculate data for other keys, but a thread that tries to access the first key blocks and waits.
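To make the pattern concrete, here is a rough sketch of what I have in mind (C++11 names for brevity; compute_value is a placeholder, and the eviction and reference counting parts are left out):

#include <map>
#include <mutex>

template <typename Key, typename Value>
class KeyedCache {
    std::mutex map_mutex;                  // protects 'values' and 'key_mutexes'
    std::map<Key, Value> values;
    std::map<Key, std::mutex> key_mutexes; // one mutex per key

public:
    template <typename Compute>
    Value get(const Key& key, Compute compute_value) {
        std::mutex* km;
        {
            std::lock_guard<std::mutex> g(map_mutex);
            auto it = values.find(key);
            if (it != values.end())
                return it->second;         // value already cached
            km = &key_mutexes[key];        // std::map nodes are stable, so this
        }                                  // pointer survives later insertions
        std::lock_guard<std::mutex> k(*km); // only one thread computes this key
        {
            std::lock_guard<std::mutex> g(map_mutex);
            auto it = values.find(key);
            if (it != values.end())
                return it->second;         // another thread just computed it
        }
        Value v = compute_value();          // runs without blocking other keys
        std::lock_guard<std::mutex> g(map_mutex);
        values.emplace(key, v);
        return v;
    }
};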
The main constraints are:
- Never calculate the value for the same key in two threads at the same time.
- Calculating the value for 2 different keys can be done concurrently.
- Data retrieval must not block other threads from retrieving data for other keys.
My other constraints, which are already resolved, are:
- a fixed maximum cache size (known at compile time) with MRU-based (most recently used) eviction.
- retrieval by reference (which implies mutex-protected shared reference counting).
I'm not sure that using one mutex per key is the right way to implement this, but I haven't found any substantially different approach.
Do you know of other patterns for implementing this, or do you find this a suitable solution? I don't like the idea of having about 100 mutexes (the cache size is around 100 keys).
You want to lock and you want to wait. Thus there shall be "conditions" somewhere (such as pthread_cond_t on Unix-like systems).
I suggest the following:
- There is a global mutex which is used only to add or remove keys in the map.
- The map maps keys to values, where values are wrappers. Each wrapper contains a condition and potentially a value. The condition is signaled when the value is set.
When a thread wishes to obtain a value from the cache, it first acquires the global mutex. It then looks in the map:
- If there is a wrapper for that key, and that wrapper contains a value, then the thread has its value and may release the global mutex.
- If there is a wrapper for that key but no value yet, then this means that some other thread is currently busy computing the value. The thread then blocks on the condition, to be awakened by the other thread when it has finished.
- If there is no wrapper, then the thread registers a new wrapper in the map, and then proceeds to compute the value. When the value is computed, it sets the value and signals the condition.
In pseudo code this looks like this:
mutex_t global_mutex
hashmap_t map

lock(global_mutex)
w = map.get(key)
if (w == NULL) {
    w = new Wrapper
    map.put(key, w)
    unlock(global_mutex)
    v = compute_value()
    lock(global_mutex)
    w.set(v)
    broadcast(w.cond)
    unlock(global_mutex)
    return v
} else {
    v = w.get()
    while (v == NULL) {
        unlock-and-wait(global_mutex, w.cond)
        v = w.get()
    }
    unlock(global_mutex)
    return v
}
In pthreads terms, lock is pthread_mutex_lock(), unlock is pthread_mutex_unlock(), unlock-and-wait is pthread_cond_wait() and broadcast is pthread_cond_broadcast() (a broadcast rather than pthread_cond_signal(), because several threads may be waiting on the same wrapper and all of them must be woken). unlock-and-wait atomically releases the mutex and marks the thread as waiting on the condition; when the thread is awakened, the mutex is automatically reacquired.
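For reference, here is a minimal sketch of the same design in standard C++ rather than raw pthreads (assuming C++14 for std::make_unique; Key, Value and compute_value are placeholders, and error handling for a failed computation is omitted):

#include <condition_variable>
#include <map>
#include <memory>
#include <mutex>

template <typename Key, typename Value>
class Cache {
    struct Wrapper {
        std::condition_variable cond;  // signaled when 'value' is set
        std::unique_ptr<Value> value;  // empty while the value is being computed
    };
    std::mutex global_mutex;           // guards 'map' and the wrappers
    std::map<Key, std::unique_ptr<Wrapper>> map;

public:
    template <typename Compute>
    Value get(const Key& key, Compute compute_value) {
        std::unique_lock<std::mutex> lock(global_mutex);
        auto it = map.find(key);
        if (it == map.end()) {
            // No wrapper yet: register one, then compute the value ourselves.
            Wrapper* w = (map[key] = std::make_unique<Wrapper>()).get();
            lock.unlock();
            Value v = compute_value();   // computed without holding the mutex
            lock.lock();
            w->value = std::make_unique<Value>(v);
            w->cond.notify_all();        // wake every thread waiting on this key
            return v;
        }
        // Wrapper exists: wait (if needed) until the value is published.
        Wrapper* w = it->second.get();
        w->cond.wait(lock, [w] { return w->value != nullptr; });
        return *w->value;
    }
};

The predicate form of wait() also handles spurious wakeups and the case where the value is already present, mirroring the while loop in the pseudocode above.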
This means that each wrapper will have to contain a condition. This embodies your various requirements:
- No thread holds a mutex for a long period of time, whether it is blocking or computing a value.
- When a value is to be computed, only one thread does it, the other threads which wish to access the value just wait for it to be available.
Note that when a thread wishes to get a value and finds out that some other thread is already busy computing it, the thread ends up locking the global mutex twice: once in the beginning, and once when the value is available. A more complex solution, with one mutex per wrapper, may avoid the second locking, but unless contention is very high, I doubt that it is worth the effort.
About having many mutexes: mutexes are cheap. A mutex is basically an int; it costs nothing more than the four-or-so bytes of RAM used to store it. Beware of Windows terminology: in Win32, what I call a mutex here is a "critical section"; what Win32 creates when CreateMutex() is called is something quite different, which is accessible from several distinct processes, and is much more expensive since it involves round trips to the kernel. Note that in Java, every single object instance contains a mutex, and Java developers do not seem to be overly grumpy on that subject.
You could use a mutex pool instead of allocating one mutex per resource. As reads are requested, first check the slot in question. If it already has a mutex tagged to it, block on that mutex. If not, take a mutex out of the pool, assign it to that slot, and lock it. Once the last user of the mutex releases it, clear the slot and return the mutex to the pool.
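A hedged sketch of what that bookkeeping could look like (the int slot type, the fixed pool size, and the reference counting are all assumptions made for illustration; pool exhaustion and destruction are not handled):

#include <cassert>
#include <mutex>
#include <unordered_map>
#include <vector>

class MutexPool {
    struct Entry {
        std::mutex m;
        int users = 0;                  // threads holding or waiting on 'm'
    };
    std::mutex global;                  // guards 'slots' and 'free_list'
    std::vector<Entry*> free_list;      // mutexes not tagged to any slot
    std::unordered_map<int, Entry*> slots;

public:
    explicit MutexPool(std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            free_list.push_back(new Entry);
    }

    void acquire(int slot) {
        Entry* e;
        {
            std::lock_guard<std::mutex> g(global);
            auto it = slots.find(slot);
            if (it != slots.end()) {
                e = it->second;          // slot already tagged: reuse its mutex
            } else {
                assert(!free_list.empty()); // pool exhaustion not handled here
                e = free_list.back();
                free_list.pop_back();
                slots[slot] = e;         // tag a pooled mutex to this slot
            }
            ++e->users;                  // keeps 'e' assigned while we wait
        }
        e->m.lock();                     // block outside the global mutex
    }

    void release(int slot) {
        std::lock_guard<std::mutex> g(global);
        Entry* e = slots.at(slot);
        e->m.unlock();
        if (--e->users == 0) {           // last user: recycle the mutex
            slots.erase(slot);
            free_list.push_back(e);
        }
    }
};

The user count is what makes recycling safe: a mutex is only returned to the pool once no thread holds it or is waiting on it.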
A much simpler possibility would be to use a single reader/writer lock on the entire cache. Given that you know there is a maximum number of entries (and that it is relatively small), adding new keys to the cache sounds like a "rare" event. The general logic would be:
acquire read lock
search for key
if found
    use the key
else
    release read lock
    acquire write lock
    if key still not present   // another thread may have added it in between
        add key
    release write lock
    // acquire the read lock again and use the key (probably encapsulate in a method)
endif
Not knowing more about the usage patterns, I can't say for sure if this is a good solution. It is very simple, though, and if the usage is predominantly reads, then it is very inexpensive in terms of locking.
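A minimal sketch of that logic, assuming C++17's std::shared_mutex (pthread_rwlock_t would play the same role in the pthreads setting above; Key, Value and compute_value are placeholders):

#include <map>
#include <mutex>
#include <shared_mutex>

template <typename Key, typename Value>
class RWCache {
    std::shared_mutex rw;               // many concurrent readers, one writer
    std::map<Key, Value> map;

public:
    template <typename Compute>
    Value get(const Key& key, Compute compute_value) {
        {
            std::shared_lock<std::shared_mutex> read(rw);
            auto it = map.find(key);
            if (it != map.end())
                return it->second;      // common case: read without exclusion
        }
        std::unique_lock<std::shared_mutex> write(rw);
        auto it = map.find(key);        // re-check: another thread may have added it
        if (it == map.end())
            it = map.emplace(key, compute_value()).first;
        return it->second;
    }
};

Note that in this form the value is computed while the write lock is held, so computations for two different keys are serialized; that is the price of the simplicity, and it only pays off if reads dominate.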