Concurrency strategy configuration for JBoss TreeCache as 2nd level Hibernate cache
I am using JBoss EAP 4.3.
I'm currently looking into the different options for concurrency strategy when using the built-in JBoss TreeCache as a second-level cache for Hibernate. I have set it up and verified that the cache is working by looking at the logs, but I am not sure which concurrency strategy is really used and how it is intended to work.
For each entity, I can set one of the following "usage" values in the @Cache annotation: NONE, READ_ONLY, NONSTRICT_READ_WRITE, READ_WRITE, or TRANSACTIONAL.
On the other hand, in my JBoss TreeCache configuration file I can set the IsolationLevel for the entire cache to one of the following: NONE, READ_UNCOMMITTED, READ_COMMITTED, REPEATABLE_READ, or SERIALIZABLE (or just use OPTIMISTIC locking instead).
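For context, here is a minimal sketch of how IsolationLevel is set in a TreeCache MBean service file (the MBean name and the other attribute values here are illustrative, not taken from the question; verify against your EAP 4.3 deployment):

```xml
<mbean code="org.jboss.cache.TreeCache"
       name="jboss.cache:service=EJB3EntityTreeCache">
  <!-- Required so TreeCache participates in JTA transactions -->
  <attribute name="TransactionManagerLookupClass">
    org.jboss.cache.JBossTransactionManagerLookup
  </attribute>
  <!-- One of NONE, READ_UNCOMMITTED, READ_COMMITTED, REPEATABLE_READ,
       SERIALIZABLE; ignored if NodeLockingScheme is OPTIMISTIC -->
  <attribute name="IsolationLevel">REPEATABLE_READ</attribute>
  <attribute name="CacheMode">LOCAL</attribute>
</mbean>
```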
When looking into the configuration options one at a time, the documentation is quite clear, but I wonder what happens when you combine the different options.
For example, if you set @Cache(usage = CacheConcurrencyStrategy.TRANSACTIONAL) for an entity but configure NONE as the IsolationLevel for the JBoss TreeCache, what happens?
I also believe that JBoss TreeCache only supports the NONE, READ_ONLY and TRANSACTIONAL usage values, but which IsolationLevel are you allowed to combine them with? And what happens if you use, for example, NONSTRICT_READ_WRITE?
Altogether there should be something like 5x6 different combinations here, but not all of them make sense.
Can anyone help me sort this out?
Isolation level is a tricky issue.
For example, if you set @Cache(usage = CacheConcurrencyStrategy.TRANSACTIONAL) for an entity but configure NONE as the IsolationLevel for the JBoss TreeCache, what happens?
Mostly, hard-to-find bugs in production... You should understand that by using a read-write cache you essentially mire yourself in distributed transactions with all their 'niceties'.
OK, about combinations: the read-only cache setting in Hibernate should be used when your objects do not change. For example, for a country dictionary. Cache concurrency level NONE or READ_ONLY should be used with it.
Non-strict read-write should be used when your cached objects do change, but rarely, and the chances of race conditions are small. For example, for a timezone dictionary: timezones might appear or disappear occasionally, but that happens maybe a couple of times a year. Again, cache concurrency level NONE or READ_ONLY should be used with it.
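To make the two combinations above concrete, here is a sketch of the per-entity annotation (the entity classes are made-up examples, not from the question, and the snippet is a mapping fragment only, not compilable without Hibernate on the classpath):

```java
import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

// Never changes at runtime -> READ_ONLY usage
@Entity
@Cache(usage = CacheConcurrencyStrategy.READ_ONLY)
class Country {
    @Id String isoCode;
    String name;
}

// Changes rarely, a small race window is acceptable -> NONSTRICT_READ_WRITE
@Entity
@Cache(usage = CacheConcurrencyStrategy.NONSTRICT_READ_WRITE)
class Timezone {
    @Id String id;
    int utcOffsetMinutes;
}
```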
Now, to more interesting combinations.
TRANSACTIONAL caches in Hibernate are NOT safe on their own: Hibernate assumes that cache updates are transactional but does nothing to ensure it. So you MUST use a full-blown external XA (distributed transactions) coordinator, and you really, really, really do not want that unless you know what you're doing. Most likely you'll have to use a full EJB3 container for XA-manager support, though it's possible to use an external transaction manager like http://www.atomikos.com/ with plain servlets + Spring. Obviously, you need to use TRANSACTIONAL caches with it.
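A hedged sketch of the Hibernate-side settings that usually accompany TRANSACTIONAL usage on JBoss (class names are from the Hibernate 3.2 era that EAP 4.3 ships; verify against your version):

```properties
# Use JBoss TreeCache as the second-level cache provider
hibernate.cache.provider_class=org.hibernate.cache.TreeCacheProvider
hibernate.cache.use_second_level_cache=true
# Hand Hibernate the container's JTA TransactionManager so cache and
# database updates enlist in the same transaction
hibernate.transaction.manager_lookup_class=org.hibernate.transaction.JBossTransactionManagerLookup
hibernate.transaction.factory_class=org.hibernate.transaction.JTATransactionFactory
```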
READ_WRITE is an interesting combination. In this mode Hibernate itself works as a lightweight XA coordinator, so it doesn't require a full-blown external XA manager. A short description of how it works:
- In this mode Hibernate manages the transactions itself. All DB actions must be inside a transaction; autocommit mode won't work.
- During flush() (which might happen multiple times during the transaction's lifetime, but usually happens just before the commit) Hibernate goes through the session and searches for updated/inserted/deleted objects. These objects are first saved to the database, then locked and updated in the cache, so concurrent transactions can neither update nor read them.
- If the transaction is then rolled back (explicitly or because of some error), the locked objects are simply released and evicted from the cache, so other transactions can read/update them.
- If the transaction is committed successfully, the locked objects are simply released and other threads can read/write them.
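The lock/release/evict life cycle above can be sketched with a toy map-based cache. This is an illustration of the protocol only, not Hibernate's actual ReadWriteCache implementation; all names are made up:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the READ_WRITE soft-lock protocol described above.
public class ReadWriteCacheSketch {
    // Marker meaning "soft-locked by an in-flight transaction".
    private static final Object LOCKED = new Object();
    private final Map<String, Object> cache = new HashMap<>();

    public void put(String key, Object value) {
        cache.put(key, value);
    }

    // Soft-locked entries read as misses, so concurrent transactions
    // fall through to the database instead of seeing in-flight state.
    public Object get(String key) {
        Object v = cache.get(key);
        return v == LOCKED ? null : v;
    }

    // flush(): the object is written to the DB (not modelled here)
    // and its cache entry is soft-locked.
    public void lockOnFlush(String key) {
        cache.put(key, LOCKED);
    }

    // Commit: the lock is released with the new state, so other
    // threads can read/write the entry again.
    public void releaseOnCommit(String key, Object newValue) {
        cache.put(key, newValue);
    }

    // Rollback: the locked entry is simply evicted.
    public void evictOnRollback(String key) {
        cache.remove(key);
    }
}
```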
There are a couple of fine points here:
Possible repeatable-read violation. Imagine that we have transaction A (tA) and transaction B (tB) which start simultaneously and both load object X; tA then modifies this object and commits. In many databases that use snapshot isolation (Oracle, PostgreSQL, Firebird), if tB requests object X again it should receive the same object state as at the beginning of the transaction. However, the READ_WRITE cache might violate this condition - there's no snapshot isolation there. Hibernate tries to work around this by using timestamps on cached objects, but on OSes with poor timer resolution (15.6 ms on Windows) it is guaranteed to let some races slip through.
Possible stale object versions - it IS possible to get stale object versions if you're unlucky enough to work on Windows and have several transactions commit with the same timestamp.
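The timestamp problem can be illustrated with a toy coarse clock. The class, method names, and numbers below are made up for illustration; Hibernate's real check lives in its own timestamping code:

```java
// Toy model of why a coarse clock breaks timestamp-ordered caching.
public class TimestampRaceSketch {
    // Roughly the default Windows timer granularity mentioned above.
    public static final long RESOLUTION_MS = 15;

    // A clock that only ticks every RESOLUTION_MS milliseconds.
    public static long coarse(long trueMillis) {
        return (trueMillis / RESOLUTION_MS) * RESOLUTION_MS;
    }

    // Given two versions of the same object stamped with commit times,
    // keep the one the timestamps say is newer; on a tie the first one
    // wins, which is how a stale version can survive in the cache.
    public static String pickNewer(long ts1, String v1, long ts2, String v2) {
        return ts2 > ts1 ? v2 : v1;
    }
}
```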