
Java caching in a distributed environment

I am supposed to create a simple replicated cache in Java for internal purposes, to be used in a distributed environment. I have seen that Oracle has implemented a Replicated Cache Service.

The problem I am facing is that while doing an update or remove, I acquire a lock on the other caches until the cache gets updated and notifies the others of the change. This eventually leads to a deadlock situation while removing. Is there any strategy I should follow while updating or removing from the caches?

  • Can I implement a replicated cache without having a primary cache?


You can check out GigaSpaces XAP, which is a fully transactional, distributed in-memory data grid that supports, among many other things, fully replicated topologies.

Disclaimer: I work for GigaSpaces.

Eitan


How about the Java Caching System (JCS): http://jakarta.apache.org/jcs/

I've tested the Lateral TCP Cache (with the UDP discovery service): http://jakarta.apache.org/jcs/LateralTCPProperties.html


Ehcache uses a different architecture, with peers synchronizing via multicast. Check the documentation.
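As a rough illustration, here is a minimal sketch of using Ehcache (2.x API) from Java. The replication itself (peers, multicast discovery) is configured in ehcache.xml, not in code; the cache region name below is a hypothetical example.

```java
import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

public class EhcacheExample {
    public static void main(String[] args) {
        // Loads ehcache.xml from the classpath; the replication peers and
        // multicast discovery settings live in that file, not in Java code.
        CacheManager manager = CacheManager.create();

        // Hypothetical region name; getCache returns null if the region
        // is not defined in ehcache.xml.
        Cache cache = manager.getCache("userCache");

        cache.put(new Element("user:42", "some value"));
        Element hit = cache.get("user:42");
        System.out.println(hit != null ? hit.getObjectValue() : "miss");

        manager.shutdown();
    }
}
```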


JCS should do the job for you, as it offers good flexibility in terms of design configurations. Another attractive feature, when it comes to replication, is lazy loading of data: only the required data is replicated, triggered by a get request for that object. This reduces the memory footprint.

Please look up the Remote Cache Server in the Apache JCS documentation.
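A minimal sketch of JCS usage, assuming the region and its remote/lateral auxiliaries are already defined in cache.ccf on the classpath; the region name and key are hypothetical.

```java
import org.apache.jcs.JCS;
import org.apache.jcs.access.exception.CacheException;

public class JcsExample {
    public static void main(String[] args) throws CacheException {
        // "userRegion" and its auxiliaries (remote or lateral TCP cache)
        // are configured in cache.ccf; this code only uses the region.
        JCS cache = JCS.getInstance("userRegion");

        cache.put("user:42", "some value");

        // With a remote cache auxiliary configured, a local miss can be
        // satisfied from the remote cache server (the lazy loading
        // mentioned above).
        Object value = cache.get("user:42");
        System.out.println(value);
    }
}
```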


I would recommend using Memcached. It stores data out of process (on dedicated cache server(s)). The cache server is written in C/C++ and at runtime achieves good performance with a low CPU hit and good memory utilization. See: http://memcached.org

There is a pretty good Java client for connecting to the server. See: http://code.google.com/p/spymemcached
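A minimal sketch of using the spymemcached client; the server addresses, key, and expiration below are hypothetical.

```java
import java.io.IOException;
import java.util.concurrent.TimeUnit;

import net.spy.memcached.AddrUtil;
import net.spy.memcached.MemcachedClient;

public class SpyMemcachedExample {
    public static void main(String[] args) throws IOException {
        // Hypothetical addresses of two Memcached servers.
        MemcachedClient client = new MemcachedClient(
                AddrUtil.getAddresses("cache1.example.com:11211 cache2.example.com:11211"));

        // set(key, expiration in seconds, value) is asynchronous and returns a Future.
        client.set("user:42", 3600, "some value");

        // get(key) blocks until the value is returned (null on a miss).
        Object value = client.get("user:42");
        System.out.println(value);

        client.shutdown(5, TimeUnit.SECONDS);
    }
}
```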

Regarding multiple cache servers and the mechanism for selecting which server to use, here is a portion of the article linked below:

In its default configuration, the Memcached client uses very simple logic to select the server for a get or set operation. When you make a get() or set() call, the client takes the cache key and calls its hashCode() method to get an integer such as 11. It then takes that number and divides it by the number of available Memcached servers, say two. It then takes the value of the remainder, which is 1 in this case. The cache entry will go to Memcached server 1. This simple algorithm ensures that the Memcached client on each of your application servers always chooses the same server for a given cache key.

And the articles are here:

Use Memcached for Java enterprise performance, Part 1: Architecture and setup http://www.javaworld.com/javaworld/jw-04-2012/120418-memcached-for-java-enterprise-performance.html

Use Memcached for Java enterprise performance, Part 2: Database-driven web apps http://www.javaworld.com/javaworld/jw-05-2012/120515-memcached-for-java-enterprise-performance-2.html
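A minimal sketch of the default selection logic described in the quoted passage above (hash the key, take the remainder modulo the server count). This is only an illustration, not the client library's actual internal code; the server addresses and keys are hypothetical.

```java
import java.util.Arrays;
import java.util.List;

public class ServerSelection {
    // Picks a server for a cache key using the simple scheme described above:
    // hash the key and take the remainder modulo the number of servers.
    static String selectServer(String key, List<String> servers) {
        int hash = key.hashCode();
        int index = Math.abs(hash % servers.size()); // guard against negative hash codes
        return servers.get(index);
    }

    public static void main(String[] args) {
        List<String> servers = Arrays.asList("cache1:11211", "cache2:11211");
        // Every application server computes the same index for the same key.
        System.out.println(selectServer("user:42", servers));
        System.out.println(selectServer("session:7", servers));
    }
}
```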
