EhCache BigMemory vs Diskstore on RAM disk
How does the performance of BigMemory in Enterprise Ehcache compare to the Diskstore of Ehcache Community Edition when the Diskstore is placed on a RAM disk?
BigMemory permits caches to use an additional memory store outside the Java object heap, thereby avoiding the GC overhead we would incur if all of that RAM were kept on the heap. Objects are serialized when put into this off-heap store and deserialized when retrieved from it.
Similarly, the Diskstore is also a second-level cache that stores serialized objects on disk.
The link above mentions that the off-heap store is two orders of magnitude faster than the Diskstore. What happens if I configure the Diskstore to store its data on a RAM disk? Will BigMemory still have a noticeable performance benefit?
Are there other optimizations done by BigMemory? Has anyone come across experiments that compare the two approaches?
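For concreteness, here is a minimal sketch of the Diskstore-on-RAM-disk setup the question describes, assuming an Ehcache 2.x community-edition configuration and a tmpfs mount at /mnt/ramdisk (the path and cache settings are assumptions for illustration, not taken from the question):

```xml
<!-- Hypothetical ehcache.xml sketch (Ehcache 2.x community edition):
     point the Diskstore at a RAM disk. /mnt/ramdisk is an assumed tmpfs
     mount point; any mounted RAM disk path would do. -->
<ehcache>
  <!-- Every disk-backed cache in this CacheManager writes under this path. -->
  <diskStore path="/mnt/ramdisk/ehcache"/>

  <!-- Values that overflow the 10,000 on-heap entries are serialized and
       written to the diskStore path above; keys and mapping metadata stay
       on the heap, which is the first limitation raised in the forum
       answer quoted below. -->
  <cache name="ramDiskCache"
         maxElementsInMemory="10000"
         overflowToDisk="true"
         maxElementsOnDisk="1000000"
         eternal="false"
         timeToLiveSeconds="600"
         diskPersistent="false"/>
</ehcache>
```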
The following is an excerpt of the answer given to this question on the Terracotta forum:
"The three big problems I'd expect you to face with open source (community edition) Ehcache disk stores are: Firstly in open source only the values are stored on disk - the keys and the meta data to map keys to values is still stored in heap (which is not true for BigMemory). This means the heap would still be the limiting factor on cache size. Secondly the open source disk store is designed to be backed by a single (conventionally spinning disk - although some people do use SSD drives now), this means the backend is less concurrent (especially with regard to writing) than Enterprise BigMemory since the bottleneck is expected to be at the hardware level. Thirdly the serialization performed by the open source disk store is less space efficient so serialized values have much larger overheads."