How can you update a fragment cache while allowing reads from the cache during the update?

Is there a way to refresh a fragment cache in a way that allows reads from the cache while the update is taking place?

I'm caching a part of an html.erb view in Rails with the cache do .. end block in erb.

I'm expiring the same cache in the controller, with a call to expire_fragment(:controller => 'controllername')

I'm using memcached as the fragment cache store.
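
For reference, the setup looks roughly like this (the controller name, partial, and Rails 2.x-style config line are placeholders, not taken from the post):

    <%# app/views/controllername/index.html.erb %>
    <% cache do %>
      <%= render :partial => 'expensive_fragment' %>
    <% end %>

    # app/controllers/controllername_controller.rb
    def update
      # ... change the data behind the fragment ...
      expire_fragment(:controller => 'controllername')  # deletes the entry from memcached immediately
    end

    # config/environment.rb -- memcached as the cache store
    config.cache_store = :mem_cache_store, 'localhost:11211'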

I could be wrong, but it looks like the default behavior is that the moment you call expire_fragment the fragment is deleted from the cache, so a request for the same fragment a split second later will miss the cache.

What I would really like is for reads from the cache to keep being served right up until the new fragment is computed and saved in the cache, at which point all subsequent requests get the new cached version.

This particular fragment is expensive to calculate; it takes about 7 seconds.


I would assume memcached has some mechanism for handling read and write conflicts.

This issue will only matter on a very high traffic site.

Edit: Found the question "Is memcached atomic?" in the memcached FAQ:

All individual commands sent to memcached are absolutely atomic. If you send a set and a get in parallel, against the same object, they will not clobber each other. They will be serialized and one will be executed before the other. Even in threaded mode, all commands are atomic. If they are not, it's a bug :)

I don't think the issue you describe will be much of a problem. Without going into the Rails internals, it is still possible in a multi-VM Rails environment that there will be a cache miss, but the worst case is simply that the fragment gets regenerated by the few requests whose timing lines up with the expiry. Unless your fragment is hugely expensive (seconds rather than milliseconds) and your traffic and infrastructure are massive (multiple Rails instances, hundreds of requests a second), I doubt it will be an issue.


You can also compute the new cached value in the background and then atomically switch readers over to it. For some applications this helps a lot. E.g., it might take a while to write a large file that is cached; with this technique the new version of the file can be written while the old one is still being served from the cache. Here's the technique:

Use a version number as part of the cache key. This can be done with either the fragment cache or memcache.

Steps:

  1. Add a 'cache_ver' to the appropriate model
  2. Include the cache_ver when computing the key for the cache. Remember that the fragment 'cache' method can use any string.
  3. To update the cache:
    1. compute next value of the cache_ver
    2. compute new value for the cache and store it in cache using new cache_ver as part of the key
    3. update the model's cache_ver; the next time a request comes in, the controller will look up the new cache_ver, use the new key, and return the new results
    4. Don't forget to flush the old cached value at some point. Perhaps nightly....

The cache_ver can be stored in memcached instead of the db if you want. A minimal sketch of the whole approach follows.
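
Here is a sketch of that approach, using Rails.cache (backed by memcached) directly so the reader and the background writer agree on the exact key. The Report model, column, and method names are illustrative, not from the answer above:

    # Assumes an integer cache_ver column on the model (step 1).
    class Report < ActiveRecord::Base
      def fragment_key(ver = cache_ver)
        "report/#{id}/fragment/v#{ver}"   # step 2: the version is part of the key
      end

      # Readers call this; they keep getting the current version even while
      # a refresh is in progress.
      def cached_fragment
        Rails.cache.fetch(fragment_key) { expensive_render }
      end

      # Step 3: run this from a background job (or wherever the refresh happens).
      def refresh_fragment!
        next_ver = cache_ver + 1                                      # 3.1 compute the next version
        Rails.cache.write(fragment_key(next_ver), expensive_render)   # 3.2 store under the new key first
        update_attribute(:cache_ver, next_ver)                        # 3.3 flip readers to the new key
        # 3.4 the old entry can be deleted here or swept by a nightly job:
        # Rails.cache.delete(fragment_key(next_ver - 1))
      end

      private

      # Stand-in for the ~7 second computation.
      def expensive_render
        # ... build and return the expensive HTML/data ...
      end
    end

In the view you would then render @report.cached_fragment, or keep the ERB cache helper and simply include cache_ver in its key string, accepting that the first request after a version bump pays the rendering cost.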
