Why would changing the filesystem type from XFS to JFS increase mmap file write performance?
I have been playing around with different filesystems, comparing their performance when using mmap.
I am surprised that switching to JFS doubled the write performance straight off. I thought writes went to the page cache, so that when a write is done the app keeps moving on quickly. Is the write actually a synchronous operation under Linux?
There was a slight increase in read performance too, but nothing as significant.
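For reference, the kind of loop being measured looks roughly like this (a minimal sketch, not the actual benchmark; the file path, 256 MiB size, and 4096-byte page stride are placeholders):

```c
/* Minimal sketch of an mmap write microbenchmark. The path, size and
 * page stride below are illustrative assumptions, not the real test. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

#define FILE_SIZE (256UL * 1024 * 1024)

int main(void)
{
    int fd = open("/tmp/mmap-test", O_RDWR | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    /* Size the file so every page of the mapping is backed. */
    if (ftruncate(fd, FILE_SIZE) < 0) { perror("ftruncate"); return 1; }

    char *map = mmap(NULL, FILE_SIZE, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    if (map == MAP_FAILED) { perror("mmap"); return 1; }

    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);

    /* Touch every page: the first write to each page takes a fault. */
    for (size_t off = 0; off < FILE_SIZE; off += 4096)
        map[off] = 1;

    clock_gettime(CLOCK_MONOTONIC, &end);
    double secs = (end.tv_sec - start.tv_sec)
                + (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("wrote %lu MiB in %.3f s\n", FILE_SIZE >> 20, secs);

    munmap(map, FILE_SIZE);
    close(fd);
    return 0;
}
```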
Writes do go straight to the page cache, but the first write to each page causes a minor fault so the page can be marked dirty. At this point the filesystem has the chance to perform some work: in the case of xfs, this involves delayed-allocation accounting and extent creation. You could try preallocating the entire file beforehand to see how/if this changes things (see the sketch below). (jfs uses the generic mmap operations, which do not supply a callback when a page is made writeable.)
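A minimal sketch of that preallocation idea, reusing the hypothetical test file from above. posix_fallocate makes the filesystem assign real blocks up front, so any allocation work happens here instead of in the write-fault path:

```c
/* Sketch: preallocate blocks before mapping, so the filesystem's
 * allocation work (e.g. XFS extent creation) is done up front rather
 * than on each first-write fault. Path and size are illustrative. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define FILE_SIZE (256UL * 1024 * 1024)

int main(void)
{
    int fd = open("/tmp/mmap-test", O_RDWR | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    /* Allocate real blocks for the whole range up front. Note that
     * posix_fallocate returns an error number instead of setting errno. */
    int err = posix_fallocate(fd, 0, FILE_SIZE);
    if (err) { fprintf(stderr, "posix_fallocate: %d\n", err); return 1; }

    char *map = mmap(NULL, FILE_SIZE, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    if (map == MAP_FAILED) { perror("mmap"); return 1; }

    /* ... run the same write loop as before and compare timings ... */

    munmap(map, FILE_SIZE);
    close(fd);
    return 0;
}
```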
Note also that once the proportion of dirty page-cache pages exceeds /proc/sys/vm/dirty_ratio, the kernel will switch from background asynchronous writeback to synchronous writeback of dirty pages by the process that dirtied them.
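A small sketch that prints the relevant thresholds on your machine; /proc/sys/vm/dirty_background_ratio (where background writeback starts) is mentioned here as the companion knob to dirty_ratio:

```c
/* Sketch: read the writeback thresholds that decide when a dirtying
 * process gets throttled into doing synchronous writeback itself. */
#include <stdio.h>

static int read_ratio(const char *path)
{
    FILE *f = fopen(path, "r");
    int v = -1;
    if (f) {
        if (fscanf(f, "%d", &v) != 1)
            v = -1;
        fclose(f);
    }
    return v;
}

int main(void)
{
    /* Background writeback kicks in at dirty_background_ratio; beyond
     * dirty_ratio the writing process must do the writeback itself. */
    printf("vm.dirty_background_ratio = %d\n",
           read_ratio("/proc/sys/vm/dirty_background_ratio"));
    printf("vm.dirty_ratio = %d\n",
           read_ratio("/proc/sys/vm/dirty_ratio"));
    return 0;
}
```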
One significant difference between XFS and JFS is that XFS supports write barriers and enables them by default, while JFS doesn't support barriers at all. Hence JFS is unsafe (but fast!) when running on disks with a write-back cache. JFS's better write performance in your tests might be an effect of this.
Perhaps you should look at benchmarks for each filesystem. Each FS is fast under certain conditions, AFAIK.
http://fsbench.netnation.com/ was one of the first hits when I googled xfs jfs benchmarks. Skimming the results suggests that xfs fares better on speed in many cases.
I suggest you run the benchmarks on the target machines to find out for yourself.
One guess: the speedup you noticed could very well fall in jfs's best-case areas.