Linux buffer cache effect on IO writes?
I'm copying large files (3 x 30G) between 2 filesystems on a Linux server (kernel 2.6.37, 16 cores, 32G RAM) and I'm getting poor performance. I suspect that the usage of the buffer cache is killing the I/O performance.
I've written a small C program to replicate the problem. The program writes 20G of zero bytes directly to a SAS disk (/dev/sda, no filesystem). It also supports the O_DIRECT flag.
When I run the program with O_DIRECT I get very steady and predictable performance:
/dev/sda: 100M current_rate=195.569950M/s avg_rate=195.569950M/s
/dev/sda: 200M current_rate=197.063362M/s avg_rate=196.313815M/s
/dev/sda: 300M current_rate=200.479145M/s avg_rate=197.682893M/s
/dev/sda: 400M current_rate=210.400076M/s avg_rate=200.715853M/s
...
/dev/sda: 20100M current_rate=206.102701M/s avg_rate=201.217154M/s
/dev/sda: 20200M current_rate=206.485716M/s avg_rate=201.242573M/s
/dev/sda: 20300M current_rate=197.683935M/s avg_rate=201.224729M/s
/dev/sda: 20400M current_rate=200.772976M/s avg_rate=201.222510M/s
Without O_DIRECT it's a different story:
/dev/sda: 100M current_rate=1323.171377M/s avg_rate=1323.171377M/s
/dev/sda: 200M current_rate=1348.181303M/s avg_rate=1335.559265M/s
/dev/sda: 300M current_rate=1351.223533M/s avg_rate=1340.740178M/s
/dev/sda: 400M current_rate=1349.564091M/s avg_rate=1342.935321M/s
...
/dev/sda: 20100M current_rate=67.203804M/s avg_rate=90.685743M/s
/dev/sda: 20200M current_rate=68.259013M/s avg_rate=90.538482M/s
/dev/sda: 20300M current_rate=64.882401M/s avg_rate=90.362464M/s
/dev/sda: 20400M current_rate=65.412577M/s avg_rate=90.193827M/s
I understand that the initial throughput is high because the data is cached and committed to disk later. However, I wouldn't expect the overall performance using the buffer cache to be 50% less than with O_DIRECT.
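As an aside, a simple way to watch how much data is still waiting to be flushed while the buffered run is in flight (standard /proc/meminfo counters, shown here just for illustration):
$ watch -n1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'   # dirty pages queued vs. currently being written back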
I also ran tests with dd and got similar results (using 10G here instead of 20G):
$ dd if=/dev/zero of=/dev/sdb bs=32K count=327680 oflag=direct
327680+0 records in
327680+0 records out
10737418240 bytes (11 GB) copied, 54.0547 s, 199 MB/s
$ dd if=/dev/zero of=/dev/sdb bs=32K count=327680
327680+0 records in
327680+0 records out
10737418240 bytes (11 GB) copied, 116.993 s, 91.8 MB/s
Are there any kernel tunings that could fix/minimize the problem?
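For reference, the writeback-related knobs I'm aware of are the vm.dirty_* sysctls; purely as an illustration (I haven't verified that tuning them actually helps here):
$ sysctl vm.dirty_ratio vm.dirty_background_ratio   # show the current thresholds
$ sysctl -w vm.dirty_background_ratio=5             # start background writeback sooner (needs root)
$ sysctl -w vm.dirty_ratio=10                       # throttle writers at a lower dirty-page level (needs root)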
The buffer cache is quite efficient, even when buffering huge amounts of data.
Running your dd test on an enterprise SSD, I can easily do over 1 GB/s of 32KB writes through the buffer cache.
I find your results interesting, but I don't think your problem is "buffer cache too slow".
My first question would be: is it slow because you're CPU-limited or disk-limited? Check whether you have one CPU core pegged at 100% during the test -- this might indicate that something is wrong at the driver or block level, like an I/O elevator that's misbehaving. If you find a core pegged, run some profiles to see what that core is up to.
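For example (illustrative commands -- any per-core monitoring and profiling tools will do):
$ mpstat -P ALL 1    # per-core utilization; look for one core stuck at 100%, often in %sys
$ perf top           # if a core is pegged, see which kernel/driver functions it is spending time in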
If you're disk-limited, you might want to investigate what the I/Os look like at the device level (use blktrace?) and see whether the resulting I/O pattern is one the device handles poorly.
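A minimal blktrace invocation, assuming the blktrace package is installed:
$ blktrace -d /dev/sda -o - | blkparse -i -    # decode block-layer events live; look at request sizes and merging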
Also, you might want to consider using something like fio to run your tests instead of inventing your own benchmark program -- it'll be easier for others to reproduce your results and trust that your program isn't at fault.
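For instance, rough fio equivalents of the dd runs above might look like this (flags are illustrative; like the original test, this writes to the raw device and destroys whatever is on it):
$ fio --name=direct --filename=/dev/sda --rw=write --bs=32k --size=20g --ioengine=psync --direct=1
$ fio --name=buffered --filename=/dev/sda --rw=write --bs=32k --size=20g --ioengine=psync --direct=0 --end_fsync=1   # end_fsync makes the buffered number include the final flush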