Why do high IO rate operations slow everything down on Linux? [closed]

Closed. This question is off-topic. It is not currently accepting answers.

Closed 12 years ago.

This may be slightly OT, but I was wondering why a process which heavily uses IO (say, cp-ing a big file from one location to another on the same disk) slows everything down, even processes which are mostly CPU bound. I noticed this on both OSes I use heavily (Mac OS X and Linux).

In particular, I wonder why multi-core does not really help here: is it a hardware limitation for commodity hardware (disk controller, etc.), an OS limitation, or is there something inherently hard about allocating the right resources (scheduling)?


It could be a limitation of the current scheduler. Google "Galbraith's sched:autogroup patch" or "linux miracle patch" (yes really!). There's apparently a 200-line patch in the process of being refined and merged which adds group scheduling, about which Linus says:

I'm also very happy with just what it does to interactive performance. Admittedly, my "testcase" is really trivial (reading email in a web-browser, scrolling around a bit, while doing a "make -j64" on the kernel at the same time), but it's a test-case that is very relevant for me. And it is a huge improvement.

Before-and-after videos here.
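
If you want to check whether a given kernel already carries this feature (it was eventually merged as "autogroup" scheduling), here is a minimal sketch in C, assuming a kernel new enough to expose /proc/sys/kernel/sched_autogroup_enabled:

    /* Minimal sketch: report whether autogroup scheduling is enabled.
     * Assumes a kernel that includes the autogroup patch, which exposes
     * /proc/sys/kernel/sched_autogroup_enabled (1 = on, 0 = off). */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/sys/kernel/sched_autogroup_enabled", "r");
        if (!f) {
            perror("sched_autogroup_enabled not available on this kernel");
            return 1;
        }
        int enabled = 0;
        if (fscanf(f, "%d", &enabled) == 1)
            printf("sched autogroup: %s\n", enabled ? "enabled" : "disabled");
        fclose(f);
        return 0;
    }

Writing 0 or 1 to the same file (as root) toggles the feature at runtime.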


Because copying a large file (bigger than the available buffer cache) usually involves bringing it through the buffer cache, which generally causes less recently used pages to be thrown out; those pages must then be brought back in later.

Other processes which are doing small amounts of occasional IO (say, just stat'ing a directory) then get their cached pages blown away and must do physical reads to bring those pages back in.

Hopefully this can be fixed by a copy command which detects this kind of thing and advises the kernel accordingly (e.g. with posix_fadvise), so that a large one-off bulk transfer of a file which does not need to be read again afterwards does not completely discard all clean pages from the buffer cache, as it currently tends to do.
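
Until cp does that itself, a program can give the kernel the hint on its own. A rough sketch of the idea in C (file names are placeholders, not anything from the question; error handling is minimal):

    /* Sketch of a cache-friendly bulk copy: after writing each chunk, tell
     * the kernel we won't need those pages again (POSIX_FADV_DONTNEED), so
     * the copy doesn't evict other processes' working sets from the page
     * cache. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int in  = open("bigfile", O_RDONLY);          /* placeholder paths */
        int out = open("bigfile.copy", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (in < 0 || out < 0) { perror("open"); return 1; }

        char buf[1 << 20];          /* 1 MiB chunks */
        off_t done = 0;
        ssize_t n;
        while ((n = read(in, buf, sizeof buf)) > 0) {
            if (write(out, buf, n) != n) { perror("write"); return 1; }
            fdatasync(out);   /* pages must be clean before they can be dropped */
            posix_fadvise(in,  done, n, POSIX_FADV_DONTNEED);
            posix_fadvise(out, done, n, POSIX_FADV_DONTNEED);
            done += n;
        }
        close(in);
        close(out);
        return 0;
    }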


A high rate of IO operations usually means a high rate of interrupts that must be serviced by the CPU, which takes CPU time.

In the case of cp, it also uses a considerable amount of the available memory bandwidth, as each block of data is copied to and from userspace. This also tends to evict data required by other processes from the CPU's caches and TLB, which slows those processes down as they take cache misses.
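
That per-block trip through userspace is avoidable: on reasonably recent Linux kernels, sendfile(2) moves data between two file descriptors inside the kernel, so it never touches a userspace buffer at all. A hedged sketch (paths are made up for illustration):

    /* Sketch: copy a file with sendfile(2) so the data never passes through
     * a userspace buffer, saving memory bandwidth and cache pollution
     * compared to a read()/write() loop. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/sendfile.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        int in  = open("bigfile", O_RDONLY);          /* placeholder paths */
        int out = open("bigfile.copy", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (in < 0 || out < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(in, &st) < 0) { perror("fstat"); return 1; }

        off_t offset = 0;
        while (offset < st.st_size) {
            ssize_t sent = sendfile(out, in, &offset, st.st_size - offset);
            if (sent <= 0) { perror("sendfile"); return 1; }
        }
        close(in);
        close(out);
        return 0;
    }

It doesn't remove the interrupt load or the disk contention, but it does cut out one pass over the data through the CPU caches.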


Also, would you know a way to validate your hypothesis on Linux, e.g. by measuring the number of interrupts while doing IO-intensive operations?

To do with interrupts, I'm guessing that caf's hypothesis is:

  • many interrupts per second;
  • interrupts are serviced by any/all CPUs;
  • therefore, interrupts flush the CPU caches.

The statistics you'd need to test that would be the number of interrupts per second per CPU.
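
Those numbers are visible in /proc/interrupts, which has one column per CPU. A rough sketch in C that samples it twice, a second apart, and prints an interrupts-per-second figure per CPU (the parsing is deliberately simplistic):

    /* Sketch: approximate interrupts/second/CPU by sampling /proc/interrupts
     * twice and summing each CPU's column. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define MAX_CPUS 64

    /* Sum every IRQ line's per-CPU counters into totals[]; return CPU count. */
    static int sample(unsigned long long totals[MAX_CPUS])
    {
        FILE *f = fopen("/proc/interrupts", "r");
        if (!f) { perror("/proc/interrupts"); exit(1); }

        char line[4096];
        int ncpu = 0;

        /* The header line looks like "           CPU0       CPU1 ..." */
        if (fgets(line, sizeof line, f))
            for (char *p = line; (p = strstr(p, "CPU")) != NULL; p += 3)
                ncpu++;
        if (ncpu > MAX_CPUS)
            ncpu = MAX_CPUS;

        memset(totals, 0, MAX_CPUS * sizeof totals[0]);
        while (fgets(line, sizeof line, f)) {
            char *p = strchr(line, ':');
            if (!p)
                continue;
            p++;
            for (int cpu = 0; cpu < ncpu; cpu++) {
                char *end;
                unsigned long long v = strtoull(p, &end, 10);
                if (end == p)
                    break;      /* lines like ERR:/MIS: have fewer columns */
                totals[cpu] += v;
                p = end;
            }
        }
        fclose(f);
        return ncpu;
    }

    int main(void)
    {
        unsigned long long before[MAX_CPUS], after[MAX_CPUS];
        int ncpu = sample(before);
        sleep(1);
        sample(after);
        for (int cpu = 0; cpu < ncpu; cpu++)
            printf("CPU%d: %llu interrupts/s\n", cpu, after[cpu] - before[cpu]);
        return 0;
    }

Run it once while the machine is idle, then again while a big cp is going, and compare.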

I don't know whether it's possible to tie interrupts to a single CPU: see http://www.google.com/#q=cpu+affinity+interrupt for further details.
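
For what it's worth, on Linux you can tie an IRQ to particular CPUs by writing a hex CPU mask to /proc/irq/<n>/smp_affinity (root required). A tiny sketch; the IRQ number here is made up, so pick a real one from /proc/interrupts:

    /* Sketch: pin one IRQ to CPU 0 by writing a CPU bitmask to its
     * /proc/irq/<n>/smp_affinity file. Requires root. IRQ 19 is only an
     * example. */
    #include <stdio.h>

    int main(void)
    {
        const int irq = 19;             /* hypothetical IRQ number */
        char path[64];
        snprintf(path, sizeof path, "/proc/irq/%d/smp_affinity", irq);

        FILE *f = fopen(path, "w");
        if (!f) { perror(path); return 1; }
        fprintf(f, "1\n");              /* hex bitmask: bit 0 set => CPU 0 only */
        fclose(f);
        return 0;
    }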

Here's something I don't understand (this is the first time I've looked at this question): perfmon on my laptop (running Windows Vista) is showing 2000 interrupts/second (1000 on each core) when it's almost idle (doing nothing but displaying perfmon). I can't imagine which device is generating 2000 interrupts/second, and I would have thought that's enough to blow away the CPU caches (my guess is that the CPU quantum for a busy thread is something like 50 msec). It's also showing an average of 350 DPCs/sec.

Does high-end hardware suffer from similar issues?

One type of hardware difference might be the disk hardware and disk device driver, generating more or fewer interrupts and/or other contention.
