Linux: monitor read rates of files
I have a custom application which has a bunch of files open. I can see the file handles open by a process using "lsof", and I can see the files being accessed using "watch -d 'ls -alh'" and watching the mtime/ctime. However, I would like to see the rate at which data is being read from or written to these files, i.e. I need to determine whether one file is being read at 100 Mbps and maxing out a disk, or whether there are several files each being written at 1 Mbps. Looking at the throughput of a specific disk isn't much use, as I need to narrow down which file is being hammered.
I'm afraid there is also a catch; ideally I need to determine this without installing any other software or writing scripts... Simply because this is one of those "very-production" systems.
Does anybody know of a way? Many thanks in advance for any suggestions.
Check out strace. It can attach to a running process and show you exactly which syscalls it executes and with what arguments; with a small interpreter script you can work out how many bytes are being read from or written to each file handle while you watch.
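For instance, something along these lines might work as a rough sketch (the PID 1234 is a placeholder; it assumes strace, awk and coreutils' timeout are already on the box, so nothing extra has to be installed):

    # Trace only read/write syscalls of PID 1234 for 10 seconds. -s 0 suppresses
    # the buffer contents so the output stays easy to parse; Ctrl-C works too if
    # 'timeout' isn't available.
    timeout 10 strace -p 1234 -e trace=read,write -s 0 -o /tmp/io.trace

    # Each line looks like:  read(7, ""..., 65536) = 65536
    # Sum the return values (bytes actually transferred) per syscall and fd,
    # skipping error returns such as "= -1 EAGAIN (...)":
    awk -F'[(,)= ]+' '/^(read|write)\(/ && $NF ~ /^[0-9]+$/ {
        bytes[$1 " fd " $2] += $NF
    } END {
        for (k in bytes)
            printf "%-14s %12d bytes (%.2f MB/s over 10s)\n", k, bytes[k], bytes[k] / 10 / 1048576
    }' /tmp/io.trace

    # Map the fd numbers back to file names:
    ls -l /proc/1234/fd

For a multi-threaded process you would add -f to strace, which prefixes each trace line with a PID and needs a small tweak to the awk match. Bear in mind that strace does slow the traced process down somewhat, which matters on a "very-production" box.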