Windows Xperf diskio action does not show me a file that a program reads during the performance trace session

I run xperf to collect trace info for a program while it runs. The program is a .NET program written in F#, and it reads a file like this:

System.IO.File.ReadAllLines("MyReadFile.txt")
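For context, the reader is essentially this (a minimal, illustrative sketch; the real program just prints the contents, as described below):

[<EntryPoint>]
let main argv =
    // read the whole file and echo it to the console
    let lines = System.IO.File.ReadAllLines("MyReadFile.txt")
    lines |> Array.iter (printfn "%s")
    0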

Well, I start xperf:

xperf -on DiagEasy

Then I stop xperf and merge the trace into a file:

xperf -d myfile.etl

OK.

Now I run:

xperf -i myfile.etl -o myfile_stat.txt -a diskio -detail

I do this to get a file with all the information about file activity. The output is a text file formatted so I can see disk statistics broken down by file. Each file touched during the trace session is listed, along with details about the process that read/wrote it and so on...

But MyReadFile.txt does not appear there.

Why? Is it because the CPU sampling frequency is too low? If so, how can I change it?

I'm sure my program reads the file: it starts and prints out the contents...

Thanks


DiagEasy turns on the ETW instrumentation for I/O to/from the disk. If the file is already in memory, then there will be no disk I/O. You need to turn on the FILE_IO and FILE_IO_INIT events, as described by Gary below, to capture all file accesses, even to files currently in memory.
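For example (a sketch; kernel groups and flags can normally be combined with '+', but check the exact names your version of the Windows Performance Toolkit accepts):

xperf -on DiagEasy+FILE_IO+FILE_IO_INIT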

You may be asking why the file was in memory, though. There are two ways the file could have been in memory when you collected the data.

First, you accessed the file at some point since you booted the system, either to read it or to write it. The file will remain in memory until there is enough demand for memory that its pages are pushed out of RAM. Since these are file-backed pages, any that have been modified will be written back to the file (MyReadFile.txt) before the pages are zeroed and given to another process for use.

The second way the file could be in memory is that SuperFetch saw repeated accesses to this file and proactively loaded it into memory while the disk was otherwise idle. This is done to remove the latency that accesses to the file would otherwise incur when reading the data from disk.


File I/O monitoring isn't based on sampling. Instead, the relevant ETW provider raises an event for each monitored I/O operation, so it shouldn't miss anything.

If this were my code I'd suspect it hadn't really read the file. ERROR_FILE_NOT_FOUND, perhaps?

Also, note that the kernel group is spelled DiagEasy, not EasyDiag.

FWIW, here's how I do file monitoring, with stack traces enabled:

xperf -on PROC_THREAD+LOADER+FILE_IO+FILE_IO_INIT+FILENAME -stackwalk FileCreate+FileRead+FileWrite+FileFlush+FileQueryInformation+FileSetInformation+FileDelete
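After stopping and merging the trace (xperf -d, as you already do), you can then dump the file I/O data with the fileio action instead of diskio (assuming that action is available in your version of xperf), e.g.:

xperf -i myfile.etl -o myfile_fileio.txt -a fileio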

Regards, Gary
