Non-blocking write to file in C/C++

I'm writing a logging program that reads from a serial port once a second and prints the data to a log file. The problem is that sometimes something holds up my loop and data gets backed up. After timing every activity in the loop, I noticed that the function that prints my data to the log file is the one that sometimes takes too long.

File writing with overlapped IO vs file writing in a separate thread

The "write to files" should not be blocking my program by default. But it seems like they are.

I'm using MS Visual Studio Express and I'm writing a console C++ app. Can someone tell me whether fprintf and << are supposed to be non-blocking/asynchronous by default? If not, is there a way to make them so?


Here is how things work in Linux:

Reads from and writes to regular files cannot be made non-blocking. Kernel buffering normally keeps them from blocking; when the kernel runs out of memory for buffering, however, they will block.

From The Linux Programming Interface: A Linux and UNIX System Programming Handbook:

Nonblocking mode can be used with devices (e.g., terminals and pseudoterminals), pipes, FIFOs, and sockets. (Because file descriptors for pipes and sockets are not obtained using open(), we must enable this flag using the fcntl() F_SETFL operation described in Section 5.3.)

O_NONBLOCK is generally ignored for regular files, because the kernel buffer cache ensures that I/O on regular files does not block, as described in Section 13.1. However, O_NONBLOCK does have an effect for regular files when mandatory file locking is employed (Section 55.4).

From Advanced Programming in the UNIX Environment 2nd Ed:

We also said that system calls related to disk I/O are not considered slow, even though the read or write of a disk file can block the caller temporarily.

From http://www.remlab.net/op/nonblock.shtml:

Regular files are always readable and they are also always writeable. This is clearly stated in the relevant POSIX specifications. I cannot stress this enough. Putting a regular file in non-blocking mode has ABSOLUTELY no effects other than changing one bit in the file flags.

Reading from a regular file might take a long time. For instance, if it is located on a busy disk, the I/O scheduler might take so much time that the user will notice the application is frozen.

Nevertheless, non-blocking mode will not work. It simply will not work. Checking a file for readability or writeability always succeeds immediately. If the system needs time to perform the I/O operation, it will put the task in non-interruptible sleep from the read or write system call.
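To make the quoted behavior concrete, here is a minimal POSIX sketch (not part of the original answer; the file name test.log is a placeholder): enabling O_NONBLOCK on a regular file via fcntl() succeeds, yet the subsequent write() can still sleep inside the kernel.

    #include <fcntl.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        int fd = open("test.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (fd == -1) { perror("open"); return 1; }

        // Enable O_NONBLOCK via the fcntl() F_SETFL operation, as TLPI describes.
        int flags = fcntl(fd, F_GETFL);
        if (fcntl(fd, F_SETFL, flags | O_NONBLOCK) == -1) { perror("fcntl"); return 1; }

        // For a regular file this write can still sleep in the kernel (e.g. when
        // the buffer cache is under memory pressure); the flag has no effect here.
        const char msg[] = "log line\n";
        if (write(fd, msg, sizeof msg - 1) == -1) perror("write");

        close(fd);
        return 0;
    }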


IO streams are typically buffered, and every so often these buffers are flushed (to the OS, then to the disk). However, you have little control over when and how often this happens (strictly speaking you can, but you usually don't want to; see the sketch below). It is when the flushing happens that you see your outliers.
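The "strictly speaking you can" part refers to setvbuf(), which lets you choose the stdio buffer size and mode before the first operation on the stream. A hedged sketch; the 64 KiB size and the file name are arbitrary examples, not recommendations:

    #include <cstdio>

    int main() {
        FILE* log = fopen("app.log", "a");
        if (!log) return 1;

        // _IOFBF = fully buffered: fprintf() only reaches the OS when buf fills
        // up (or on fflush/fclose), so the costly flushes happen less often;
        // they don't disappear, they just move.
        static char buf[64 * 1024];
        setvbuf(log, buf, _IOFBF, sizeof buf);

        for (int i = 0; i < 1000; ++i)
            fprintf(log, "sample %d\n", i);

        fclose(log);  // flushes whatever is still sitting in buf
        return 0;
    }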

"non-blocking" and "asynchronous" are not words I would use with standard streams. If you want to reduce these delays, consider memory mapped file writes - boost has a nice portable wrapper for memory mapped files.


fprintf and << do not write with overlapped IO by default, and I'm confident that there is no option to turn it on; overlapped IO is not portable. You must use WriteFile with an initialized OVERLAPPED structure as the last parameter.
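For reference, a hedged sketch of that WriteFile approach; the file name and the minimal error handling are illustrative, not from the answer:

    #include <windows.h>
    #include <cstdio>

    int main() {
        // FILE_FLAG_OVERLAPPED must be given at open time for overlapped I/O.
        HANDLE h = CreateFileA("async.log", GENERIC_WRITE, 0, NULL,
                               CREATE_ALWAYS, FILE_FLAG_OVERLAPPED, NULL);
        if (h == INVALID_HANDLE_VALUE) return 1;

        OVERLAPPED ov = {};                           // Offset 0 in the file
        ov.hEvent = CreateEventA(NULL, TRUE, FALSE, NULL);

        const char msg[] = "log line\r\n";
        DWORD written = 0;
        if (!WriteFile(h, msg, sizeof msg - 1, NULL, &ov)) {
            if (GetLastError() == ERROR_IO_PENDING) {
                // The write is in flight; the logging loop could do other work
                // here and only collect the result when it actually needs it.
                GetOverlappedResult(h, &ov, &written, TRUE /* wait */);
            } else {
                std::printf("WriteFile failed: %lu\n", GetLastError());
            }
        }

        CloseHandle(ov.hEvent);
        CloseHandle(h);
        return 0;
    }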


Based on your description, it looks like you are writing too little data too many times. When IO activity is high, it can cause delays in buffered file I/O. As mentioned before, you can use memory-mapped files or block writes. The idea is to reduce the number of writes by combining multiple writes into one, so instead of doing ten writes of 500 bytes each, you do one write of 5 KB. On most operating systems the typical page size (and write size) is around 4 KB (I'm not sure about Windows). So try an open-source package, or write a wrapper that reduces the number of writes; a sketch of the idea follows.
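A minimal sketch of that batching idea; the class name, the 4 KB threshold, and the file name are illustrative choices, not from the answer:

    #include <cstddef>
    #include <cstdio>
    #include <string>

    class BatchedLog {
    public:
        explicit BatchedLog(const char* path) : f_(std::fopen(path, "a")) {}
        ~BatchedLog() { flush(); if (f_) std::fclose(f_); }

        // Buffer small records and hand the OS one page-sized write
        // instead of many tiny ones.
        void append(const std::string& line) {
            buf_ += line;
            if (buf_.size() >= kBlockSize) flush();
        }

        void flush() {
            if (f_ && !buf_.empty()) {
                std::fwrite(buf_.data(), 1, buf_.size(), f_);
                buf_.clear();
            }
        }

    private:
        static const std::size_t kBlockSize = 4096;  // ~ typical page size
        std::FILE* f_;
        std::string buf_;
    };

    int main() {
        BatchedLog log("batched.log");
        for (int i = 0; i < 1000; ++i)
            log.append("sample reading\n");   // many 15-byte appends become
        return 0;                             // a few ~4 KB fwrite calls
    }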
