Can multiple processes append to a file using fopen without any concurrency problems?
I have a process opening a file in append mode. In this case it is a log file. Sample code:
#include <stdio.h>

int main(int argc, char **argv) {
FILE *f;
f = fopen("log.txt", "a");
if (f == NULL)
return 1;
fprintf(f, "log entry line\n");
fclose(f);
return 0;
}
Two questions:
- If I have multiple processes appending to the same file, will each log line appear distinctly or can they be interlaced as the processes context switch?
- Will this write block if lots of processes require access to the file, therefore causing concurrency problems?
I am considering either doing this in its simplest incarnation or using zeromq to pump log entries over pipes to a log collector.
I did consider syslog but I don't really want any platform dependencies on the software.
The default platform is Linux for this btw.
I don't know about fopen and fprintf, but you could open the file using O_APPEND. Then each write will go at the end of the file without a hitch (without getting mixed with another write).
Actually, looking in the standard:
The file descriptor associated with the opened stream shall be allocated and opened as if by a call to open() with the following flags:
a or ab O_WRONLY|O_CREAT|O_APPEND
So I guess it's safe to fprintf from multiple processes as long as the file has been opened with "a".
The standard (for open/write, not fopen/fwrite) states that:
If the O_APPEND flag of the file status flags is set, the file offset shall be set to the end of the file prior to each write and no intervening file modification operation shall occur between changing the file offset and the write operation.
For fprintf() to benefit from this, you have to disable buffering on the file, so that each call reaches the kernel in a single write.
You'll certainly have platform dependencies since Windows can't handle multiple processes appending to the same file.
Regarding synchronization problems, I think that line-buffered output should save you most of the time (more than 99.99% of short log lines were intact in my short shell-based test), but not every time. Explicit semantics are definitely preferable, and since you won't be able to write this hack portably anyway, I'd recommend a syslog approach.
When your processes write something like:
"Here's process #1"
"Here's process #2"
you will probably get something like:
"Hehere's process #2re's process #1"
You will need to synchronize them.
EDIT to answer your questions explicitly:
- If I have multiple processes appending to the same file, will each log line appear distinctly or can they be interlaced as the processes context switch?
Yes, each log line will appear intact, because according to MSDN (VS2010):
"This function [that is, fwrite( )] locks the calling thread and is therefore thread-safe. For a non-locking version, see _fwrite_nolock."
The same is implied in the GNU manual:
"— Function: size_t fwrite (const void *data, size_t size, size_t count, FILE *stream)
This function writes up to count objects of size size from the array data, to the stream stream. The return value is normally count, if the call succeeds. Any other value indicates some sort of error, such as running out of space.
— Function: size_t fwrite_unlocked (const void *data, size_t size, size_t count, FILE *stream)
The fwrite_unlocked function is equivalent to the fwrite function except that it does not implicitly lock the stream.
This function [i.e., fwrite_unlocked( )] is a GNU extension. "
- Will this write block if lots of processes require access to the file, therefore causing concurrency problems?
Yes, by implication from question 1.
Unless you do some sort of synchronization, the log lines may overlap. So to answer number two: that depends on how you implement the locking and logging code. If you just lock, write to the file and unlock, that may cause contention if lots of processes try to access the file at the same time.