Error checking fprintf when printing to stderr
According to the docs, fprintf can fail and will return a negative number on failure. There are clearly many situations where it would be useful to check this value.
However, I usually use fprintf to print error messages to stderr. My code will usually look something like this:
rc = foo();
if (rc) {
    fprintf(stderr, "An error occurred\n");
    // sometimes stuff will need to be cleaned up here
    return 1;
}
In these cases, is it still possible for fprintf to fail? If so, is there anything that can be done to display the error message somehow, or is there a more reliable alternative to fprintf?
If not, is there any need to check fprintf when it is used in this way?
The C standard says that the file streams stdin, stdout, and stderr shall be connected somewhere, but it doesn't specify where, of course.
(C11 §7.21.3 Files ¶7: At program startup, three text streams are predefined and need not be opened explicitly -- standard input (for reading conventional input), standard output (for writing conventional output), and standard error (for writing diagnostic output). As initially opened, the standard error stream is not fully buffered; the standard input and standard output streams are fully buffered if and only if the stream can be determined not to refer to an interactive device.)
It is perfectly feasible to run a program with the standard streams redirected:
some_program_of_yours >/dev/null 2>&1 </dev/null
Your writes will succeed - but the information won't go anywhere. A more brutal way of running your program is:
some_program_of_yours >&- 2>&- </dev/null
This time, it has been run without open file streams for stdout and stderr — in contravention of the standard. It is still reading from /dev/null in the example, which means it doesn't get any useful data input from stdin.
Many a program doesn't bother to check that the standard I/O channels are open. Many a program doesn't bother to check that the error message was successfully written. Devising a suitable fallback as outlined by Tim Post and whitey04 isn't always worth the effort. If you run the ls command with its outputs suppressed, it will simply do what it can and exit with a non-zero status:
$ ls; echo $?
gls
0
$ ls >&- 2>&-; echo $?
2
$
(Tested on RHEL Linux.) There really isn't a need for it to do more. On the other hand, if your program is supposed to run in the background and write to a log file, it probably won't write much to stderr, unless it fails to open the log file (or spots an error on the log file).
Note that if you fall back on syslog(3) (or POSIX), you have no way of knowing whether your calls were 'successful' or not; the syslog functions all return no status information. You just have to assume that they were successful. It is your last resort, therefore.
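For reference, the declarations in <syslog.h> illustrate why there is nothing to check — each of these returns void:

/* as declared in <syslog.h>: no return value to inspect */
void openlog(const char *ident, int option, int facility);
void syslog(int priority, const char *format, ...);
void closelog(void);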
Typically, you'd employ some kind of logging system that could (try to) handle this for you, or you'll need to duplicate that logic in every area of your code that prints to standard error and exits.
You have some options:
- If fprintf fails, try syslog.
- If both fail, try creating a 'crash.{pid}.log' file that contains the information you'd want in a bug report (a sketch follows this list). Check for the existence of these files when you start up, as they can tell your program that it crashed previously.
- Let network-connected users enable a configuration option that allows your program to submit an error report.
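A minimal sketch of that crash-file idea; the file-name pattern mirrors the 'crash.{pid}.log' suggestion above, and what gets written is just a placeholder:

#include <stdio.h>
#include <unistd.h>

/* Sketch: dump a minimal report to crash.<pid>.log when nothing else worked. */
static void write_crash_log(const char *details)
{
    char name[64];
    FILE *fp;

    snprintf(name, sizeof name, "crash.%ld.log", (long)getpid());
    fp = fopen(name, "w");
    if (fp != NULL) {
        fprintf(fp, "%s\n", details);   /* whatever you'd want in a bug report */
        fclose(fp);
    }
}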
Incidentally, open(), read(), and write() are good friends to have when the fprintf family of functions isn't working.
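For instance, a last-ditch message can go straight to file descriptor 2, bypassing stdio altogether (a sketch; the function name and message are arbitrary):

#include <string.h>
#include <unistd.h>

/* Sketch: skip stdio and write directly to the stderr file descriptor. */
static void raw_error(const char *msg)
{
    /* write() can still fail (e.g. fd 2 is closed), but there is little left to try */
    (void)write(STDERR_FILENO, msg, strlen(msg));
}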
As whitey04 says, sometimes you just have to give up and do your best to not melt down with fireworks going off. But do try to isolate that kind of logic into a small library.
For instance:
best_effort_logger(LOG_CRIT, "Heap corruption likely, bailing out!");
is much cleaner than a series of if / else if / else blocks every place things could possibly go wrong.
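One possible shape for such a helper, assuming the syslog fallback discussed elsewhere on this page; the name and prototype simply match the call above, so treat it as a sketch rather than a complete library:

#include <stdio.h>
#include <syslog.h>

/* Sketch: report on stderr when possible, otherwise fall back to syslog. */
void best_effort_logger(int priority, const char *msg)
{
    if (fprintf(stderr, "%s\n", msg) >= 0 && fflush(stderr) == 0)
        return;                        /* stderr took the message */

    syslog(priority, "%s", msg);       /* last resort: nothing to check */
}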
You could put the error on stdout or somewhere else... At some point you just have to give error reporting a best effort and then give up.
The key is that your app "gracefully" handles it (e.g. the OS doesn't have to kill it for being bad and it tells you why it exited [if it can]).
Yes, of course fprintf to stderr can fail. For instance, stderr could be an ordinary file and the disk could run out of space, or it could be a pipe that gets closed by the reader, etc.
Whether you should check an operation for failure depends largely on whether you could achieve better program behavior by checking. In your case, the only conceivable things you could do on failing to print the error message are to try to print another one (which will almost surely also fail) or to exit the program (which is probably worse than failing to report an error, but perhaps not always).
Some programs that really want to log error messages will set up an alternate stack at program start-up to reserve some amount of memory (see sigaltstack(2)) that can be used by a signal handler (usually for SIGSEGV) to report errors. Depending upon the importance of logging your error, you could investigate using alternate stacks to pre-allocate some chunk of memory. It might not be worth it :) but sometimes you'd give anything for some hint of what happened.
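A hedged sketch of that setup, assuming POSIX sigaltstack(2) and sigaction(2); the handler sticks to async-signal-safe calls (write and _exit), and the message text is a placeholder:

#include <signal.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Sketch: report a crash from a SIGSEGV handler running on a reserved stack. */
static void crash_handler(int sig)
{
    static const char msg[] = "fatal signal caught, bailing out\n";
    (void)sig;
    (void)write(STDERR_FILENO, msg, sizeof msg - 1);   /* async-signal-safe */
    _exit(1);
}

static void install_crash_handler(void)
{
    stack_t ss;
    struct sigaction sa;

    ss.ss_sp = malloc(SIGSTKSZ);       /* reserve the emergency stack up front */
    ss.ss_size = SIGSTKSZ;
    ss.ss_flags = 0;
    if (ss.ss_sp == NULL || sigaltstack(&ss, NULL) == -1)
        return;                        /* could not reserve it; carry on without */

    memset(&sa, 0, sizeof sa);
    sa.sa_handler = crash_handler;
    sa.sa_flags = SA_ONSTACK;          /* run the handler on the alternate stack */
    sigemptyset(&sa.sa_mask);
    sigaction(SIGSEGV, &sa, NULL);
}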