Should I keep a file open or should I open and close often?
Is it customary in such a case to open the file only once?
#!/usr/bin/env perl
use warnings;
use 5.012;
use autodie;            # open/close now die on failure instead of failing silently

my $file = 'my_file';

open my $fh, '>>', $file;
say $fh "Begin";
close $fh;

# On Ctrl-C, append a final record and exit.
$SIG{INT} = sub {
    open my $fh, '>>', $file;
    say $fh "End";
    close $fh;
    exit;
};

my $result;
while (1) {
    $result++;
    # ...
    # ...
    # ...
    open my $fh, '>>', $file;   # reopened and closed on every iteration
    say $fh $result;
    close $fh;
    sleep 3;
}
Short answer: Almost always, you should open/close only once. Details below.
The decision of whether to do that depends on four things:

1. Are there other processes that may need to write to the file?
If so, you may need to lock the file, and good behavior for a process designed for concurrent use is to release a locked shared resource as quickly as possible so that others can acquire the lock (a minimal flock sketch follows this list).

2. Are there MANY files you need to open?
If so, you might run out of file handles if too many are open at once, so you need to close some.

3. How much tolerance do you have for losing the file's data if the program crashes?
If you need the data saved from the buffer into the file, you must flush it. That CAN be done by closing frequently, although a better solution is either frequent flushing or turning autoflush on for the file handle (an autoflush sketch also follows this list).

4. Do you care greatly about not being able to close the file after running out of disk space?
If so, the more often you close and reopen the file, the less data you will lose to a full filesystem, since whatever you wrote since the last successful open may be gone.
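For scenario #1, here is a minimal sketch of the lock-briefly pattern (the file name 'my_file' is just an assumption carried over from the question):

#!/usr/bin/env perl
use warnings;
use 5.012;
use Fcntl ':flock';    # imports the LOCK_EX constant

my $file = 'my_file';  # hypothetical shared log file
open my $fh, '>>', $file or die "Cannot open '$file': $!";
flock $fh, LOCK_EX or die "Cannot lock '$file': $!";  # block until we hold the exclusive lock
say $fh "one record";  # write while holding the lock
close $fh or die "Cannot close '$file': $!";  # closing releases the lock for other writers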
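And for scenario #3, a sketch of keeping one handle open for the program's lifetime while still flushing every record immediately (IO::Handle is loaded explicitly so that the autoflush method is available):

#!/usr/bin/env perl
use warnings;
use 5.012;
use IO::Handle;       # provides the autoflush method on lexical file handles

my $file = 'my_file'; # same assumed file name as above
open my $fh, '>>', $file or die "Cannot open '$file': $!";
$fh->autoflush(1);    # every say/print is pushed to the OS immediately
say $fh "record $_" for 1 .. 3;  # each line hits the file as it is written
close $fh or die "Cannot close '$file': $!";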
In any other scenario, open/close only once (well, plus maybe an extra close in a __DIE__ handler and an END {} block), and the majority of the time you WILL likely be in one of those other scenarios.
That is because opening and closing the file repeatedly wastes system resources for no reason whatsoever AND makes your code longer. More specifically, open and close are expensive operations: each one requires a system call (which may force a jump from userland into the kernel) and may trigger extra disk IO, which is VERY expensive resource-wise. To verify this, run a system-utilization measurement utility on your OS while a Perl script does nothing except open and close 10,000 different file names, 100 times each.
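If you want to measure it from Perl itself, here is a hypothetical micro-benchmark using the core Benchmark module (the scratch-file name and the two-second budget are arbitrary choices of mine):

#!/usr/bin/env perl
use warnings;
use 5.012;
use Benchmark 'cmpthese';

my $file = 'bench_scratch';            # throwaway scratch file
open my $once, '>>', $file or die $!;  # handle reused by the open-once variant

cmpthese(-2, {    # run each variant for about 2 CPU seconds and compare rates
    open_every_time => sub {
        open my $fh, '>>', $file or die $!;
        say $fh 'x';
        close $fh;
    },
    open_once => sub {
        say $once 'x';
    },
});

END { close $once; unlink $file }  # clean up the scratch file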
Please note (regarding scenarios #3 and #4) that if you care greatly about not losing any data, you should not be using file IO in the first place; use a database or a messaging system with delivery guarantees instead.
It's customary, in normal programming, to open each file once and keep the open file handle for as long as your processing still has use for it.
Exceptions to this would be if file handling is limited to specific portions of the code (initialization and shutdown, for example) or to specific and relatively rare events (a signal handler tied to re-reading a configuration file, or to updating statistical or debugging dumps).
In the example you're showing, the extra open and close operations are utterly superfluous (and likely to be expensive in terms of performance and system overhead).
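By way of illustration, here is one way the question's loop could be written to open the file just once (a sketch; turning on autoflush is an assumption on my part, added so the file stays current between iterations):

#!/usr/bin/env perl
use warnings;
use 5.012;
use autodie;
use IO::Handle;   # for the autoflush method

my $file = 'my_file';
open my $fh, '>>', $file;   # one open for the lifetime of the program
$fh->autoflush(1);          # each record reaches the OS immediately
say $fh "Begin";

$SIG{INT} = sub {
    say $fh "End";   # reuse the already-open handle
    close $fh;
    exit;
};

my $result;
while (1) {
    $result++;
    # ...
    say $fh $result;
    sleep 3;
}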