process killed -- delete output file?
I have a bash script that runs on our shared web host. It dumps our MySQL database and zips up the output file. Sometimes the mysqldump process gets killed, which leaves an incomplete SQL file that still gets zipped. How do I get my script to notice that mysqldump was killed, and delete the output file when that happens?
Edit: here's the line from my script
nice -19 mysqldump -uuser -ppassword -h database.hostname.com --skip-opt --all --complete-insert --add-drop-table database_name > ~/file/system/path/filename.sql
And here's what I get on occasion from my buddy Cron:
/home/user/backup_script.bash: line 17: 12611 Killed nice -19 mysqldump -uuser -ppassword -h database.hostname.com --skip-opt --all --complete-insert --add-drop-table database_name > ~/file/system/path/filename.sql
So when this happens, I want to just delete filename.sql, because it will contain some of the inserts but not all of them. I know there is some way in bash to capture the exit status of a command, true or false, and then do something if it's false.
If mysqldump gets killed, it will exit with a non-zero status:
if ! mysqldump ...; then
    rm ...
fi
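As a minimal sketch, here is that pattern applied to the line from the question (credentials, host, and path are the placeholders from the question; the OUTFILE variable and the gzip step are assumptions, since the question only says the file gets zipped afterwards). nice passes through the exit status of the command it runs, so the test still reflects mysqldump itself:

OUTFILE=~/file/system/path/filename.sql
if ! nice -19 mysqldump -uuser -ppassword -h database.hostname.com \
        --skip-opt --all --complete-insert --add-drop-table \
        database_name > "$OUTFILE"; then
    # mysqldump was killed or failed -- remove the partial dump
    rm -f "$OUTFILE"
else
    # exit status 0: the dump completed, safe to compress
    gzip "$OUTFILE"
fi

A command killed by a signal exits with status 128 plus the signal number, which is non-zero, so the same check covers both ordinary failures and the cron kills you are seeing.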
You could use ps or pgrep to see if the process is still running based on its name, or you could use lsof on the SQL file to see whether a process is accessing it. However, once the process completes normally, that open file handle will no longer be there.
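For example, a sketch of both checks (the file path is the one from the question; note these only tell you whether something is still running or writing, not whether the dump succeeded):

# exits 0 if a process named exactly "mysqldump" is running
if pgrep -x mysqldump > /dev/null; then
    echo "mysqldump is still running"
fi

# exits 0 if any process still has the dump file open
if lsof ~/file/system/path/filename.sql > /dev/null 2>&1; then
    echo "filename.sql is still being written"
fi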