Simple timer to measure how many seconds an operation took to complete
I run my own script to dump databases into files on a nightly basis.
I wanted to count the time (in seconds) it takes to dump each database, so I tried to write some helper functions, but I'm running into problems. I'm no expert at bash scripting, so if I'm doing it plain wrong, just say so and ideally suggest an alternative, please.
Here's the script:
#!/bin/bash

declare -i time_start

function get_timestamp {
    declare -i time_curr=`date -j -f "%a %b %d %T %Z %Y" "\`date\`" "+%s"`
    echo "get_timestamp:" $time_curr
    return $time_curr
}

function timer_start {
    get_timestamp
    time_start=$?
    echo "timer_start:" $time_start
}

function timer_stop {
    get_timestamp
    declare -i time_curr=$?
    echo "timer_stop:" $time_curr
    declare -i time_diff=$time_curr-$time_start
    return $time_diff
}
timer_start
sleep 3
timer_stop
echo $?
The code should really be quite self-explanatory; the echo commands are only there for debugging. What I expect to get is:
$ bash timer.sh
get_timestamp: 1285945972
timer_start: 1285945972
get_timestamp: 1285945975
timer_stop: 1285945975
3
Now this is not the case unfortunately. What I get is:
$ bash timer.sh
get_timestamp: 1285945972
timer_start: 116
get_timestamp: 1285945975
timer_stop: 119
3
As you can see, the value that the local variable time_curr
gets from the command is a valid timestamp, but returning this value causes it to be reduced to an integer between 0 and 255.
Can someone please explain to me why this is happening?
PS. This obviously is just my timer test script without any other logic.
UPDATE: Just to be perfectly clear, I want this to be part of a bash script very similar to this one, where I want to measure each loop cycle. Unless of course I can do that with time; in that case, please suggest a solution.
You don't need to do all this. Just run time <yourscript>
in the shell.
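For example, against the timer.sh test script above (the timings shown are purely illustrative; the bash time keyword prints real, user, and sys time on stderr, and the script sleeps for 3 seconds):
$ time bash timer.sh
...script output...
real    0m3.012s
user    0m0.004s
sys     0m0.003s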
$? is used to hold the exit status of a command and can only hold a value between 0 and 255. If you pass an exit code outside this range (say, in a C program calling exit(-1)), the shell will still receive a value in that range and set $? accordingly.
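You can see that truncation directly: the returned value comes back reduced modulo 256, which is exactly why the timestamp 1285945972 showed up as 116 (1285945972 % 256 == 116). A minimal demo:
function big_return {
    return 1285945972   # same value as the timestamp in the question
}
big_return
echo $?                 # prints 116, i.e. 1285945972 % 256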
As a workaround, you could just set a global variable in your bash function instead of returning the value:
function get_timestamp {
    declare -i time_curr=`date -j -f "%a %b %d %T %Z %Y" "\`date\`" "+%s"`
    echo "get_timestamp:" $time_curr
    get_timestamp_return_value=$time_curr
}

function timer_start {
    get_timestamp
    #time_start=$?
    time_start=$get_timestamp_return_value
    echo "timer_start:" $time_start
}
...
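Filling in the rest of that idea, the whole timer could look roughly like this (a sketch only; it uses a plain date +%s, which prints seconds since the epoch on both GNU and BSD date, instead of the date -j -f invocation above):
#!/bin/bash
declare -i time_start

function get_timestamp {
    # store the result in a global variable instead of using return
    get_timestamp_return_value=$(date +%s)
}

function timer_start {
    get_timestamp
    time_start=$get_timestamp_return_value
}

function timer_stop {
    get_timestamp
    # elapsed seconds, printed rather than returned
    echo $(( get_timestamp_return_value - time_start ))
}

timer_start
sleep 3
timer_stop    # prints 3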
I believe you should be able to use the existing "time" function.
After the update to the question: this was the bit of script from your link that was doing the for loop.
# dump each database in turn
for db in $databases; do
    echo $db
    $MYSQLDUMP --force --opt --user=$USER --password=$PASSWORD \
        --databases $db > "$OUTPUTDIR/$db.bak"
done
You could extract the inner portion of the loop into a new script (call it dump_one_db.sh) and do this inside the loop:
# dump each database in turn
for db in $databases; do
    time dump_one_db.sh $db
done
Make sure to write the output of time, together with the db name, into some file.
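One way to do that (a sketch; timings.log is just an assumed file name) is to redirect the report that time writes to stderr into a log file alongside the database name:
# dump each database in turn, recording how long each dump takes
for db in $databases; do
    echo "=== $db ===" >> timings.log
    { time dump_one_db.sh "$db" ; } 2>> timings.log
done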
This is happening because return codes need to be between 0 and 255; you can't return an arbitrary number. If you still don't want to use the builtin time and prefer to roll your own, change your functions to echo their timestamp and use command substitution ($( )) to grab the value.
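A rough sketch of that approach (just the timing part, not your dump logic):
#!/bin/bash
# Each function prints its result; the caller grabs it with command substitution.
function get_timestamp {
    date +%s
}

time_start=$(get_timestamp)
sleep 3
time_stop=$(get_timestamp)
echo $(( time_stop - time_start ))   # prints 3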