Calling a function for a period of time
I want to make a call - either a function call or evaluating some condition - for a PERIOD of time ... typically 10 - 20 seconds.
I would get some user input for the amount of time and do that ...
What is the proper function to use on Linux/Unix systems?
gettimeofday seems to be the way to go ... or perhaps time_t time(time_t *t) ... seems simple. What is preferred?
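For example, the gettimeofday version would presumably look something like this (a rough sketch, just to show what I mean):

#include <sys/time.h>

int main()
{
    struct timeval start, now;
    long long elapsed_usec = 0;

    gettimeofday(&start, NULL);
    while (elapsed_usec < 20 * 1000000LL) { /* run for ~20 seconds */
        /* call the function here... */
        gettimeofday(&now, NULL);
        elapsed_usec = (now.tv_sec - start.tv_sec) * 1000000LL
                     + (now.tv_usec - start.tv_usec);
    }
    return 0;
}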
So is it something like this you want? This will repeatedly call myfunc() for the next 20 seconds, so it could amount to a single call (if myfunc() takes at least 20 seconds to run) or hundreds of calls (if myfunc() takes a few milliseconds to complete):
#include <time.h>

void myfunc()
{
    /* do something */
}

int main()
{
    time_t start = time(NULL);  /* wall-clock time when we started */
    time_t now = time(NULL);

    /* keep calling myfunc() until 20 seconds have elapsed */
    while ((now - start) <= 20) {
        myfunc();
        now = time(NULL);
    }

    return 0;
}
It's probably worth asking what you're ultimately trying to achieve. If this is for profiling (e.g., what's the average amount of time function f takes to execute), then you might want to look at other solutions - e.g., using the built-in profiling that gcc gives you (when building code with the "-pg" option), and analyzing with gprof.
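For example (gcc and gprof being the standard tools here; "myprog" is just a placeholder name):

gcc -pg -o myprog myprog.c    # build with profiling instrumentation
./myprog                      # running it writes gmon.out to the current directory
gprof myprog gmon.out         # print the profile report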
This can be done like so
#include <ctime>

void myFunc()
{
    /* your function here */
}

int main()
{
    double TimeToRunInSecs = 10.0; // e.g. taken from user input
    clock_t c = clock();

    // keep calling myFunc() until the requested time has elapsed
    while (double(clock() - c) / CLOCKS_PER_SEC < TimeToRunInSecs)
    {
        myFunc();
    }
}
The standard clock() function returns the amount of processor time consumed since the process started, measured in clock ticks; there are CLOCKS_PER_SEC ticks in one second :) Note that this is CPU time, not wall-clock time, so time the process spends sleeping or blocked doesn't count.
HTH
I could do a
time_t current_time = time(0);
and measure off of that ... but is there a preferred way ... mainly this is a best practices kind of question ....
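For concreteness, something like this minimal sketch (difftime() being the standard way to take the difference of two time_t values):

#include <time.h>

int main()
{
    time_t start = time(0);

    /* ... do the work being timed ... */

    double elapsed = difftime(time(0), start); /* elapsed wall-clock seconds */
    return 0;
}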
Couple of things..
If you want to ensure that the function takes a fixed time X to complete, irrespective of how long the actual code within the function takes, do something like this (highly pseudo-code):
#include <sys/time.h>   // gettimeofday(), struct timeval
#include <sys/select.h> // select()

class Delay
{
public:
    Delay(long long delay) : _delay(delay) // in microseconds
    {
        ::gettimeofday(&_start, NULL); // grab the start time...
    }

    ~Delay()
    {
        struct timeval end;
        ::gettimeofday(&end, NULL); // grab the end time

        long long ts = (long long)_start.tv_sec * 1000000 + _start.tv_usec;
        long long tse = (long long)end.tv_sec * 1000000 + end.tv_usec;
        long long diff = tse - ts;

        if (diff < _delay)
        {
            // need to sleep for the difference...
            // do this using select with no file descriptors; its timeout
            // is a struct timeval (the same type gettimeofday fills in)
            struct timeval tv;
            diff = _delay - diff; // calculate the time left to sleep
            tv.tv_sec = diff / 1000000;
            tv.tv_usec = diff % 1000000;
            select(0, NULL, NULL, NULL, &tv);
            // should only get here when this times out...
        }
    }

private:
    struct timeval _start;
    long long _delay; // requested minimum duration, in microseconds
};
Then define an instance of this Delay class at the top of the function you want to delay - that should do the trick... (this code is untested and could have bugs in it; I just typed it to give you an idea..)
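For example, usage would look something like this (again untested; myFunc is just a placeholder name):

void myFunc()
{
    Delay d(500000); // ensure myFunc() takes at least 0.5 seconds overall

    /* do the real work here... */

}   // d's destructor runs here and sleeps away any remaining time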