Handling overflow in processor timing using GNU clock()

I need to incrementally time some C code on a 32-bit Linux system. I am using the GNU clock() function for this. Here is the skeleton of my code with the relevant clock bits:

clock_t start, end;
double elapsed;

/* Init things */

start = clock();
while(terminationNotMet) { 

    /* Do some work. */

    end = clock();
    elapsed = ((double) (end - start)) / CLOCKS_PER_SEC;
    fprintf(fp, "%lf %d\n", elapsed, someResults);
}

Now the problem is that clock_t is really just a long int and this code runs for quite a while. The elapsed number of clock ticks returned by clock() eventually overflows and the resulting data is useless. Any thoughts on some workarounds or another method of timing CPU time? (Using wall-clock time is not an option, as these jobs run niced on multiuser systems.)


Unfortunately this is a bug in glibc: clock_t is a signed long rather than an unsigned long, so once the tick count overflows (after roughly 36 minutes of CPU time on 32-bit Linux, where CLOCKS_PER_SEC is 1000000) the differences become meaningless. Casting the values to unsigned long before subtracting them should work, but it's an ugly hack.
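
A rough sketch of that cast workaround, assuming a 32-bit clock_t and the usual CLOCKS_PER_SEC of 1000000; the unsigned difference stays correct as long as fewer than ULONG_MAX ticks (about 71 minutes of CPU time) pass between the two calls:

#include <stdio.h>
#include <time.h>

int main(void)
{
    clock_t start, end;
    double elapsed;

    start = clock();

    /* ... do some work ... */

    end = clock();
    /* Unsigned subtraction wraps modulo ULONG_MAX + 1, so the difference
       is still right even after the signed tick count has gone negative,
       provided fewer than ULONG_MAX ticks elapsed in between. */
    elapsed = (double)((unsigned long)end - (unsigned long)start) / CLOCKS_PER_SEC;

    printf("%f\n", elapsed);
    return 0;
}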

A better solution would be to use the modern clock_gettime function with the CLOCK_PROCESS_CPUTIME_ID clock. It gives nanosecond-resolution results instead of the coarse resolution clock offers, and the seconds field of the result takes decades of CPU time to overflow.
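
A minimal sketch, assuming a Linux system that provides CLOCK_PROCESS_CPUTIME_ID (older glibc may also need -lrt at link time); the helper name cpu_seconds is just for illustration:

#include <stdio.h>
#include <time.h>

/* CPU time consumed by this process, in seconds.  Unaffected by nice
   levels or other jobs on the machine, unlike wall-clock time. */
static double cpu_seconds(void)
{
    struct timespec ts;
    if (clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts) != 0)
        return -1.0;                        /* clock not available */
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    double start = cpu_seconds();

    /* ... do some work ... */

    printf("%f seconds of CPU time\n", cpu_seconds() - start);
    return 0;
}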


How about updating the start time on each iteration:

elapsed = 0.0;
start = clock();
while(terminationNotMet) { 

    /* Do some work. */

    end = clock();
    /* Accumulate only the per-iteration delta, so the subtraction stays
       far below the point where clock_t overflows. */
    elapsed += ((double) (end - start)) / CLOCKS_PER_SEC;
    fprintf(fp, "%lf %d\n", elapsed, someResults);
    start = end;
}
