
Computing algorithm running time in C

I am using the time.h library in C to find the time taken to run an algorithm. The code structure is roughly as follows:

#include <stdio.h>
#include <time.h>

int main()
{
  clock_t start, end, diff;

  start = clock();
  // ALGORITHM COMPUTATIONS
  end = clock();
  diff = end - start;
  printf("%ld\n", (long)diff);
  return 0;
}

The values for start and end are always zero. Is it that the clock() function doesn't work? Please help. Thanks in advance.


Not that it doesn't work; in fact, it does. But it is not the right way to measure elapsed time, because clock() returns an approximation of the processor time used by the program. I am not sure about other platforms, but on Linux you should use clock_gettime() with the CLOCK_MONOTONIC clock, which will give you the real wall-clock time elapsed. You can also read the TSC, but be aware that it won't work if you have a multi-processor system and your process is not pinned to a particular core.

If you want to analyze and optimize your algorithm, I'd recommend using some performance measurement tools. I've been using Intel's VTune for a while and am quite happy with it. It will show you not only which part uses the most cycles, but also highlight memory problems, possible parallelism issues, etc. You may be very surprised by the results; for example, most of the CPU cycles might be spent waiting for the memory bus. Hope it helps!

UPDATE: Actually, if you run a later version of Linux, it might provide CLOCK_MONOTONIC_RAW, which is a hardware-based clock that is not subject to NTP adjustments. Here is a small piece of code you can use:

  • stopwatch.hpp
  • stopwatch.cpp
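The linked stopwatch files are not reproduced here, but a minimal C sketch of the clock_gettime() approach might look like the following. The busy loop is only a placeholder for the real algorithm, and CLOCK_MONOTONIC_RAW can be substituted for CLOCK_MONOTONIC where available:

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec start, end;

    // wall-clock start time; CLOCK_MONOTONIC is not affected by clock adjustments
    clock_gettime(CLOCK_MONOTONIC, &start);

    // ALGORITHM COMPUTATIONS (placeholder busy loop)
    volatile unsigned long sink = 0;
    for (unsigned long i = 0; i < 100000000UL; ++i)
        sink += i;

    clock_gettime(CLOCK_MONOTONIC, &end);

    // elapsed wall time in seconds
    double elapsed = (end.tv_sec - start.tv_sec)
                   + (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("%.6f s elapsed\n", elapsed);
    return 0;
}

On older glibc versions you may need to link with -lrt for clock_gettime().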


Note that clock() returns the execution time in clock ticks, as opposed to wall clock time. Divide a difference of two clock_t values by CLOCKS_PER_SEC to convert the difference to seconds. The actual value of CLOCKS_PER_SEC is a quality-of-implementation issue. If it is low (say, 50), your process would have to run for 20ms to cause a nonzero return value from clock(). Make sure your code runs long enough to see clock() increasing.
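As a small sketch of that conversion, with a placeholder busy loop standing in for the real work:

#include <stdio.h>
#include <time.h>

int main(void)
{
    clock_t start = clock();

    // stand-in for the real algorithm
    volatile unsigned long sink = 0;
    for (unsigned long i = 0; i < 100000000UL; ++i)
        sink += i;

    clock_t end = clock();

    // clock() counts ticks; dividing by CLOCKS_PER_SEC gives seconds
    printf("%.3f s of CPU time\n", (double)(end - start) / CLOCKS_PER_SEC);
    return 0;
}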


I usually do it this way:

clock_t start = clock();
clock_t end;

// algorithm computations

end = clock();
// divide the tick difference by CLOCKS_PER_SEC to get seconds
printf("%f\n", (double)(end - start) / CLOCKS_PER_SEC);


Consider the code below:

#include <stdio.h>
#include <time.h>

int main()
{
    clock_t t1, t2;
    t1 = t2 = clock();

    // loop until t2 gets a different value
    while(t1 == t2)
        t2 = clock();

    // print resolution of clock()
    printf("%f ms\n", (double)(t2 - t1) / CLOCKS_PER_SEC * 1000);

    return 0;
}

Output:

$ ./a.out 
10.000000 ms

It might be that your algorithm runs for a shorter amount of time than that. Use gettimeofday for a higher-resolution timer.
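A minimal sketch of the gettimeofday() approach (microsecond resolution, POSIX-only; the busy loop is just a placeholder for the real algorithm):

#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval start, end;

    gettimeofday(&start, NULL);

    // stand-in for the real algorithm
    volatile unsigned long sink = 0;
    for (unsigned long i = 0; i < 10000000UL; ++i)
        sink += i;

    gettimeofday(&end, NULL);

    // elapsed wall time in microseconds
    long usec = (end.tv_sec - start.tv_sec) * 1000000L
              + (end.tv_usec - start.tv_usec);
    printf("%ld us elapsed\n", usec);
    return 0;
}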
