
Normalization of speed for testing on different multicore processors

I want to measure the run time of some simple C programs on different multi-core processors. But as we know, with the advancement of technology, new processors incorporate more techniques for faster computation, such as higher clock speeds. How can I normalize away such speed differences (i.e., filter out the effect of every processor advance except the core count), since I only want results based on the number of cores in the processor?


Under Linux, you can boot with the kernel command line parameter maxcpus=N to limit the machine to only N CPUs. See Documentation/kernel-parameters.txt in the kernel source for details.

Most BIOS environments also have the ability to turn off hyperthreading; depending upon your benchmarks, HT may speed up or slow down your tests; being in control of HT would be ideal.


Decide on a known set of reference hardware, run some sort of repeatable reference benchmark against this, and get a good known value to compare to. Then you can run this benchmark against other systems to figure out how to scale the values you get from your target benchmark runs.

The closer your reference benchmark is to your actual application, the more accurate the results of your scaling will be. You could have a single deterministic run (single code path, maybe average of multiple executions) of your application used as your reference benchmark.


If I understand you correctly, you are trying to find a measurement approach that lets you separate the effect of scaling the number of cores from single-core performance improvements. I am afraid that is not easily possible. For example, if you compare a multi-core system against a single core of that same system, the correlation is non-linear, because there are shared resources such as the memory bus: a single core running alone can use the full memory bandwidth, while in the multi-core case it has to share it. Similar arguments apply to many other shared resources, such as caches, buses, I/O capabilities, ALUs, etc.


Your issue is with the automatic scaling of core frequency based on the number of active cores at any given time. For instance, six-core AMD Phenom chips run at 3.4 GHz (or thereabouts), and if your application creates more than three threads the clock drops to 2.8 GHz (or similar). Intel, on the other hand, uses a set of heuristics to determine the right frequency at any given moment. However, you can always turn these settings off in the BIOS, and then the results will be comparable, differing only in clock frequency. Usually, people measure GFLOPS to get comparable results.

