What is the platform independent algorithm that returns a measurable value to test Moore's Law? [closed]

It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, visit the help center. Closed 11 years ago.

Just out of curiosity...

Is there a platform-independent algorithm that produces a comparable value, so that I can implement the algorithm on different machines that were introduced to the market every two years, and see how well the results fit Moore's Law by comparing the values the algorithm returns on those machines?


Most of the transistors that Intel and AMD put onto your CPU are there to speed it up one way or another, so a possible proxy for "how many transistors are on there?" is "how fast is it?". When people talk about Moore's law in relation to a CPU, it's usually performance they're talking about, even though that's not what Moore actually said.

Benchmarking a CPU is notoriously arbitrary, though. What weightings do you give to your various speed tests? Suppose that next year, Intel invents 20 new SIMD instructions and adds corresponding silicon to its chips to implement them. Unless your code uses those instructions, it has no way to notice they're there, so they won't affect your results and you won't report an increase in your performance-per-transistor index. Since they were invented after you wrote your code, you can't execute them explicitly, so the only way they'll get used is if an up-to-date compiler, with options targeting the new version of the CPU, finds code in your benchmark that it thinks will benefit from the new instructions. Not very reliable: you simply can't detect new transistors if you can't find a way to use them.
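To make that concrete: with GCC or Clang on x86 you can ask about the extensions your code already knows by name, via __builtin_cpu_supports(), but the list of names is frozen at compile time, so an extension invented later is invisible to the test. A minimal sketch:

    /* Sketch (GCC/Clang, x86 only): report which known SIMD extensions the
       running CPU supports. Feature names must be compile-time literals, so
       extensions invented after this code was written cannot be queried. */
    #include <stdio.h>

    int main(void) {
        __builtin_cpu_init(); /* initialize the CPU feature cache */
        printf("sse2:    %s\n", __builtin_cpu_supports("sse2")    ? "yes" : "no");
        printf("avx:     %s\n", __builtin_cpu_supports("avx")     ? "yes" : "no");
        printf("avx2:    %s\n", __builtin_cpu_supports("avx2")    ? "yes" : "no");
        printf("avx512f: %s\n", __builtin_cpu_supports("avx512f") ? "yes" : "no");
        return 0;
    }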

Performance of a single CPU core on simple benchmarks has in any case hit something of a roadblock in the last few years. CPU manufacturers are adding cores, and adding special-purpose instructions and silicon, so programs have more resources to draw on if they know how to use them, but boring old arithmetic isn't getting much faster. It's hard to know what special purposes CPU manufacturers will be adding transistors for in 5 or 10 years' time, but if you can predict that, then you could possibly write benchmarks now that will tell you when they've done it.

I don't know much about GPUs, but if you can somehow detect the number of GPU cores on your machine (counting parallel shaders and whatnot), that might actually be the best proxy for the raw number of transistors. I guess the number of transistors in each core goes up over time too, but the number of cores on modern graphics cards is rocketing, so that might account for the bulk of the new processing-related transistors. Whether that will still be the case in 5 or 10 years, again, who knows.
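One hedged way to get at that number is OpenCL's device query, which reports "compute units" rather than shader cores, so treat it as a coarse proxy at best. This sketch assumes an OpenCL runtime and the <CL/cl.h> header (on macOS it's <OpenCL/opencl.h>), linked with -lOpenCL:

    /* Sketch: ask the OpenCL driver how many compute units the first GPU has.
       Compute units are not literally cores or transistors, just a proxy. */
    #include <stdio.h>
    #include <CL/cl.h>

    int main(void) {
        cl_platform_id platform;
        cl_device_id device;
        cl_uint units = 0;

        if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS ||
            clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL) != CL_SUCCESS) {
            fprintf(stderr, "No OpenCL GPU found\n");
            return 1;
        }
        clGetDeviceInfo(device, CL_DEVICE_MAX_COMPUTE_UNITS,
                        sizeof(units), &units, NULL);
        printf("GPU compute units: %u\n", units);
        return 0;
    }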

Another big transistor count is RAM: presumably, for a given type of RAM, the number of transistors is pretty much proportional to capacity, and capacity at least is easily measured using OS-specific functions.
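As a hedged illustration of those OS-specific functions in C (sysconf() on POSIX systems, GlobalMemoryStatusEx() on Windows), here is a minimal sketch that reports installed physical RAM; exact availability of these calls varies by platform:

    /* Sketch: query installed physical RAM via OS-specific calls. */
    #include <stdio.h>

    #ifdef _WIN32
    #include <windows.h>
    static unsigned long long physical_ram_bytes(void) {
        MEMORYSTATUSEX status;
        status.dwLength = sizeof(status);
        GlobalMemoryStatusEx(&status);
        return status.ullTotalPhys;
    }
    #else
    #include <unistd.h>
    static unsigned long long physical_ram_bytes(void) {
        long pages     = sysconf(_SC_PHYS_PAGES); /* physical pages (Linux, macOS) */
        long page_size = sysconf(_SC_PAGE_SIZE);  /* bytes per page */
        return (unsigned long long)pages * (unsigned long long)page_size;
    }
    #endif

    int main(void) {
        printf("Physical RAM: %llu bytes\n", physical_ram_bytes());
        return 0;
    }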

If you stick an SSD in a machine, I bet you pile on the transistor count too. Is that the sort of thing you're interested in, though? Really, Moore's law was about single ICs, not the total contents of a beige (well, white or silver these days) box at a given price point.


Well, the algorithm could be really simple, like calculating FLOPS (floating-point operations per second): get the system time, perform a million floating-point operations, get the time again, and take the difference (or use the LINPACK benchmark, which is used to rate supercomputers). However, implementing this in a platform-independent way would be tricky.
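A minimal sketch of that idea in C, using only the standard library; the iteration count is an illustrative choice, clock() measures CPU time rather than wall time, and volatile is there to keep the compiler from deleting the loop, so the result is at best a rough single-core estimate:

    /* Crude FLOPS estimate: time a fixed number of floating-point operations. */
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        const long N = 100000000L;       /* 100 million iterations */
        volatile double x = 1.000000001; /* volatile: force real loads each pass */
        double acc = 0.0;

        clock_t start = clock();
        for (long i = 0; i < N; i++)
            acc += x * x;                /* ~2 floating-point ops per iteration */
        clock_t end = clock();

        double seconds = (double)(end - start) / CLOCKS_PER_SEC;
        printf("acc = %f, ~%.1f MFLOPS\n", acc, (2.0 * N / seconds) / 1e6);
        return 0;
    }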
