
Performance of 32-bit integers in a 64-bit environment (C++)

We've started compiling both 32- and 64-bit versions of some of our applications. One of the guys on my project is encouraging us to switch all of our 32-bit integers to their 64-bit equivalents, even if the values are guaranteed to fit in a 32-bit space. For example, I've got a value that is guaranteed to never exceed 10,000, which I'm storing in an unsigned int. His recommendation is to switch this to a size_t so that it expands to 64 bits in a 64-bit environment, even though we'll never need the extra space. He says that using 64-bit variables will speed up the application regardless of the values stored in each variable. Is he right? It's turning out to be a lot of work, and I'm not anxious to put in the effort if it doesn't actually make a difference.
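To make the suggestion concrete, the change would look roughly like this (the variable name is invented for illustration):

    // Roughly the kind of change being proposed (illustrative name only):
    unsigned int retryLimit = 10000;   // current: guaranteed to stay under 10,000
    // size_t    retryLimit = 10000;   // proposed: 32 bits in the x86 build, 64 bits in the x64 build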

We're using Microsoft Visual C++ 2008. I'm kinda hoping for a more general, platform-independent answer though.

So what do you think? Are we right to spend time changing our data types for performance reasons rather than range reasons?


I think you have a huge case of premature optimization staring you in the face. Never make micro-level changes like this across your application until a profiler has definitively told you that they are a source of significant performance problems.

Otherwise you'll spend a lot of time fixing non-problems.


Well, if the 32-bit operations are taking place in 64-bit registers, some extra instructions do need to be emitted to handle things like setting the carry/overflow flags correctly. I'd be surprised if you realised any noticeable performance improvement, though. I can all but guarantee there are far worse bottlenecks in your program.
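If you want to measure rather than argue, a rough sketch along these lines (not a rigorous benchmark; the iteration count and the use of clock() are arbitrary choices) will show whether the counter width makes any measurable difference on your hardware:

    #include <ctime>
    #include <iostream>

    // Time a trivial loop using a counter of type T. volatile discourages the
    // compiler from optimizing the loop away entirely.
    template <typename T>
    double time_sum(T n)
    {
        std::clock_t start = std::clock();
        volatile T sum = 0;
        for (T i = 0; i < n; ++i)
            sum = sum + i;
        return double(std::clock() - start) / CLOCKS_PER_SEC;
    }

    int main()
    {
        const unsigned int n = 500000000u;   // arbitrary iteration count
        std::cout << "32-bit counter: " << time_sum<unsigned int>(n) << " s\n";
        std::cout << "64-bit counter: " << time_sum<unsigned long long>(n) << " s\n";
    }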


Firstly, using 64-bit ints instead of 32-bit ints in a 64-bit environment will not, in general, speed anything up. Depending on the context and on the compiler's abilities, it might actually slow things down. Normally you should prefer int/unsigned int for storing integral values in your program, switching to other types only when really necessary. In the end, the definitive answer to the question can only be obtained by an actual experiment, since it depends on too many variables.

Secondly, anyone who advises using size_t for that purpose (as a generic unsigned type) should be immediately denied access to the code base and sent off to take some C/C++ classes before being allowed to touch the code again.
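To make the point concrete (the names are invented, but the distinction is real): size_t exists to describe object sizes and array indices, not to serve as a generic unsigned integer, so these declarations do not say the same thing:

    #include <cstddef>    // std::size_t
    #include <iostream>

    int main()
    {
        // size_t is meant for object sizes and array indexing:
        std::size_t bufferSize = 4096;

        // A plain unsigned int documents "small non-negative value" and stays
        // 32 bits on typical 64-bit ABIs (both LP64 and LLP64):
        unsigned int retryCount = 10000;

        // Pressing size_t into service as a "generic unsigned" compiles, but its
        // width now changes between 32- and 64-bit builds for no benefit:
        std::size_t retryCountMisused = 10000;

        std::cout << sizeof(bufferSize) << ' ' << sizeof(retryCount) << ' '
                  << sizeof(retryCountMisused) << '\n';
        return 0;
    }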


Don't do it. It just means that the CPU won't be able to hold as much data in cache, and the penalty for going out to main memory is a lot higher than most other costs in your code.
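For a rough sense of scale (the struct is hypothetical; the arithmetic is the point), doubling the width of every field doubles the working set:

    #include <cstddef>
    #include <iostream>

    // Hypothetical record: four counters that never exceed 10,000.
    struct Counters32 { unsigned int a, b, c, d; };         // typically 16 bytes
    struct Counters64 { unsigned long long a, b, c, d; };   // typically 32 bytes

    int main()
    {
        const std::size_t n = 1000000;   // a million records
        std::cout << "32-bit fields: " << n * sizeof(Counters32) / (1024 * 1024) << " MiB\n";
        std::cout << "64-bit fields: " << n * sizeof(Counters64) / (1024 * 1024) << " MiB\n";
        // Twice the bytes per record means half as many records per cache line,
        // and more misses once the working set outgrows the cache.
    }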


The idea that using a 64-bit integer instead of a 32-bit integer will speed things up is a myth. The more important thing is to use the appropriate type in your code. For instance, when referring to the size of an array or data structure, use a size_t, because that's what size_t is supposed to represent. If you're storing some ordinary piece of data, use an int and not a size_t, because that's what int is meant to describe.

Don't just change everything to size_t because it will "automatically become 64-bit"; that will most likely bring no improvement whatsoever. It will increase memory overhead, which will probably slow the application down through extra cache misses caused by the larger memory footprint. It will also quite possibly introduce unexpected bugs.
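A small sketch of what "the appropriate type" looks like in practice (the data here is made up):

    #include <cstddef>
    #include <iostream>
    #include <vector>

    int main()
    {
        std::vector<int> samples;                 // the stored values are just ints
        for (int value = 0; value < 100; ++value)
            samples.push_back(value);

        // Container sizes and indices are what size_t is for:
        std::size_t count = samples.size();

        long long total = 0;
        for (std::size_t i = 0; i < count; ++i)   // size_t index: no signed/unsigned warnings
            total += samples[i];

        std::cout << count << " samples, total " << total << '\n';
    }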


I'd guess (and it's just a guess) that you'll likely see no improvement in performance, and may see a slight drop if the increased memory usage causes some accesses to lose locality of reference and pushes things out of the cache more often than before.

As JaredPar says, unless there's a factual reason to do this, or you actually need the increased range of the larger ints, it's probably a waste of time.
