
Floating point vs integer performance

When developing a programming language, is distinguishing between ints and floats important? I noticed that while R does allow for a strict integer type, one mainly deals with the numeric type, which can hold either floats or ints. Are there performance benefits?

Edit

I'm also interested in learning when (if ever) there was a time period in which one would notice a difference in performance by choosing a float instead of an integer.


If you're talking about performance: for most purposes, there is no performance difference. You can probably still measure one in pure number-crunching code compiled to machine code, and in slightly less math-intensive code on hardware that doesn't have a dedicated FPU (i.e., mostly embedded stuff). But for Python (and many other languages), any difference in the hardware's performance is dwarfed, by many orders of magnitude, by the interpretation and boxing overhead. When numbers are treated as pointers to 16-byte structures, with addition being a dynamically dispatched method call made in response to an interpreted opcode, it doesn't matter whether the actual arithmetic takes one nanosecond or a hundred.
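
A quick, minimal sketch of this point using Python's standard `timeit` module (absolute numbers will vary by machine and interpreter version; the point is that the two timings come out nearly identical):

```python
import timeit

# Compare int vs float addition in CPython. Any hardware-level difference
# between the two is buried under interpreter dispatch and object boxing,
# so the measured times are dominated by overhead, not by the arithmetic.
int_time = timeit.timeit("a + b", setup="a, b = 12345, 67890",
                         number=10_000_000)
float_time = timeit.timeit("a + b", setup="a, b = 12345.0, 67890.0",
                           number=10_000_000)

print(f"int add:   {int_time:.3f} s for 10M iterations")
print(f"float add: {float_time:.3f} s for 10M iterations")
```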

Semantically, the difference between integers and (approximations of) reals is still, and always will be, a mathematical fact rather than an artifact of the current state of computer engineering. For example, floats in general (as opposed to implicit conversions of floats that happen to be exactly integers) will never make sense as indices.
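
One way to see this concretely in Python: a float is rejected as a sequence index even when its value is exactly an integer.

```python
xs = ["a", "b", "c"]

print(xs[2])        # fine: int index
try:
    print(xs[2.0])  # rejected even though 2.0 == 2
except TypeError as exc:
    print(exc)      # list indices must be integers or slices, not float
```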

