
Programming language with arbitrary but fixed precision integers?

Are there any programming languages out there that support n-bit integer types for arbitrary n as primitives? That is, is there a language where I could write something to the effect of

int[137 bits] n = 0;

Note that this isn't the same as asking if there's a language with arbitrary-precision integers in them. I'm looking specifically for something where I can have fixed-precision integers for any particular fixed precision I'd like.


Verilog would express that as reg [136:0] n. Hardware description languages (HDLs) all give you similar capabilities. You can use Verilog as a scripting language as well, but that's not really its design center, and it won't give you the performance you could get from a BigInt-style integer in a regular programming language.


Ada compilers allow you to specify the size of integer declarations as lower and upper bounds. I don't know whether they will accept arbitrarily large bounds, or whether they will generate code for them. It isn't exactly hard code to generate: multi-precision adds are easy enough with multiple fixed-width "add carry" instructions, and multiplies/divides are slow enough that a subroutine call into a library doesn't affect performance noticeably.
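To make that reasoning concrete, here is a minimal sketch in Java (chosen purely for illustration; a compiler would emit the equivalent machine instructions directly) of a multi-precision add: two 128-bit values, each held as a pair of 64-bit words, added with an explicit carry.

// Minimal sketch: add two 128-bit unsigned values, each stored as
// { low 64 bits, high 64 bits }, propagating the carry by hand.
// This mirrors what a chain of "add with carry" instructions does.
final class Add128 {
    static long[] add(long aLo, long aHi, long bLo, long bHi) {
        long lo = aLo + bLo;
        // Carry out of the low word: the unsigned sum wrapped around.
        long carry = Long.compareUnsigned(lo, aLo) < 0 ? 1 : 0;
        long hi = aHi + bHi + carry;
        return new long[] { lo, hi };
    }

    public static void main(String[] args) {
        long[] sum = add(-1L, 0L, 1L, 0L);         // (2^64 - 1) + 1 = 2^64
        System.out.println(sum[0] + " " + sum[1]); // prints "0 1"
    }
}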


Ada allows you to declare a type as a range of Integer, which you could use to implement your requirement for small values of "arbitrary".

COBOL supports base-10 numbers with an arbitrary but fixed precision: you declare the number of digits in the PICTURE clause.

It would be a straightforward task to create a wrapper class for Java's BigInteger (or similar) that did what you want; e.g. truncated the result of an arithmetic operation, or threw a custom "overflow" exception.
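A minimal sketch of such a wrapper, under the assumption that "n-bit" means unsigned arithmetic reduced modulo 2^n; the class and method names (FixedInt, addExact, ...) are made up for illustration:

import java.math.BigInteger;

// Hypothetical wrapper: an immutable n-bit unsigned integer backed by BigInteger.
// Results are reduced modulo 2^bits (truncated), the way a fixed-width
// hardware register would behave.
final class FixedInt {
    private final int bits;
    private final BigInteger value;

    FixedInt(int bits, BigInteger value) {
        this.bits = bits;
        this.value = value.mod(BigInteger.ONE.shiftLeft(bits)); // truncate to n bits
    }

    FixedInt add(FixedInt other) {
        requireSameWidth(other);
        return new FixedInt(bits, value.add(other.value));
    }

    FixedInt multiply(FixedInt other) {
        requireSameWidth(other);
        return new FixedInt(bits, value.multiply(other.value));
    }

    // Variant that throws a custom-style overflow error instead of truncating.
    FixedInt addExact(FixedInt other) {
        requireSameWidth(other);
        BigInteger sum = value.add(other.value);
        if (sum.bitLength() > bits) throw new ArithmeticException("overflow");
        return new FixedInt(bits, sum);
    }

    private void requireSameWidth(FixedInt other) {
        if (other.bits != bits) throw new IllegalArgumentException("width mismatch");
    }

    @Override public String toString() { return value.toString(); }

    public static void main(String[] args) {
        FixedInt n = new FixedInt(137, BigInteger.ZERO);  // int[137 bits] n = 0;
        FixedInt max = new FixedInt(137,
                BigInteger.ONE.shiftLeft(137).subtract(BigInteger.ONE));
        // Adding 1 to the 137-bit maximum wraps around to 0.
        System.out.println(n + " / " + max.add(new FixedInt(137, BigInteger.ONE)));
    }
}

As the question notes, this gives the fixed-width semantics but not a primitive type: the wrapper pays BigInteger's allocation and call overhead on every operation.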


If I were to design one, this would be the semantics:

fixed a, b;
a = 123 45;
b = 4 12;

print(a + b)

This would output 127.57, or 127,57 depending on the selected locale. It would not complicate parsing very much: a single space between the integer and fractional parts is easy to parse and distinct enough to avoid confusion. It would also allow setting the precision right from the (constant) definition. Combining fixed numbers of different precision with +, -, *, or / would raise an out-of-range exception.

A type-checked language could go for a SQL analog, something like

fixed(23,4) a

To define a variable 'a' with 23 digits of precision and 4 decimals (let's not repeat SQL's confusing mistake here).
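In Java as it exists today, the nearest equivalent is probably java.math.BigDecimal with a fixed scale and a checked precision. A minimal sketch, assuming the SQL-style reading of fixed(23,4) as 23 significant digits with 4 of them after the decimal point (the helper name fixed23_4 is made up):

import java.math.BigDecimal;
import java.math.RoundingMode;

// Sketch: emulate a hypothetical fixed(23,4) declaration with BigDecimal.
// Assumption: 23 total digits of precision, 4 of them after the decimal point.
final class FixedDecimalDemo {
    static BigDecimal fixed23_4(String literal) {
        // RoundingMode.UNNECESSARY rejects literals with more than 4 decimals.
        BigDecimal v = new BigDecimal(literal).setScale(4, RoundingMode.UNNECESSARY);
        if (v.precision() > 23) throw new ArithmeticException("out of range for fixed(23,4)");
        return v;
    }

    public static void main(String[] args) {
        BigDecimal a = fixed23_4("123.45");
        BigDecimal b = fixed23_4("4.12");
        System.out.println(a.add(b)); // prints 127.5700 (scale is preserved)
    }
}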

In a world where money is everything and (binary) floating point is by definition inaccurate and unintuitive, I wonder why this is not standard.


I know of none. There are some that support big integers as primitives, but you explicitly didn't ask about that. I suspect one problem is that most platforms aren't bit-addressable. I imagine you could write a class that implements n-byte integers, but then it wouldn't be primitive. Curious to know your application...
