
Reasoning behind having to specify L for long, F,D for float, double

A few related questions here.

As per the title, why is the suffix required if we are already declaring the variable's type as long, float, or double? Doesn't the compiler evaluate the variable's type at compile time?

Java considers all integral literals as int - is this to lessen the blow of inadvertent memory waste? And all floating-point literals as double - to ensure the highest precision?
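
To make the question concrete (my illustration, not part of the original post), here is how the compiler treats unsuffixed literals:

// long tooBig = 10000000000;  // does not compile: "integer number too large" - the literal itself is an int
long big = 10000000000L;       // OK: the L suffix makes it a long literal
// float lossy = 1.5;          // does not compile: 1.5 is a double; narrowing could lose precision
float f = 1.5f;                // OK: the f suffix makes it a float literal
double d = 1.5;                // OK: floating-point literals are double by default
int i = 100;                   // OK: integral literals are int by default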


When you have a constant, there are subtle differences between values which look the same but are not. Additionally, since autoboxing was introduced, the type of the literal can give you a very different result as well.
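
For example (my sketch of the autoboxing point, using plain java.util collections): a boxed literal keeps its exact type, so an Integer key never matches a Long key:

// assumes: import java.util.HashMap; import java.util.Map;
Map<Long, String> map = new HashMap<>();
map.put(1L, "one");
System.out.println(map.get(1L)); // "one" - the literal boxes to Long
System.out.println(map.get(1));  // null  - the literal boxes to Integer, which never equals a Long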

Consider what you get if you multiply 0.1 by 0.1 as a float, or as a double and then convert the result to a float.

// Note: these snippets assume "import java.math.BigDecimal;" and live inside a method.
float a = (float) (0.1 * 0.1); // multiply in double precision, then round to float once
float b = 0.1f * 0.1f;         // round 0.1 to float first, then multiply in float
System.out.println("a= "+new BigDecimal(a));
System.out.println("b= "+new BigDecimal(b));
System.out.println("a == b is " + (a == b));

prints

a= 0.00999999977648258209228515625
b= 0.010000000707805156707763671875
a == b is false

Now compare what you get if you use either float or int to perform a calculation.

float a = 33333333f - 11111111f; // 33333333 is too large to represent exactly as a float
float b = 33333333 - 11111111;   // exact int subtraction, then the result is converted to float
System.out.println("a= "+new BigDecimal(a));
System.out.println("b= "+new BigDecimal(b));
System.out.println("a == b is " + (a == b));

prints

a= 22222220
b= 22222222
a == b is false

Compare int and long

long a = 33333333 * 11111111;   // int multiplication overflows before the result is widened to long
long b = 33333333L * 11111111L; // long multiplication, no overflow
System.out.println("a= "+new BigDecimal(a));
System.out.println("b= "+new BigDecimal(b));
System.out.println("a == b is " + (a == b));

prints

a= -1846840301
b= 370370362962963
a == b is false

Compare double with long

double a = 333333333333333333L / 333333333L; // integer division of longs, then widened to double
double b = 333333333333333333D / 333333333D; // floating-point division; the 18-digit literal is not exact as a double
System.out.println("a= "+new BigDecimal(a));
System.out.println("b= "+new BigDecimal(b));
System.out.println("a == b is " + (a == b));

prints

a= 1000000001
b= 1000000000.99999988079071044921875
a == b is false

In summary, it's possible to construct a situation where using int, long, double, or float will produce a different result compared with using another type.


This becomes important when you do more than a simple assignment. If you take

float x = (float) (0.1 * 3.0); // cast required: 0.1 * 3.0 is a double expression and will not narrow implicitly

it makes a difference whether the computer does the multiplication in double precision and then converts to single precision, or converts the numbers to single precision first and then multiplies.

Edit: not in this specific case of 0.1 and 3.0, but once your calculations become complex enough, you will run into precision issues that show differences between float and double. Making it explicit to the compiler whether the values are supposed to be doubles or floats avoids the ambiguity.
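
A minimal sketch of that effect (my illustration, not from the original answer): repeatedly adding 0.1 lets the rounding errors compound until the float result is visibly wrong:

float fSum = 0f;
double dSum = 0d;
for (int i = 0; i < 1_000_000; i++) {
    fSum += 0.1f; // each addition rounds to 24 bits of significand
    dSum += 0.1;  // each addition rounds to 53 bits of significand
}
System.out.println("float sum:  " + fSum); // drifts noticeably from 100000
System.out.println("double sum: " + dSum); // very close to 100000, though still not exact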


I believe it's simply to avoid ambiguity. How would the compiler know whether 1.5 is meant to be a float or a double if there were no default for it to fall back on? As for evaluating variables, please note that variables != literals.

Edit 1
Regarding some comments: I believe there are times when you would not want the compiler to silently convert the literal on the right to the variable type on the left.
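
For instance (a hypothetical illustration of that point): if the compiler silently narrowed the double literal 0.1 to a float, the precision loss would be invisible in the source. Java forces you to spell it out:

// assumes: import java.math.BigDecimal;
// float f = 0.1;        // does not compile: possible lossy conversion from double to float
float f = (float) 0.1;   // the cast makes the narrowing explicit
System.out.println(new BigDecimal(f));   // 0.100000001490116119384765625
System.out.println(new BigDecimal(0.1)); // 0.1000000000000000055511151231257827021181583404541015625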

Edit 2
And of course there's

public void foo(int bar) {
  //...
}

public void foo(long bar) {
  //...
}

// ... in some other method:
foo(20);  // which foo is called?
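
For the record (my addition, since the answer leaves the question open): foo(20) resolves to foo(int), because an unsuffixed 20 is an int literal; you would have to write foo(20L) to select the long overload.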