
The basic input form for floating-point numbers?

I have just discovered a fundamental difference between two input forms for floating-point numbers:

In[8]:= 1.5*^-334355//Hold//FullForm
1.5*10^-334355//Hold//FullForm
Out[8]//FullForm= Hold[1.5000000000000000000000000000000001`15.954589770191005*^-334355]
Out[9]//FullForm= Hold[Times[1.5`,Power[10,-334355]]]
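A quick way to quantify the structural difference without evaluating anything (a sketch, assuming the held expressions parse exactly as in the FullForm output above) is to count leaves: the *^ form is a single Real atom under Hold, while the *10^ form is a compound Times/Power expression.

LeafCount /@ {Hold[1.5*^-334355], Hold[1.5*10^-334355]}
(* expected: {2, 6}; one Real atom plus the Hold head, versus Hold, Times, 1.5, Power, 10, -334355 *)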

The two forms also differ greatly in memory and time consumption:

In[7]:= start = MaxMemoryUsed[];
1.5*^-33432242 // Timing
start = MaxMemoryUsed[] - start
1.5*10^-33432242 // Timing
MaxMemoryUsed[] - start

Out[8]= {1.67401*10^-16, 1.500000000000000*10^-33432242}

Out[9]= 0

Out[10]= {7.741, 1.500000000000000*10^-33432242}

Out[11]= 34274192
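Note also that even the fast *^ form does not produce a machine number here: the exponent is far below $MinMachineNumber, so the literal is stored as an arbitrary-precision Real carrying the machine-precision tag visible in the FullForm output above. A quick check (a sketch; the result is inferred from that precision mark):

{MachineNumberQ[1.5*^-33432242], Precision[1.5*^-33432242]}
(* expected: {False, 15.9546}, i.e. an arbitrary-precision Real tagged with MachinePrecision *)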

But I cannot find where the *^ form is documented. Is it really a basic input form for floating-point numbers? What about numbers in other bases?

And why is the second form so much more expensive?


Regarding the time and memory consumption: these are consequences of evaluation and have nothing to do with the different input forms. When the 10 is written explicitly, the power of 10 is computed with exact integer arithmetic, hence the time/memory cost. When we use machine precision from the start, the effect disappears:

In[1]:= MaxMemoryUsed[]
1.5*^-33432242 // Timing
MaxMemoryUsed[]
1.5*10.^-33432242 // Timing
MaxMemoryUsed[]

Out[1]= 17417696

Out[2]= {0., 1.500000000000000*10^-33432242}

Out[3]= 17417696

Out[4]= {0., 1.500000000043239*10^-33432242}

Out[5]= 17417696
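To see where the memory goes in the explicit-10 version, it helps to look at the exact intermediate value: 10^-33432242 evaluates to a Rational whose denominator has about 33 million digits. A rough sketch (re-evaluating it of course incurs the same cost as above, and the exact byte counts will vary by version and platform):

ByteCount[10^-33432242]      (* on the order of 1.4*10^7 bytes for the exact Rational alone *)
N[33432242*Log[2, 10]/8]     (* back-of-envelope estimate: digits times bits per digit, divided by 8 *)

The *^ literal, by contrast, never builds this exact number; as the FullForm output in the question shows, the parser turns it directly into a Real.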

HTH
