
How are floating point numbers stored in memory?

I've read that they're stored in the form of a mantissa and an exponent.

I've read this document but I could not understand anything.


To understand how they are stored, you must first understand what they are and what kind of values they are intended to handle.

Unlike integers, a floating-point value is intended to represent extremely small values as well as extremely large. For normal 32-bit floating-point values, this corresponds to values in the range from 1.175494351 * 10^-38 to 3.40282347 * 10^+38.

Clearly, using only 32 bits, it's not possible to store every digit in such numbers.

When it comes to the representation, you can see every normal floating-point number as a value in the range 1.0 to (almost) 2.0, scaled with a power of two (a short sketch after the list demonstrates this decomposition). So:

  • 1.0 is simply 1.0 * 2^0,
  • 2.0 is 1.0 * 2^1, and
  • -5.0 is -1.25 * 2^2.
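
To make this concrete, here is a minimal C sketch (my own illustration, not part of the explanation) that splits a value into this form using the standard frexp() function. frexp() returns a mantissa in [0.5, 1.0), so the sketch doubles it and lowers the exponent by one to match the [1.0, 2.0) convention used here:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double values[] = { 1.0, 2.0, -5.0 };
        for (int i = 0; i < 3; i++) {
            int exp;
            double m = frexp(values[i], &exp);  /* values[i] == m * 2^exp */
            printf("%g = %g * 2^%d\n", values[i], 2.0 * m, exp - 1);
        }
        return 0;
    }

This prints 1 = 1 * 2^0, 2 = 1 * 2^1 and -5 = -1.25 * 2^2, matching the list above.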

So, what is needed to encode this, as efficiently as possible? What do we really need?

  • The sign of the value.
  • The exponent
  • The value in the range 1.0 to (almost) 2.0. This is known as the "mantissa" or the significand.

This is encoded as follows, according to the IEEE-754 floating-point standard (a field-extraction sketch follows the list).

  • The sign is a single bit.
  • The exponent is stored as an unsigned integer; for 32-bit floating-point values, this field is 8 bits. 1 represents the smallest exponent and "all ones - 1" the largest. (0 and "all ones" are used to encode special values, see below.) The value in the middle (127 in the 32-bit case) represents an exponent of zero; this offset is known as the bias.
  • When looking at the mantissa (the value between 1.0 and (almost) 2.0), one sees that all possible values start with a "1" (both in the decimal and the binary representation), so there is no point in storing that leading bit. The rest of the binary digits are stored in an integer field; in the 32-bit case this field is 23 bits.
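
Here is a minimal sketch of that extraction (my own illustration, assuming 32-bit IEEE 754 floats), copying the float's bits into an integer and masking out the three fields:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        float f = -5.0f;
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);  /* well-defined bit reinterpretation */

        unsigned sign     = (unsigned)(bits >> 31);           /* 1 bit */
        unsigned exponent = (unsigned)((bits >> 23) & 0xFF);  /* 8 bits, bias 127 */
        unsigned mantissa = (unsigned)(bits & 0x7FFFFF);      /* 23 bits, implicit leading 1 */

        /* for -5.0f this prints: sign=1 exponent=129 (unbiased 2) mantissa=0x200000 */
        printf("sign=%u exponent=%u (unbiased %d) mantissa=0x%06x\n",
               sign, exponent, (int)exponent - 127, mantissa);
        return 0;
    }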

In addition to the normal floating-point values, there are a number of special values (constructed in the sketch after this list):

  • Zero is encoded with both the exponent and mantissa as zero. The sign bit is used to represent "plus zero" and "minus zero". A minus zero is useful when the result of an operation is extremely small, but it is still important to know from which direction the operation came.
  • plus and minus infinity -- represented using an "all ones" exponent and a zero mantissa field.
  • Not a Number (NaN) -- represented using an "all ones" exponent and a non-zero mantissa.
  • Denormalized numbers -- numbers smaller than the smallest normal number. They are represented using a zero exponent field and a non-zero mantissa. The special thing about these numbers is that the precision (i.e. the number of digits a value can contain) drops as the value becomes smaller, simply because there is no room for those digits in the mantissa.
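
The sketch below (my own, assuming 32-bit IEEE 754 floats) builds these special values directly from the bit patterns just described:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static float from_bits(uint32_t bits)
    {
        float f;
        memcpy(&f, &bits, sizeof f);
        return f;
    }

    int main(void)
    {
        printf("%g\n", from_bits(0x00000000));  /* +0.0: everything zero */
        printf("%g\n", from_bits(0x80000000));  /* -0.0: only the sign bit set */
        printf("%g\n", from_bits(0x7F800000));  /* +inf: all-ones exponent, zero mantissa */
        printf("%g\n", from_bits(0x7FC00000));  /* NaN: all-ones exponent, non-zero mantissa */
        printf("%g\n", from_bits(0x00000001));  /* smallest denormal: zero exponent field */
        return 0;
    }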

Finally, the following is a handful of concrete examples (all values are in hex); the sketch after the list reproduces them:

  • 1.0 : 3f800000
  • -1234.0 : c49a4000
  • 100000000000000000000000.0: 65a96816
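
A minimal sketch (my own, assuming 32-bit IEEE 754 floats) that prints these encodings:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static uint32_t to_bits(float f)
    {
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);
        return bits;
    }

    int main(void)
    {
        printf("%08x\n", (unsigned)to_bits(1.0f));      /* 3f800000 */
        printf("%08x\n", (unsigned)to_bits(-1234.0f));  /* c49a4000 */
        printf("%08x\n", (unsigned)to_bits(1e23f));     /* 65a96816 */
        return 0;
    }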


In layman's terms, it's essentially scientific notation in binary. The formal standard (with details) is IEEE 754.


    typedef struct {
        unsigned int mantissa_low:32;   /* low 32 bits of the 52-bit mantissa */
        unsigned int mantissa_high:20;  /* high 20 bits of the mantissa */
        unsigned int exponent:11;       /* biased exponent (bias 1023) */
        unsigned int sign:1;            /* sign bit */
    } tDoubleStruct;

    double a = 1.2;
    tDoubleStruct* b = reinterpret_cast<tDoubleStruct*>(&a);

This is an example of how the memory is set up when the compiler uses IEEE 754 double precision, which is the default for a C double, on little-endian systems (e.g. Intel x86). Keep in mind that bit-field layout is implementation-defined and the reinterpret_cast violates strict aliasing, so treat this as an illustration of the layout rather than portable code.

It shows the layout in C-style bit-field form; read the Wikipedia article on double precision to understand it in depth.
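
A safer way to do the same inspection, as a minimal C sketch of my own (memcpy sidesteps both the aliasing problem and the implementation-defined bit-field layout; it still assumes IEEE 754 doubles):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        double a = 1.2;
        uint64_t bits;
        memcpy(&bits, &a, sizeof bits);  /* copy the raw bit pattern */

        unsigned sign     = (unsigned)(bits >> 63);            /* 1 bit */
        unsigned exponent = (unsigned)((bits >> 52) & 0x7FF);  /* 11 bits, bias 1023 */
        uint64_t mantissa = bits & 0xFFFFFFFFFFFFFULL;         /* 52 bits */

        /* for 1.2 this prints: sign=0 exponent=1023 mantissa=0x3333333333333 */
        printf("sign=%u exponent=%u mantissa=0x%013llx\n",
               sign, exponent, (unsigned long long)mantissa);
        return 0;
    }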


There are a number of different floating-point formats. Most of them share a few common characteristics: a sign bit, some bits dedicated to storing an exponent, and some bits dedicated to storing the significand (also called the mantissa).

The IEEE floating-point standard attempts to define a single format (or rather set of formats of a few sizes) that can be implemented on a variety of systems. It also defines the available operations and their semantics. It's caught on quite well, and most systems you're likely to encounter probably use IEEE floating-point. But other formats are still in use, as well as not-quite-complete IEEE implementations. The C standard provides optional support for IEEE, but doesn't mandate it.


The mantissa represents the most significant bits of the number.

The exponent represents how many shifts are to be performed on the mantissa in order to get the actual value of the number.

The encoding specifies how the sign of the mantissa and the sign of the exponent are represented (the sign of the exponent basically determines whether the mantissa is shifted to the left or to the right).
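
A minimal sketch of that relationship, using the standard ldexp() function (my own example, not from the document):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* mantissa 1.25 shifted left twice (scaled by 2^2) gives 5.0 */
        printf("%g\n", ldexp(1.25, 2));   /* prints 5 */
        /* a negative exponent shifts right: 1.5 * 2^-1 = 0.75 */
        printf("%g\n", ldexp(1.5, -1));   /* prints 0.75 */
        return 0;
    }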

The document you refer to specifies IEEE encoding, the most widely used.


I have found the article you referenced quite illegible (and I DO know a little about how IEEE floats work). I suggest you try the Wiki version of the explanation. It's quite clear and has various examples:

http://en.wikipedia.org/wiki/Single_precision and http://en.wikipedia.org/wiki/Double_precision


It is implementation defined, although IEEE-754 is the most common by far.

To be sure that IEEE-754 is used (a small check is sketched after the list):

  • in C, use #ifdef __STDC_IEC_559__
  • in C++, use the std::numeric_limits<float>::is_iec559 constant
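
For instance, the C-side check can look like this minimal sketch (the C++ constant can be used the same way in a static_assert):

    #include <stdio.h>

    int main(void)
    {
    #ifdef __STDC_IEC_559__
        puts("this implementation guarantees IEEE 754 floats");
    #else
        puts("no IEEE 754 guarantee from this implementation");
    #endif
        return 0;
    }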

I've written some guides on IEEE-754 at:

  • In Java, what does NaN mean?
  • What is a subnormal floating point number?
