Type coercion in C: unsigned int to float
I'm communicating serially between a host PC and an embedded processor. On the embedded side, I need to parse character strings for floating-point and integer data. What I am currently doing is something along these lines:
inline float32* fp_unpack(float32* dest, volatile char* str) {
    Uint32 temp = (Uint32)str[3]<<24;
    temp |= (Uint32)str[2]<<16;
    temp |= (Uint32)str[1]<<8;
    temp |= (Uint32)str[0];
    *dest = (float32)temp;
    return dest;
}
Where str holds four characters, each representing one byte of the float, ordered little-endian.
As an example, I'm trying to extract the number 100.0 from str. I've verified the contents of str are:
s[0]: 0x00, s[1]: 0x00, s[2]: 0x20, s[3]: 0x41,
which is the 32 bit floating point representation of 100.0. Furthermore, I've verified that the function successfully sets temp to 0x41200000. However, dest ends up being 0x4e824000. I know the problem arises from the line: *dest = (float32)temp, which I hoped would simply copy the bits from temp to dest, with a typecast to make the compiler happy.
However, I've realized that this won't be the case, since the operation: float x = (float)4/3 actually converts 4 to 4.0, ie changing the bits.
How do I coerce the bits in temp into dest?
Thanks in advance
edit: Note that 0x41200000 as an integer is 1092616192, which, converted to float, has the bit pattern 0x4e824000
You need to cast the pointers, not the values. Casting the value simply converts the int to the numerically nearest float. Try:
*dest = *((float32*)&temp);
The portable way, which does not invoke undefined behavior by violating the aliasing rules:

float f;
uint32_t i;   /* holds the assembled bit pattern */
memcpy(&f, &i, sizeof f);
Here is one more solution:

union test {
    float f;
    unsigned int i;
} x;

float flt = 100.0;
unsigned int uint;

x.f = flt;
uint = x.i;

Now uint has the same bit pattern that was in f.
Isn't the hex (IEEE 754) representation of float 100.0 actually 0x42c80000?