CGFloat argument type incorrectly traces out with zero value...why?
Really puzzled why the following is happening and hope someone can help.
// CODE EXCERPT
// here xPos correctly traces out, always changing
CGFloat xPos = currentTouchPosition.x - lastTouch.x;
NSLog(@"touchesMoved and xPos = %f", xPos);
if (startTouchPosition.x < currentTouchPosition.x)
    [self adjustTimer:xPos];
else

// here newInterval always traces as 0...why????
-(void) adjustTimer:(CGFloat)newInterval
{
    NSLog(@" adjustTimer and newInterval = %f", newInterval);
I just ran into this and I think I figured out the cause. As mentioned, declaring the parameter as double can work around the problem, depending on how your code is compiled, but there is an underlying reason why it happens.
This Stack Overflow page explains the difference between CGFloat and float. Effectively, CGFloat is a typedef "wrapper" around either float or double. Which one it actually is depends on the header where CGFloat is defined (CGBase.h) that your code was compiled against. According to the aforementioned page, you can view this header by right-clicking CGFloat in the code editor and choosing "Jump to Definition". You'll see something like this:
/* Definition of `CGFLOAT_TYPE', `CGFLOAT_IS_DOUBLE', `CGFLOAT_MIN', and
`CGFLOAT_MAX'. */
#if defined(__LP64__) && __LP64__
# define CGFLOAT_TYPE double
# define CGFLOAT_IS_DOUBLE 1
# define CGFLOAT_MIN DBL_MIN
# define CGFLOAT_MAX DBL_MAX
#else
# define CGFLOAT_TYPE float
# define CGFLOAT_IS_DOUBLE 0
# define CGFLOAT_MIN FLT_MIN
# define CGFLOAT_MAX FLT_MAX
#endif
/* Definition of the `CGFloat' type and `CGFLOAT_DEFINED'. */
typedef CGFLOAT_TYPE CGFloat;
#define CGFLOAT_DEFINED 1
Notice the line:
# define CGFLOAT_IS_DOUBLE 1
This constant, set inside the conditional block, defines whether a CGFloat is a double or a float, which, per #if defined(__LP64__) && __LP64__, depends on whether the target architecture is 64-bit. When you think about it, that is a sensible way to increase precision on architectures that can afford it.
All goes well until you mix code compiled with this flag on with code compiled with it off (such as a binary library linked against other code), effectively mixing 64-bit-compiled code with 32-bit-compiled code. That causes callers to pass doubles to methods whose implementations expect a float. Since this all happens at the byte level, no error is thrown; the callee simply reads the wrong bytes of the value, and what is left often describes nothing of your original number, so it reads as 0.
Beware that this can happen easily when mixing code compiled for OS X with code compiled for iOS, but can even happen with code compiled just for iOS, when you compile a binary for both the Simulator and Device.
OK, resolved by changing the method argument type from CGFloat to double, but I'm confused as to why this works.