
How best to deal with warning C4305 when the type could change?

I'm using Ogre and NxOgre, both of which have a Real typedef that is either float or double depending on a compiler flag. As a result, most of our compiler warnings are now:

warning C4305: 'argument' : truncation from 'double' to 'Ogre::Real'

When initialising variables with 0.1, for example. Normally I would use 0.1f, but then if you change the compiler flag to double precision you get the reverse warning. I guess it's probably best to pick one and stick with it, but I'd like to write these in a way that works for either configuration if possible.

One fix would be to use #pragma warning (disable : 4305) in the files where it occurs, though I don't know whether more serious problems could be hidden by suppressing this warning. I understand I would also need to push and pop the warning state in header files so that the suppression doesn't spread across the rest of the code.
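A minimal sketch of that push/pop pattern, assuming MSVC (the variable here is just an example):

#pragma warning(push)
#pragma warning(disable : 4305)   // truncation from 'double' to 'Ogre::Real'

Ogre::Real gravity = 9.81;        // would otherwise trigger C4305 in float builds

#pragma warning(pop)              // restore the previous warning state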

Another is to create a macro based on the precision compiler flag, like:

#if OGRE_DOUBLE_PRECISION
    #define INIT_REAL(x) (x)
#else
    #define INIT_REAL(x) static_cast<float>( x )
#endif

which would require changing all the variable initialisation done so far, but at least it would be future-proof.
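For illustration, call sites would then look like this (variable names are made up):

Ogre::Real damping  = INIT_REAL(0.1);     // no C4305 in float builds
Ogre::Real stepSize = INIT_REAL(0.016);   // no reverse warning in double builds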

Any preferences or something I haven't thought of?


The simple solution would be to just add a cast:

static_cast<Ogre::Real>(0.1);

or you could write a function to do the conversion for you (similar to your macro, but avoiding all the yucky problems macros bring):

template <typename T>
inline Ogre::Real real(T val) { return static_cast<Ogre::Real>(val); } 

Then you can just call real(0.1) and get the value as an Ogre::Real.
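If you are on C++11 or later, a closely related option (not from the original answer, just a sketch of the same idea) is a user-defined literal that performs the cast:

constexpr Ogre::Real operator"" _r(long double val)
{
    return static_cast<Ogre::Real>(val);   // narrows or widens to whatever Real is
}

Ogre::Real damping = 0.1_r;                // reads almost like a plain literal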


Clearly, if the type you use is platform-specific, so should the literals be. The same issues appear when using TCHAR, CString, LPCTSTR, etc., only worse: a char string cannot be converted to a wchar_t string.

So my preference would definitely go to the initialization macro. Silencing warnings away is dangerous.

You can even define your macro to append the float literal suffix:

//#define DOUBLEPREC

#ifndef DOUBLEPREC
typedef float REAL;
#define INIT_REAL(r) (r##f)
#else
typedef double REAL;
#define INIT_REAL(r) r
#endif

REAL r = INIT_REAL(.1);
double d = INIT_REAL(.1);
float f = INIT_REAL(.1);


Always initialise using float literals. Every float value has a double value that represents exactly the same number (though that may not be the best double-precision approximation of the decimal value the float was intended to represent).

Visual Studio seems to perform the float-to-double widening at compile time, even in debug builds, though the results aren't exactly inspiring: passing 1.1f into a function that takes a double passes 1.1000000238418579, but passing 1.1 passes 1.1000000000000001. (Both values according to the watch window.)

(No such problems with numbers that have an exact representation, of course, e.g. 1.25f.)
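A small standalone program makes the difference visible (an illustrative sketch; the printed values are the ones quoted above):

#include <cstdio>

void take(double d) { std::printf("%.17g\n", d); }

int main()
{
    take(1.1f);    // 1.1000000238418579  (float rounded first, then widened)
    take(1.1);     // 1.1000000000000001  (best double approximation of 1.1)
    take(1.25f);   // 1.25                (exactly representable, no difference)
}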

Whether this is a big deal or not, I couldn't say, but if your game runs OK with floats then it is already tolerating a similar level of inaccuracy.


Personally, I would stick to one of the types and forget about this warning. In any case, double vs. single precision has minimal impact on frame rate (in my projects it was < 3%).

Also worth mentioning: as far as I know, DirectX and OpenGL work in single precision.

If you really want to get rid of this warning the proper way, you could use the #if approach, but not quite the way you did. A better approach is something like:

#if OGRE_DOUBLE_PRECISION == 1
    typedef double Real;
#else
    typedef float Real;
#endif

You could also write your code using Ogre::Real as the type (or typedef it to whatever is comfortable for you).
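For example, you could pull Ogre's choice into a short alias rather than repeating the #if yourself (the alias name is just a suggestion):

typedef Ogre::Real Real;                  // or: using Real = Ogre::Real; (C++11)

Real damping = static_cast<Real>(0.1);    // consistent under either precision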
