
Is there a standardised way to get type sizes in bytes in C++ Compilers?

I was wondering if there is some standardized way of getting type sizes in memory at the pre-processor stage, i.e. in macro form; sizeof() does not cut it.

If there isn't a standardized method, are there conventional methods that most IDEs use anyway?

Are there any other methods that anyone can think of to get such data?

I suppose I could do a two-stage build kind of thing: get the output of a test program and feed it back into the IDE. But that's not really any easier than #defining them in myself.

Thoughts?

EDIT:

I just want to be able to swap code around with

#ifdef / #endif

Was it naive of me to think that an IDE or underlying compiler might define that information under some macro? Sure, the pre-processor gets no information about any actual machine-code generation, but the IDE and the compiler do, and they invoke the pre-processor and could declare such things to it in advance.


EDIT FURTHER

What I imagined as a conceivable concept was this:

The C++ committee has a standard that says that, for every type (perhaps only those native to C++), the compiler has to provide a header file, included by default, that declares the size (in bits) that every native type uses, like so:

#define CHAR_SIZE 8
#define INT_SIZE 32
#define SHORT_INT_SIZE 16
#define FLOAT_SIZE 32
// etc

Is there a flaw in this process somewhere?

EDIT EVEN FURTHER

In order to get around the multi-platform build-stage problem, perhaps this standard could mandate that a simple program like the one shown by lacqui be compiled and run by default. That way, whatever machine gets the type sizes will be the same machine that compiles the code in the second, 'normal' build stage.

Apologies:

I've been using 'Variable' instead of 'Type'


Depending on your build environment, you may be able to write a utility program that generates a header that is included by other files:

#include <stdio.h>

int main(void) {
    FILE *out = make_header_file();  // defined by you
    fprintf(out, "#ifndef VARTYPES_H\n#define VARTYPES_H\n");

    size_t intsize = sizeof(int);
    if (intsize == 4)
        fprintf(out, "#define INTSIZE_32\n");
    else if (intsize == 8)
        fprintf(out, "#define INTSIZE_64\n");
    // .....
    else
        fprintf(out, "#define INTSIZE_UNKNOWN\n");

    fprintf(out, "#endif\n");
    fclose(out);
    return 0;
}

Of course, edit it as appropriate. Then include "vartypes.h" everywhere you need these definitions.

EDIT: Alternatively:

fprintf(out, "#define INTSIZE_%d\n", (int)(sizeof(int) * 8));
fprintf(out, "#define INTSIZE %d\n", (int)(sizeof(int) * 8));

Note the lack of an underscore in the second one: the first creates INTSIZE_32, which can be used in #ifdef. The second creates INTSIZE, which can be used in expressions, for example char bits[INTSIZE];

WARNING: This will only work with an 8-bit char. Most modern home and server computers follow this pattern; however, some machines may use a different size of char.


Sorry, this information isn't available at the preprocessor stage. To compute the size of a variable you have to do just about all the work of parsing and abstract evaluation - not quite code generation, but you have to be able to evaluate constant-expressions and substitute template parameters, for instance. And you have to know considerably more about the code generation target than the preprocessor usually does.

The two-stage build thing is what most people do in practice, I think. Some IDEs have an entire compiler built into them as a library, which lets them do things more efficiently.


Why do you need this anyway?

The &lt;cstdint&gt; header provides typedefs and #defines that describe all of the standard integer types, including typedefs for exact-width integer types and #defines for their full value ranges.


No, it's not possible. Just for example, it's entirely possible to run the preprocessor on one machine, and do the compilation entirely separately on a completely different machine with (potentially) different sizes for (at least some) types.

For a concrete example, consider that the normal distribution of SQLite is what they call an "amalgamation" -- a single already-preprocessed source code file that you actually compile on your computer.


You want to generate different code based on the size of some type? Maybe you can do this with template specializations:

#include <iostream>

template <int Tsize>
struct dosomething{
  void doit() { std::cout << "generic version" << std::endl; }
};

template <>
void dosomething<sizeof(int)>::doit()
{ std::cout << "int version" << std::endl; }

template <>
void dosomething<sizeof(char)>::doit()
{ std::cout << "char version" << std::endl; }


int main(int argc, char** argv)
{
  typedef int foo;
  dosomething<sizeof(foo)> myfoo;
  myfoo.doit();

}


How would that work? The size isn't known at the preprocessing stage. At that point, you only have the source code. The only way to find the size of a type is to compile its definition.

You might as well ask for a way to get the result of running a program at the compilation stage. The answer is "you can't, you have to run the program to get its output". Just like you need to compile the program in order to get the output from the compiler.

What are you trying to do?

Regarding your edit, it still seems confused.

Such a header could conceivably exist for built-in types, but never for variables. A macro could perhaps be written to replace known type names with a hardcoded number, but it wouldn't know what to do if you gave it a variable name.

Once again, what are you trying to do? What is the problem you're trying to solve? There may be a sane solution to it if you give us a bit more context.


For common build environments, many frameworks have this set up manually. For instance,

http://www.aoc.nrao.edu/php/tjuerges/ALMA/ACE-5.5.2/html/ace/Basic__Types_8h-source.html

defines things like ACE_SIZEOF_CHAR. Another library, called POSH, which is described in a book I bought, does this too in a very includable way: http://www.hookatooka.com/wpc/


The term "standardized" is the problem. There's no standard way of doing it, but it's not very difficult to set some pre-processor symbols using a configuration utility of some sort. A really simple one would be to compile and run a small program that checks sizes with sizeof and then outputs an include file with those symbols set.

