
Advantage and disadvantages of #define vs. constants? [duplicate]

This question already has answers here: "static const" vs "#define" vs "enum" (17 answers) Closed 5 years ago.

Can someone point out the advantages and disadvantages of using #define versus constants? Most of my work is done in C and Objective-C.


As 0A0D mentioned, there are #defines, enums, and const variables. It's worth noting that const-qualified variables are not considered to be compile-time constants in C and therefore can't be used in some circumstances (e.g. when declaring the size of an array).

enum constants are compile-time constants, however. For integral values, IMO it's usually better to prefer enums over const variables over #define.


Actually there are three ways of defining such constants,

  • defines
  • enums
  • const variables

In C, everything is an int unless otherwise specified. I prefer enums when I have a number of related integer constants. Enums are clearly preferable when you don't care what the values are. But even when you do need to specify the values for all the constants, I like the mental grouping of an enum. Code documents itself better when you have the type, e.g.

Error MyFunc();

clearly returns one of a particular set of error codes, whereas

int MyFunc()

might return one of the #define'd list for Unix errno, or maybe something else, or maybe those plus some idiosyncratic values -- who knows? If you have more than one set of return codes, which set does this function use?

The more specific enum type name helps the tags facility in your editor, greps, debugging, and so on.

A strict lint may give you some warnings about using enums as integers, for example if you add or OR them, or pass an enum where an int is expected.

A const object is different from either an enum or a #define, particularly in C. In ANSI C, a const int takes up space just like a regular int, and most compilers will generate pointer references to that address rather than inlining the value. As a result, I rarely use const ints in C. (C++ has slightly different semantics, so the choices are different there.)

Every compiler I've ever used has the option to store enums in the smallest space possible. Usually it's even the default option. To force wider enums when using such an option, I usually throw in an extra unsigned value:

typedef enum
{
    MyEnumA,
    MyEnumB,

    MyEnumForce16 = 0x7fff
} MyEnum;

The use of an enumeration constant (enum) has many advantages over using the traditional symbolic constant style of #define. These advantages include a lower maintenance requirement, improved program readability, and better debugging capability.

1) The first advantage is that enumerated constants are generated automatically by the compiler. Conversely, symbolic constants must be manually assigned values by the programmer.

For instance, if you had an enumerated constant type for error codes that could occur in your program, your enum definition could look something like this:

enum Error_Code
{
    OUT_OF_MEMORY,
    INSUFFICIENT_DISK_SPACE,
    LOGIC_ERROR,
    FILE_NOT_FOUND
};

In the preceding example, OUT_OF_MEMORY is automatically assigned the value of 0 (zero) by the compiler because it appears first in the definition. The compiler then continues to assign numbers automatically, making INSUFFICIENT_DISK_SPACE equal to 1, LOGIC_ERROR equal to 2, and FILE_NOT_FOUND equal to 3. If you were to approach the same example using symbolic constants, your code would look something like this:

#define OUT_OF_MEMORY 0
#define INSUFFICIENT_DISK_SPACE 1
#define LOGIC_ERROR 2
#define FILE_NOT_FOUND 3

Each of the two methods arrives at the same result: four constants assigned numeric values to represent error codes. Consider the maintenance required, however, if you were to add two constants to represent the error codes DRIVE_NOT_READY and CORRUPT_FILE. Using the enumeration constant method, you simply would put these two constants anywhere in the enum definition. The compiler would generate two unique values for these constants. Using the symbolic constant method, you would have to manually assign two new numbers to these constants. Additionally, you would want to ensure that the numbers you assign to these constants are unique.

2) Another advantage of using the enumeration constant method is that your programs are more readable and thus can be understood better by others who might have to update your program later.

3) A third advantage to using enumeration constants is that some symbolic debuggers can print the value of an enumeration constant. Conversely, most symbolic debuggers cannot print the value of a symbolic constant. This can be an enormous help in debugging your program, because if your program is stopped at a line that uses an enum, you can simply inspect that constant and instantly know its value. On the other hand, because most debuggers cannot print #define values, you would most likely have to search for that value by manually looking it up in a header file.

The #define statement is a pre-compiler directive. Technically, any line that begins with a # is something for the pre-compiler to act on. The pre-compiler will replace all instances of the defined token with its definition. So doing this:

#define DELAY 40
for (i=0;i<DELAY;i++) {
    for (j=0;j<DELAY;j++) {
        asm NOP;
    }
}

is exactly the same as this (as far as the compiler is concerned):

for (i=0;i<40;i++) {
    for (j=0;j<40;j++) {
        asm NOP;
    }
}

When the compiler generates machine code, it will see the number 40 and use the immediate addressing mode in order to compare with the accumulator. The number 40 will be stored in the code as many times as you are referencing it. In this case it is twice. Here is the assembly generated by CodeWarrior Ver5:

7:    char i,j;
    8:    for (i=0;i<DELAY;i++) {
  0002 95       [2]             TSX   
  0003 7f       [2]             CLR   ,X
  0004          [5]     L4:     
    9:      for (j=0;j<DELAY;j++) {
  0004 6f01     [3]             CLR   1,X
  0006          [5]     L6:     
   10:        asm NOP;
  0006 9d       [1]             NOP   
  0007 6c01     [4]             INC   1,X
  0009 e601     [3]             LDA   1,X
  000b a128     [2]             CMP   #40  ;<---- notice opcode a1 and immediate constant 40, which is $28 in hexadecimal
  000d 25f7     [3]             BCS   L6
  000f 7c       [3]             INC   ,X
  0010 f6       [2]             LDA   ,X
  0011 a128     [2]             CMP   #40  ;<---- and here it is again.
  0013 25ef     [3]             BCS   L4
   11:      }
   12:    }
   13:  }


Constants allow you to specify a datatype, which is (usually) an advantage. Macros are much more flexible, and therefore can get you into much more trouble if you're not careful.

Best practice is to use constants as much as possible, and use #define only when you really need a macro, not just a named literal value.


Constants have the advantage of being typed, so using them incorrectly can be discovered at compile time. It may not matter to you but constants take up space in memory while #defines do not (since they are replaced before actual compilation happens).


Constants follow type safety measures, #defines are substituted outright. Also as GMan said, #define's don't respect scope.


Explanation for #define: a #define is either an immediate value or a macro.

Explanation for constant: a constant is a value of any type which can never change.

You can declare a pointer to a const, but not to a #define, although a #define could itself be a pointer, e.g.: #define ADDRESS ((int *)0x0012)

The reasons to prefer constants are as follows:

  • they obey the language's scoping rules
  • you can see them in the debugger
  • you can take their address if you need to
  • you can pass them by const-reference if you need to
  • they don't create new "keywords" in your program.

In short, const identifiers act like they're part of the language because they are part of the language.

Within a module, a C compiler could optimize a const as if it were a #define, as long as there are no pointers declared to the constant. In CPU terms, the const would become an "immediate" value. Alternatively, a const variable could be placed in the code area rather than the data area, since it doesn't change. On some machines, declaring a pointer to a constant could cause an exception if you tried to modify the constant via the pointer.

There are cases where #define is needed, but you should generally avoid it when you have the choice. You should evaluate whether to use const or #define based on business value: time, money, risk.


A const is an object, so you can take its address, for example. It is also type-safe: the compiler knows what the constant's type is. Neither of these applies to a #define.


  1. const produces an lvalue, meaning its address can be taken. #define doesn't.
  2. #define can cause unintentional macro expansions, which can be a PITA to debug.
  3. As mentioned by others, #define doesn't have a type associated with it.

In general, I'd avoid the preprocessor like the plague for anything I didn't have to use it for, mostly because of the possibility of unintentional expansion and because the ALL_CAPS convention for mitigating this is unbelievably ugly.


1) #defines can be thought of as tunable parameters that are independent of any datatype, whereas constants let us specify the datatype.

2) The preprocessor replaces every reference to a #define in the program with its definition. In addition, a macro function can perform a specific task, called by passing parameters alone. Neither is possible with constants.

So each is used where it is relevant.


The benefit of using #define is that once you define a value, e.g. #define NUMBER 30, all the code in main that uses NUMBER gets the value 30. If you later change 30 to 40, every place in main that uses NUMBER changes with it.

