#define vs. enums for addressing peripherals
I have to program peripheral registers in an ARM9-based microcontroller.
For instance, for the USART, I store the relevant memory addresses in an enum:
enum USART
{
    US_BASE = (int) 0xFFFC4000,
    US_BRGR = US_BASE + 0x16,
    //...
};
Then, I use pointers in a function to initialize the registers:
void init_usart (void)
{
    vuint* pBRGR = (vuint*) US_BRGR;
    *pBRGR = 0x030C;
    //...
}
But my teacher says I'd better use #defines, such as:
#define US_BASE (0xFFFC4000)
#define US_BRGR (US_BASE + 0x16)
#define pBRGR ((vuint*) US_BRGR)
void init_usart (void)
{
    *pBRGR = 0x030C;
}
That way, he says, you don't have the overhead of allocating pointers on the stack.
Personally, I don't much care for #defines or other preprocessor directives.
So the question is: in this particular case, are #defines really worth using instead of enums and stack-allocated pointers?
Related question: Want to configure a particular peripheral register in ARM9 based chip
The approach I've always preferred is to first define a struct reflecting the peripheral's register layout:
typedef volatile unsigned int reg32; // or other appropriate 32-bit integer type

typedef struct USART
{
    reg32 pad1;
    reg32 pad2;
    reg32 pad3;
    reg32 pad4;
    reg32 brgr;
    // any other registers
} USART;

USART * const p_usart0 = (USART *) 0xFFFC4000;
Then in code I can just use:

    p_usart0->brgr = 0x030C;
This approach is much cleaner when you have multiple instances of the same sort of peripheral:
USART * const p_usart1 = (USART *) 0xFFFC5000;
USART * const p_usart2 = (USART *) 0xFFFC6000;
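As a quick illustration of that payoff, here is a minimal sketch (the helper name is invented, and the 0x030C divisor is simply carried over from the question rather than from a datasheet); because the routine takes the struct pointer as a parameter, the same code configures any instance:

    static void usart_set_brgr(USART *usart, unsigned int divisor)
    {
        usart->brgr = divisor;   /* volatile write through the struct member */
    }

    void init_usarts(void)
    {
        usart_set_brgr(p_usart0, 0x030C);
        usart_set_brgr(p_usart1, 0x030C);
        usart_set_brgr(p_usart2, 0x030C);
    }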
User sbass provided a link to an excellent column by Dan Saks that gives much more detail on this technique, and points out its advantages over other approaches.
If you're lucky enough to be using C++, then you can add methods for all the common operations on the peripheral and nicely encapsulate the device's peculiarities.
I am afraid that enums are a dead end for such a task. The standard defines enum constants to be of type int, so in general they are not compatible with pointers. One day, on an architecture with 32-bit int and 64-bit pointers, you might have a constant that doesn't fit into an int, and it is not well defined what will happen.
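To make that concrete, here is a minimal sketch (the macro names are invented for illustration, and the 0x16 offset is simply copied from the question): an enumeration constant must be representable as an int, while a macro can carry the address at full pointer width, for example via uintptr_t.

    #include <stdint.h>

    /* An enumeration constant must have a value representable as an int,
       so on a platform with 32-bit int the value 0xFFFC4000 already needs
       the implementation-defined (int) cast, and a 64-bit address would
       not fit at all. */
    enum { US_BASE_ENUM = (int) 0xFFFC4000 };

    /* A macro avoids the int restriction entirely. */
    #define US_BASE_ADDR ((uintptr_t) 0xFFFC4000u)
    #define US_BRGR_REG  (*(volatile uint32_t *) (US_BASE_ADDR + 0x16))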
On the other hand, the argument that an enum would allocate something on the stack is not valid. Enumeration constants are compile-time constants and have no more to do with the function stack than constants specified through macros.
Dan Saks has written a number of columns on this for Embedded Systems Programming. Here's one of his latest ones. He discusses C, C++, enums, defines, structs, classes, etc., and why you might choose one over another. Definitely worth reading, and always good advice.
In my experience, one big reason to use #define for this kind of thing is that it's more of a standard idiom in the embedded community.
Using enums instead of #defines will generate questions/comments from instructors (and, in the future, colleagues), even when other techniques might have advantages (like not stomping on the global identifier namespace); a sketch of that namespace point follows below.
I personally like using enums for numeric constants, but sometimes you need to do what is customary for where you're working.
However, performance shouldn't be an issue.
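Picking up on the namespace point above, here is a minimal sketch (all names invented for illustration) of the kind of collision a bare #define can cause and an enum constant cannot, since C keeps struct members in their own name space:

    /* The macro rewrites its token everywhere, so a member declaration like
           #define US_MODE 0x04
           struct uart_config { int US_MODE; };
       would expand to "int 0x04;" and fail to compile. An enum constant
       does not have that problem: */
    enum { US_MODE = 0x04 };

    struct uart_config
    {
        int US_MODE;   /* legal: struct members live in their own name space */
    };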
The answer is always: do whatever the teacher wants and pass the class; then, on your own, question everything, find out whether their reasons were valid, and form your own opinions. You can't win against the school; it's not worth it.
In this case it is easy to compile to assembler, or to disassemble, and see the difference, if any, between the enum and the define.
I would recommend the define over the enum; I have had compiler discomfort with enums. I highly discourage using pointers the way you are using them: I have seen every compiler fail to accurately generate the desired instructions; it is rare, but when it happens you will wonder how your last decades of coding ever worked. Pointing structs at registers, or at anything else, is considerably worse. I often get flamed for this, and expect to this time around, but I have been too many miles around the block and fixed too much broken code with these problems to ignore the root cause.
I wouldn't necessarily say that either way is better; it is just personal preference. As for your professor's argument, it is really a moot point. Allocating variables on the stack is one instruction, no matter how many there are, usually of the form sub esp, 10h (on x86). So whether you have one local or 20, it is still one instruction to allocate the space for all of them.
I would say that the one advantage of the #define is that if, for some reason down the road, you wanted to change how that pointer is accessed, you would only need to change it in one location.
I would tend towards using an enum, for potential future compatibility with C++ code. I say this because at my job we have a lot of C header files shared between projects, some of which use C code and some of which use C++. For those using C++, we'd often like to wrap the definitions in a namespace to prevent symbol masking, but you can't scope a #define to a namespace.