Difference between uint and unsigned int?
Is there any difference between uint and unsigned int?
I searched this site, but all the questions refer to C# or C++. I'd like an answer about the C language.
If it is relevant, note that I'm using GCC under Linux.
uint isn't a standard type; unsigned int is.
Some systems may define uint as a typedef.
typedef unsigned int uint;
On these systems the two are the same. But uint is not a standard type, so not every system will support it, and thus it is not portable.
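As a minimal sketch of what that typedef means in practice (assuming a system header, or your own code, provides it), the two names become fully interchangeable:

#include <stdio.h>

typedef unsigned int uint;   /* the non-standard shorthand, defined locally here */

int main(void)
{
    uint a = 42u;            /* exactly the same type as unsigned int */
    unsigned int b = a;      /* no conversion involved: the types are identical */
    printf("%u %u\n", a, b); /* both use the %u format specifier */
    return 0;
}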
I am extending the answers by Erik, Teoman Soygul, and taskinoor a bit.
uint is not a standard type.
Hence using your own shorthand like this is discouraged:
typedef unsigned int uint;
If you need platform specificity instead (e.g. you need to specify the number of bits your int occupies), including stdint.h:
#include <stdint.h>
will expose the following standard categories of integers:
Integer types having certain exact widths
Integer types having at least certain specified widths
Fastest integer types having at least certain specified widths
Integer types wide enough to hold pointers to objects
Integer types having greatest width
For instance,
Exact-width integer types
The typedef name intN_t designates a signed integer type with width N, no padding bits, and a two's-complement representation. Thus, int8_t denotes a signed integer type with a width of exactly 8 bits.
The typedef name uintN_t designates an unsigned integer type with width N. Thus, uint24_t denotes an unsigned integer type with a width of exactly 24 bits.
Among others, stdint.h defines:
int8_t
int16_t
int32_t
uint8_t
uint16_t
uint32_t
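As a minimal sketch of how these fixed-width types are used (the PRIu32 and PRId16 format macros come from inttypes.h, which in turn includes stdint.h):

#include <inttypes.h>   /* pulls in <stdint.h> plus the PRI* format macros */
#include <stdio.h>

int main(void)
{
    uint32_t counter = 4000000000u;  /* exactly 32 bits on every platform */
    int16_t  delta   = -123;         /* exactly 16 bits, two's complement */

    printf("counter = %" PRIu32 ", delta = %" PRId16 "\n", counter, delta);
    return 0;
}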
All of the answers here fail to mention the real reason for uint.
It's obviously a typedef of unsigned int, but that doesn't explain its usefulness. The real question is:
Why would someone want to typedef a fundamental type to an abbreviated version? To save on typing?
No, they did it out of necessity.
Consider the C language, a language that does not have templates. How would you go about stamping out your own vector that can hold any type? You could do something with void pointers, but a closer emulation of templates would have you resorting to macros.
So you would define your template vector:
#define define_vector(type) \
    typedef struct vector_##type { \
        impl \
    };
Declare your types:
define_vector(int)
define_vector(float)
define_vector(unsigned int)
And upon expansion, realize that the type names ought to be a single token:
typedef struct vector_int { impl };
typedef struct vector_float { impl };
typedef struct vector_unsigned int { impl };
The last expansion is broken: vector_unsigned int is not a single identifier, so it will not compile. A one-token alias like uint is exactly what makes the macro usable with unsigned types.
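Here is a sketch of how the single-token alias resolves the problem, with concrete members substituted for the impl placeholder used above:

#include <stddef.h>

/* A concrete version of the macro above, with real members in place of "impl" */
#define define_vector(type) \
    typedef struct vector_##type { \
        type   *data; \
        size_t  size;  \
    } vector_##type;

typedef unsigned int uint;  /* a single token, so ## can paste it into a name */

define_vector(int)   /* expands to: typedef struct vector_int  {...} vector_int;  */
define_vector(uint)  /* expands to: typedef struct vector_uint {...} vector_uint; */
/* define_vector(unsigned int) would expand to the invalid "vector_unsigned int" */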
The unsigned int is a built-in (standard) type, so if you want your project to be cross-platform, always use unsigned int, as it is guaranteed to be supported by all compilers (hence being the standard).
The uint is a possible and proper abbreviation for unsigned int. It is more readable. But: it is not standard C. You can define and use it (like any other typedef or define) on your own responsibility.
But unfortunately, some system headers define uint too. I have found this in a sys/types.h from a current compiler (ARM):
# ifndef _POSIX_SOURCE
//....
typedef unsigned short ushort; /* System V compatibility */
typedef unsigned int uint; /* System V compatibility */
typedef unsigned long ulong; /* System V compatibility */
# endif /*!_POSIX_SOURCE */
It seems to be a concession to legacy sources programmed to the Unix System V standard. To switch off this undesired behaviour (because I want to
#define uint unsigned int
myself), I have first set
#define _POSIX_SOURCE
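A sketch of that order of operations, assuming a sys/types.h guarded as shown above (which feature-test macro a given C library honours varies, so treat _POSIX_SOURCE as specific to this toolchain; a typedef is used here in place of the #define, which works equally):

#define _POSIX_SOURCE          /* must come before the first system include */
#include <sys/types.h>         /* its System V compatibility block is now skipped */

typedef unsigned int uint;     /* our own alias, no longer colliding */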
A system header should not define things that are not standard. But unfortunately, many such things are defined there.
See also my web page https://www.vishia.org/emc/html/Base/int_pack_endian.html#truean-uint-problem-admissibleness-of-system-definitions and https://www.vishia.org/emc.