If characters can hold what integers can, then why is there a need to use integers? Why do we use integers in C at all?
#include <stdio.h>
int main()
{
    char c = 10;
    printf("%d", c);
    return 0;
}
Is the same as:
#include <stdio.h>
int main()
{
    int c = 10;
    printf("%d", c);
    return 0;
}
Technically, all data types are represented with 0s and 1s. So, if they are all the same in the back end, why do we need different types?
Well, a type is a combination of data, and the operations you can perform on the data.
We have ints for representing numbers. They have operations like + for computing the sum of two numbers, or - to compute the difference.
When you think of a character, in the usual sense, it represents one letter or symbol in a human-readable format. Being able to sum 'A' + 'h' doesn't make sense. (Even though C lets you do it.)
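For instance, here is a minimal sketch (assuming ASCII character codes) showing that C treats char values as small integers and promotes them to int when you add them:
#include <stdio.h>
int main()
{
    char a = 'A';        /* 65 in ASCII */
    char h = 'h';        /* 104 in ASCII */
    int sum = a + h;     /* both chars are promoted to int before the addition */
    printf("%d\n", sum); /* prints 169 on an ASCII system */
    return 0;
}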
So, we have different types in different languages to make programming easier. They essentially encapsulate data and the functions/operations that are legal to perform on them.
Wikipedia has a good article on Type Systems.
Because char typically holds numbers only from -128 to 127 (or 0 to 255 if it is unsigned).
A char can hold only 8 bits, while an int can have 16, 32, or even 64 bits (long long int).
Try this:
#include <stdio.h>
int main()
{
    int c = 300;
    printf("%d", c);
    return 0;
}
#include <stdio.h>
int main()
{
    char c = 300;
    printf("%d", c);
    return 0;
}
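Note that in the second program the value 300 does not fit in an 8-bit char, so the result of the assignment is implementation-defined; on a typical machine where char is a signed 8-bit type, only the low 8 bits survive and the program prints 44 rather than 300.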
The data types char, short, int, long and long long hold (possibly) different size integers that can take values up to a certain limit. char holds an 8-bit number (which is technically neither signed nor unsigned, but will actually be one or the other). Therefore the range is only 256 values (-128 to 127, or 0 to 255).
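As a quick illustration (the exact numbers depend on your platform), the macros in <limits.h> report the actual ranges your implementation uses:
#include <stdio.h>
#include <limits.h>
int main()
{
    /* These limits are whatever your implementation provides. */
    printf("char:  %d to %d\n", CHAR_MIN, CHAR_MAX);
    printf("short: %d to %d\n", SHRT_MIN, SHRT_MAX);
    printf("int:   %d to %d\n", INT_MIN, INT_MAX);
    printf("long:  %ld to %ld\n", LONG_MIN, LONG_MAX);
    return 0;
}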
Good practice is to avoid char, short, int, long and long long and use int8_t, int16_t, int32_t, uint8_t etc., or even better: int_fast8_t, int_least8_t etc.
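A minimal sketch of that advice, assuming a C99 (or later) compiler, using the fixed-width types from <stdint.h> and their printf macros from <inttypes.h>:
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
int main()
{
    int8_t  small = 100;    /* exactly 8 bits, signed */
    uint8_t byte  = 200;    /* exactly 8 bits, unsigned */
    int32_t wide  = 300000; /* exactly 32 bits, signed */
    int_fast8_t quick = 10; /* at least 8 bits, chosen for speed */
    printf("%" PRId8 " %" PRIu8 " %" PRId32 " %" PRIdFAST8 "\n",
           small, byte, wide, quick);
    return 0;
}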
Broadly speaking, a char is meant to be the smallest unit of sensible data storage on a machine, but an int is meant to be the "best" size for normal computation (e.g. the size of a register). The size of any data type can be expressed as a number of chars, but not necessarily as a number of ints. For example, on Microchip's PIC16, a char is eight bits, an int is 16 bits, and a short long is 24 bits. (short long would have to be the dumbest type qualifier I have ever encountered.)
Note that a char is not necessarily 8 bits, but usually is. Corollary: any time someone claims that it's 8 bits, someone will chime in and name a machine where it isn't.
From a machine architecture point of view, a char is a value that can be represented in 8 bits (whatever happened to non-8-bit architectures?).
The number of bits in an int is not fixed; I believe it is defined as the "natural" value for a specific machine, i.e. the number of bits it is easiest/fastest for that machine to manipulate.
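You can check both claims on your own machine with CHAR_BIT from <limits.h>; a minimal sketch:
#include <stdio.h>
#include <limits.h>
int main()
{
    /* CHAR_BIT is 8 almost everywhere, but the standard only guarantees at least 8. */
    printf("bits in a char: %d\n", CHAR_BIT);
    printf("bits in an int: %zu\n", sizeof(int) * CHAR_BIT);
    return 0;
}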
As has been mentioned, all values in computers are stored as sequences of binary bits. How those bits are interpreted varies. They can be interpreted as binary numbers or as a code representing something else, such as a set of alphabetic characters, or as many other possibilities.
When C was first designed, the assumption was that 256 codes were sufficient to represent all the characters in an alphabet. (Actually, this was probably not the assumption, but it was good enough at the time, and the designers were trying to keep the language simple and match the then-current machine architectures.) Hence an 8-bit value (256 possibilities) was considered sufficient to hold an alphabetic character code, and the char data type was defined as a convenience.
Disclaimer: all that is written above is my opinion or guess. The designers of C are the only ones who can truly answer this question.
A simpler, but misleading, answer is that you can't store the integer value 257 in a char, but you can in an int.
Because of this:
#include <stdio.h>
int main(int argc, char **argv)
{
    char c = 42424242; /* Oops: this value does not fit in a char. */
    printf("%d", c);
    return 0;
}
char can't hold what integers can. Not everything, at least not in the way you assign a value to a char.
Do some experimenting with sizeof to see if there is a difference between char and int.
If you really wish to use char instead of int, you probably should consider char[] instead, and store the ASCII, base-10 character representation of the number. :-)
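For example, here is a hedged sketch that stores 257 as its base-10 text form in a char array (the buffer size of 12 is just an illustrative choice, big enough for any 32-bit int):
#include <stdio.h>
int main()
{
    int value = 257;  /* does not fit in an 8-bit char */
    char text[12];    /* room for any 32-bit int plus the '\0' terminator */
    snprintf(text, sizeof text, "%d", value);
    printf("stored as text: %s\n", text);
    return 0;
}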