Does it make sense to use int instead of char or nvarchar for a discriminator column if I'm also using it as an FK?
I have something like this:
create table account
(
id int identity(1,1) primary key,
usertype char(1) check(usertype in ('a', 'b')) not null,
unique(id, usertype)
)
create table auser
(
id int primary key,
usertype char(1) check(usertype = 'a') not null,
foreign key (id, usertype) references account(id, usertype)
)
create table buser
(
... same just with b
)
The question is: if I use int instead of char(1), is it going to work faster/better?
It doesn't matter on most modern databases: int is fine, and so is char. (When the database fetches data from a table, it reads whole pages, not individual bytes, so a few bytes per row rarely make a measurable difference.)
Why do "auser.id" and "buser.id" foreign-key back to the "account.id" identity column? It seems hard to keep everything in sync. When you insert into account you get an ID (say 1) and type 'a', and when you insert into "auser" you use that id (1) as the FK back to "account"; how would you then insert into "buser" (and get id 1) with an FK back to account as (1, 'b')?
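For illustration, here is a minimal sketch of how the inserts could be kept in sync under this design (assuming SQL Server, since the schema uses identity(1,1); the variable name is made up):

declare @newid int

-- insert the supertype row first and capture the generated id
insert into account (usertype) values ('a')
set @newid = scope_identity()

-- the subtype row reuses that id, so auser.id matches account.id
insert into auser (id, usertype) values (@newid, 'a')

-- a b-type user gets its own account row (and therefore a different id)
insert into account (usertype) values ('b')
insert into buser (id, usertype) values (scope_identity(), 'b')

The point is that auser.id and buser.id are not identity columns themselves; only account.id is, and the subtype tables simply copy the value it generates.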
Onto the real question. Size usertype to how many values you will have. If you will only have a few, char(1) is best, since your index will take up less memory. If you will have more values than a char(1) can hold, go tinyint (0-255, 1 byte); if you need more, smallint (up to 32,767, 2 bytes); if you need more still, int (up to 2,147,483,647, 4 bytes).
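If the set of types might outgrow char(1), a tinyint discriminator is the next smallest option. A sketch of how the account table could look in that case (column names as in the question; the numeric codes 1 and 2 are assumptions standing in for 'a' and 'b'):

create table account
(
id int identity(1,1) primary key,
usertype tinyint check(usertype in (1, 2)) not null,
unique(id, usertype)
)

The subtype tables would then use the same tinyint type and the matching code in their check constraints so the composite foreign key still lines up.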
A char(1) is 1 byte, whereas an int is 4 bytes. Even a smallint is 2 bytes.