
Is there a reason to use Base2 lengths for columns in SQL Server? [duplicate]

This question already has answers here (closed 12 years ago).

Possible Duplicates:

varchar Fields - Is a Power of Two More Efficient?

Nvarchar or varchar what is better use multiply of 2 or rounded full numbers??

Out of sheer habit I define the sizes of columns that I use in SQL Server to be Base2 sizes. For example, here's a table that I'm working on:

  ID int
  FirstName nvarchar(64)
  LastName nvarchar(64)
  Col4 varchar(16)
  Col5 nvarchar(32)
  Col6 nvarchar(128)
  etc...

I have no idea where this habit came from, and I'm not sure it even makes sense. Would the following table definition be less efficient in some way than the one above?

  ID int
  FirstName nvarchar(50)
  LastName nvarchar(50)
  Col4 varchar(10)
  Col5 nvarchar(30)
  Col6 nvarchar(100)
  etc...

I guess my main question is: are there any legitimate reasons for using Base2 column lengths?
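For reference, here is a minimal sketch of the second version written out as a full CREATE TABLE statement; the table name dbo.Person and the nullability choices are placeholders, not part of the original question:

  CREATE TABLE dbo.Person
  (
      ID        int           NOT NULL PRIMARY KEY,
      FirstName nvarchar(50)  NOT NULL,
      LastName  nvarchar(50)  NOT NULL,
      Col4      varchar(10)   NULL,
      Col5      nvarchar(30)  NULL,
      Col6      nvarchar(100) NULL
  );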


Making columns larger than they need to be can be actively harmful to your database design. From BOL:

A table can contain a maximum of 8,060 bytes per row. In SQL Server 2008, this restriction is relaxed for tables that contain varchar, nvarchar, varbinary, sql_variant, or CLR user-defined type columns....Surpassing the 8,060-byte row-size limit might affect performance because SQL Server still maintains a limit of 8 KB per page. When a combination of varchar, nvarchar, varbinary, sql_variant, or CLR user-defined type columns exceeds this limit, the SQL Server Database Engine moves the record column with the largest width to another page in the ROW_OVERFLOW_DATA allocation unit, while maintaining a 24-byte pointer on the original page. Moving large records to another page occurs dynamically as records are lengthened based on update operations. Update operations that shorten records may cause records to be moved back to the original page in the IN_ROW_DATA allocation unit. Also, querying and performing other select operations, such as sorts or joins on large records that contain row-overflow data slows processing time, because these records are processed synchronously instead of asynchronously.
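As a rough illustration of the row-overflow behavior described in that quote, you can watch it happen with a throwaway wide table and sys.dm_db_index_physical_stats (the table name below is purely hypothetical):

  -- Wide table whose combined varchar data can exceed 8,060 bytes per row.
  CREATE TABLE dbo.WideRow
  (
      ID   int           NOT NULL PRIMARY KEY,
      ColA varchar(4000) NULL,
      ColB varchar(4000) NULL,
      ColC varchar(4000) NULL
  );

  -- Fill all three columns so the row no longer fits on a single 8 KB page.
  INSERT INTO dbo.WideRow (ID, ColA, ColB, ColC)
  VALUES (1, REPLICATE('a', 4000), REPLICATE('b', 4000), REPLICATE('c', 4000));

  -- DETAILED mode returns one row per allocation unit; a ROW_OVERFLOW_DATA
  -- entry with page_count > 0 means part of the record was pushed off-page.
  SELECT alloc_unit_type_desc, page_count, record_count
  FROM sys.dm_db_index_physical_stats(
           DB_ID(), OBJECT_ID('dbo.WideRow'), NULL, NULL, 'DETAILED');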

I've found that if you give them the extra size, sooner or later they will use it. Further, if you set something as varchar(64) and you only really need 10 characters max, you are making it more likely that someone will use the field for other than its intended purpose, and you will find that you get bad data in those fields (like a phone number field containing notes about the office secretary to contact, to pick a not-so-random example).

However, at least this design is far better than making everything nvarchar(max).


No reason to do this, especially with (n)varchar data, where the storage size is the actual length of the data plus 2 bytes of overhead, not the declared maximum.
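A quick way to see this is DATALENGTH, which returns the number of bytes a value actually uses regardless of the declared maximum; it doesn't include the 2-byte length overhead, which is the same either way. The variable names here are arbitrary:

  -- The same 5-character string costs the same regardless of declared size:
  -- 5 bytes as varchar, 10 bytes as nvarchar (2 bytes per character).
  DECLARE @v64 varchar(64)  = 'Smith';
  DECLARE @v50 varchar(50)  = 'Smith';
  DECLARE @n64 nvarchar(64) = N'Smith';

  SELECT DATALENGTH(@v64) AS v64_bytes,  -- 5
         DATALENGTH(@v50) AS v50_bytes,  -- 5
         DATALENGTH(@n64) AS n64_bytes;  -- 10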


No, it's just a programmer's habit to think and act in powers of 2 - there's no technical reason on SQL Server's side to do this - no increase in speed or performance or anything like that.


Doubtful. First of all, the exact lengths of columns matter mostly for your own data schema reasons. Secondly, if the lengths do affect efficiency, the total length of all the columns is likely the most important criterion, and even then, bookkeeping overhead means that a nice round number is not likely to be the best answer.

So you might find advice about limiting your row size to a particular amount so that the entire row fits into a page, or something along those lines. This is to reduce the number of disk I/Os per record. But the individual column sizes don't matter; it's the total that does.
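If you want a rough sense of total declared row width, a sketch like the following against the catalog views works; max_length is reported in bytes (so nvarchar already counts double) and is -1 for (max) types, so treat the result as a ballpark, not an exact on-disk size:

  -- Rough declared row width per table, in bytes. Ignores fixed row overhead,
  -- the NULL bitmap, and (max) columns (which report -1), so it is only a
  -- ballpark figure for spotting tables that might overflow an 8 KB page.
  SELECT t.name            AS table_name,
         SUM(c.max_length) AS declared_bytes
  FROM sys.tables  AS t
  JOIN sys.columns AS c ON c.object_id = t.object_id
  GROUP BY t.name
  ORDER BY declared_bytes DESC;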


It's not specifically a SQL Server question... I do the same thing in Oracle and MySQL. There's no particular reason other than the fact that perhaps I feel more comfortable using Base2 sizes.
