
Does the database (maximum) field length affect performance?

At my company, we have a legacy database with various tables and therefore many, many fields.

A lot of the fields seem to have large limits (e.g. NVARCHAR(MAX)) that are never reached.

Does arbitrarily making fields their maximum width, or two to three times larger than what is normally entered, negatively affect performance?

How should one balance performance with field lengths? Is there a balance?


There are two parts to this question:

Does using NVARCHAR over VARCHAR hurt performance? Yes. Storing data in Unicode fields doubles the storage requirements: data stored in those fields is twice the size it needs to be (at least until SQL Server 2008 R2, which introduced Unicode compression). Your table scans will take twice as long, and only half as much data can fit in memory in the buffer cache.
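You can see the doubling directly with `DATALENGTH`, which returns the number of bytes actually stored. A minimal sketch (the variable names are just illustrative):

```sql
-- NVARCHAR stores UCS-2, two bytes per character; VARCHAR stores one.
DECLARE @v VARCHAR(50)  = 'hello';
DECLARE @n NVARCHAR(50) = N'hello';

SELECT DATALENGTH(@v) AS varchar_bytes,   -- 5 bytes
       DATALENGTH(@n) AS nvarchar_bytes;  -- 10 bytes for the same text
```

Note that the *declared* length (50 here) does not affect the bytes stored for a variable-length type; the Unicode-vs-ANSI choice is what doubles them.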

Does using MAX hurt performance? Not directly, but if you use VARCHAR(MAX), NVARCHAR(MAX), and those kinds of fields, and you need to index the table, you won't be able to rebuild those indexes online in SQL Server 2005/2008/R2. (Denali brings some improvements around tables with MAX fields, so some indexes can be rebuilt online.)
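As a hedged sketch of that restriction (`dbo.Docs` is a hypothetical table): on SQL Server 2005/2008/R2, an online rebuild fails when the index's leaf level contains a MAX column, because the clustered index carries every column.

```sql
-- Hypothetical table with a MAX column in the clustered index leaf level.
CREATE TABLE dbo.Docs (
    Id   INT IDENTITY PRIMARY KEY,  -- clustered index
    Body NVARCHAR(MAX)              -- LOB column rides along in the leaf
);

-- On 2005/2008/R2 this raises an error along the lines of
-- "An online operation cannot be performed ... because the index
--  contains column 'Body' of data type ... nvarchar(max) ...".
ALTER INDEX ALL ON dbo.Docs REBUILD WITH (ONLINE = ON);

-- The offline rebuild works, but takes the table lock you were
-- trying to avoid:
ALTER INDEX ALL ON dbo.Docs REBUILD WITH (ONLINE = OFF);
```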


Also, the query optimizer estimates how many rows fit on a page (and how much memory a query will need) based on the declared field lengths, not the actual data. If you have a lot of varchar fields declared much larger than necessary, SQL Server can internally guess the wrong number of rows and over-allocate memory.
