Recommended Hardware for Specific Number of Records in SQL Server Database

How many records are considered normal for a typical SQL Server database table? I mean, if some of the tables in the database contain something like three or four million records, should I consider replacing the hardware, partitioning tables, etc.? I have a query which joins only two tables and has four conditions in its WHERE clause, plus an ORDER BY. This query usually takes 3-4 seconds to execute, but once in every 10 or 20 executions it may take even longer (10 or 20 seconds). I don't think this is related to parameter sniffing, because I am recompiling the query every time. How can I improve my query so it executes in less than a second, and how can I tell what would achieve that? How can I know whether increasing the amount of RAM, adding a new hard drive, increasing CPU speed, or improving the indexes on the tables would boost performance? Any advice on this would be appreciated :)


4 million records is not a lot. Even Microsoft Access might manage that.

Even 3-4 seconds for a query is a long time. 95% of the time when you come across performance issues like this it comes down to:

  • Lack of appropriate indexes;
  • Poorly written query;
  • A data model that doesn't lend itself to writing performant queries;
  • Unparameterized queries thrashing the query cache;
  • MVCC (row versioning) disabled, so long-running transactions block SELECTs (this is how SQL Server behaves out of the box). See Better concurrency in Oracle than SQL Server? for more information on this.
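The first two points are usually addressed together: an index whose key columns match the WHERE predicates and the ORDER BY lets the engine seek and return rows in order, skipping both a scan and a sort. A minimal sketch for a query shaped like the asker's (two-table join, four filter conditions, one ORDER BY), using hypothetical table and column names:

```sql
-- Hypothetical schema: dbo.Orders joined to dbo.Customers, filtered on
-- Status and Region, sorted by OrderDate. A composite index keyed on the
-- filter columns with the ORDER BY column last lets SQL Server seek and
-- emit rows already sorted; INCLUDE covers the remaining SELECT columns
-- so no key lookups back to the clustered index are needed.
CREATE NONCLUSTERED INDEX IX_Orders_Status_Region_OrderDate
ON dbo.Orders (Status, Region, OrderDate)
INCLUDE (CustomerId, TotalAmount);
```

The column order matters: equality predicates first, then the sort column; a different predicate mix (ranges, ORs) may call for a different key order.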

None of which has anything to do with hardware.

Unless the records are enormous or the throughput is extremely high then hardware is unlikely to be the cause or solution to your problem.


Unless you're doing some heavy-weight joins, 3-4 million rows do not require any extraordinary hardware. I'd first investigate whether there are appropriate indexes and whether they are actually being used.
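One way to do that investigation, sketched below: turn on I/O and timing statistics around the slow query, then check the index usage DMV to see whether your indexes are being seeked or ignored (the table name `dbo.Orders` here is a hypothetical stand-in for your own):

```sql
-- Report logical reads and CPU/elapsed time for the statement that follows;
-- a large logical-read count usually points at a scan or key lookups.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- ... run the slow query here and inspect its actual execution plan ...

-- How each index on the table has been used since the last restart:
-- many user_scans and few user_seeks suggest the index doesn't fit the query.
SELECT i.name, s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS s
    ON s.object_id = i.object_id
   AND s.index_id  = i.index_id
WHERE i.object_id = OBJECT_ID(N'dbo.Orders');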

