Does the speed of the query depend on the number of rows in the table?

Let's say I have this query:

select * from table1 r where r.x = 5

Does the speed of this query depend on the number of rows that are present in table1?


There are many factors that affect the speed of a query, one of which is the number of rows.

Others include:

  • index strategy (if you index column "x", you will see better performance than if it's not indexed)
  • server load
  • data caching - once you've executed a query, the data will be added to the data cache, so subsequent reruns will be much quicker because the data comes from memory instead of disk - at least until the data is evicted from the cache
  • execution plan caching - to a lesser extent. Once a query is executed for the first time, the execution plan SQL Server comes up with will be cached for a period of time, for future executions to reuse.
  • server hardware
  • the way you've written the query (often one of the biggest contributors to poor performance!), e.g. writing something using a cursor instead of a set-based operation
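The indexing point is easy to see directly. Here's a minimal sketch using SQLite's EXPLAIN QUERY PLAN (the table, column, and index names are made up for illustration); the same query goes from a full table scan to an index search once the index exists:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (x INTEGER, y TEXT)")
conn.executemany("INSERT INTO table1 VALUES (?, ?)",
                 [(i % 100, "row") for i in range(10_000)])

def plan(sql):
    # EXPLAIN QUERY PLAN reports whether SQLite will scan the table or use an index;
    # the fourth column of each result row holds the human-readable detail
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM table1 WHERE x = 5"
before = plan(query)                                   # no index yet: full scan
conn.execute("CREATE INDEX idx_table1_x ON table1 (x)")
after = plan(query)                                    # now: search via the index

print(before)
print(after)
```

Other engines expose the same information under a different name (e.g. SET SHOWPLAN in SQL Server, EXPLAIN in MySQL/PostgreSQL).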

For databases with a large number of rows in tables, partitioning is usually something to consider (SQL Server 2005 onwards has built-in support in Enterprise Edition). This splits the data down into smaller units. Generally, smaller units = smaller tables = smaller indexes = better performance.


Yes, and it can be very significant.

If there are 100 million rows, SQL Server has to go through each of them and see if it matches. That takes much more time than if there were only 10 rows.

You probably want an index on the 'x' column, in which case SQL Server can check the index rather than going through all the rows - which can be significantly faster, as it may not even need to examine every value in the index.

On the other hand, if there are 100 million rows matching x = 5, the query will still be slower than one that returns 10 rows.
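That selectivity point can be sketched with SQLite (the 90,000/10 split of values is invented for illustration): even with an index in place, a common value returns far more rows, and takes correspondingly longer, than a rare one.

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (x INTEGER)")
# Skewed data: x = 5 matches 90,000 rows, x = 7 matches only 10
conn.executemany("INSERT INTO table1 VALUES (?)",
                 [(5,)] * 90_000 + [(7,)] * 10)
conn.execute("CREATE INDEX idx_table1_x ON table1 (x)")

def fetch(value):
    # Time how long it takes to fetch all matching rows
    start = time.perf_counter()
    rows = conn.execute("SELECT * FROM table1 WHERE x = ?", (value,)).fetchall()
    return len(rows), time.perf_counter() - start

many, t_many = fetch(5)   # 90,000 matches - the bulk of the table
few, t_few = fetch(7)     # 10 matches - the index narrows this down quickly
print(many, few)
```

The index helps locate the first matching row, but the work of reading and returning 90,000 rows still has to be done.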


Almost always yes. The real question is: what is the rate at which the query slows down as the table size increases? And the answer is: by not much if r.x is indexed, and by a large amount if not.


Not the rows per se (to a certain degree, of course), but the amount of data (columns) is what can make a query slow. The data also needs to be transferred from the backend to the frontend.


The answer is yes, but it is not the only factor. If you do the appropriate optimization and tuning, the performance drop will be negligible. The main performance factors:

  • Indexing (clustered or non-clustered)
  • Data Caching
  • Table Partitioning
  • Execution Plan caching
  • Data Distribution
  • Hardware specs

There are some other factors, but these are the main ones to consider. Even how you design your schema affects performance.


You should assume that your query always depends on the number of rows. In fact, you should assume the worst case: linear, or O(N), for the example you provided, and exponential for more complex queries. There are database-specific manuals filled with tricks to help you avoid the worst case, but SQL itself is a language and doesn't specify how to execute your query. Instead, the database implementation decides how to execute any given query: if you have indexed a column or set of columns, you will get O(log(N)) performance for a simple lookup; if the system has effective query caching, you might get an O(1) response. Here is a good introductory article: High Scalability: SQL and computational complexity.
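A toy model of the two complexity classes above - a linear scan for an unindexed column versus a binary search over a sorted (indexed) column - makes the difference in growth rate concrete. The step counters here are purely illustrative, not how a real engine accounts for work:

```python
def linear_steps(rows, target):
    # Unindexed lookup: every row may need to be examined - O(N)
    steps = 0
    for value in rows:
        steps += 1
        if value == target:
            break
    return steps

def binary_steps(sorted_rows, target):
    # Index lookup modeled as binary search over sorted data - O(log N)
    steps, lo, hi = 0, 0, len(sorted_rows)
    while lo < hi:
        steps += 1
        mid = (lo + hi) // 2
        if sorted_rows[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return steps

rows = list(range(1_000_000))
print(linear_steps(rows, 999_999))  # on the order of 1,000,000 comparisons
print(binary_steps(rows, 999_999))  # on the order of 20 comparisons
```

Doubling the table size adds one step to the binary search but a million steps to the scan - which is why indexing changes the answer to the original question so dramatically.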
