SQL query result size - does it matter much?
I have an SQL query (MySQL DB, using .NET's SqlClient) that returns a dataset. Is there a significant difference in runtime if the dataset is 4,000 rows long rather than 20?
Also, how much of the query's execution time is spent on opening a connection and the like, compared to the time it takes to fill the results table? Thanks.
Clarification (edit):
Each of the 4,000 rows represents a folder's name. I want to filter them so that users see only the folders of interest to them instead of all of them. I'm not sure which is better: filtering before getting the names from the DB (which is on a different computer), which might make my query more complicated, or filtering only the view (the tree the user sees). Those 4,000 rows might turn into 40,000, but I'm not sure that's relevant to this issue. Thanks again.

Editing again: The DB is on the network, but the connection is quite fast, let's say 100 Mbit.
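To make the first option concrete, filtering in the query itself would look something like the sketch below. The folders table, name column, and @prefix parameter are made-up names for illustration; note too that SqlClient actually targets SQL Server, so with MySQL you would use the equivalent MySqlConnection/MySqlCommand classes from MySQL's Connector/NET.

    using System.Data;
    using System.Data.SqlClient; // for MySQL, substitute MySql.Data.MySqlClient

    class FolderLoader
    {
        // Returns only the folder names matching the given prefix,
        // so unwanted rows never cross the network.
        public static DataTable LoadMatchingFolders(string connectionString, string prefix)
        {
            var table = new DataTable();
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(
                "SELECT name FROM folders WHERE name LIKE @prefix", connection))
            {
                command.Parameters.AddWithValue("@prefix", prefix + "%");
                using (var adapter = new SqlDataAdapter(command))
                {
                    adapter.Fill(table); // opens and closes the connection itself
                }
            }
            return table;
        }
    }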
No, there's no significant difference. Selecting 4,000 rows won't take 200 times as long as selecting 20 rows; the difference is minimal, and I think you won't notice it anyway.
But you could run into problems if your network connection is too slow, so you should take a look at that.
The time spent opening, or rather establishing, a connection is negligible. I also don't think fetching records on the order of thousands will take any considerable time, since databases are highly optimized to serve millions of records. However, as the dataset grows, the performance lag will show up at runtime in your application rather than in the query's fetch time.
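If you want numbers for your own setup, the two phases are easy to time separately. A rough sketch follows; the connection string and query are placeholders, and note that after the first Open() call, ADO.NET connection pooling makes subsequent opens nearly free.

    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.Diagnostics;

    class QueryTimer
    {
        static void Main()
        {
            var connectionString = "...";          // placeholder
            var sql = "SELECT name FROM folders";  // placeholder query

            using (var connection = new SqlConnection(connectionString))
            {
                var watch = Stopwatch.StartNew();
                connection.Open(); // cheap once the connection pool is warm
                Console.WriteLine("Open: {0} ms", watch.ElapsedMilliseconds);

                watch.Restart();
                var table = new DataTable();
                using (var adapter = new SqlDataAdapter(sql, connection))
                {
                    adapter.Fill(table); // execute + network transfer + populate
                }
                Console.WriteLine("Fill: {0} ms for {1} rows",
                    watch.ElapsedMilliseconds, table.Rows.Count);
            }
        }
    }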
Yes and no. Forget execution time; there is practically none. It comes down to network transfer speed (if the database is on another computer) and your own local behavior: not so much the dataset itself, but what you do with it. A browser takes longer to render a table with 4,000 rows than one with 20, etc.
Within those limits the impact should be low. It gets more interesting if 4,000 turns into 40,000.
Ask yourself this: if you have to fetch a file 200 times as big from a web server, will it take longer? The difference on the DB side depends on the DB design, but it won't be an issue. The client-side cost of putting the data into rows is also pretty much a non-issue. The real cost will be transferring the data over the network.
Of course, your network connection may be fast enough that even that is insignificant.
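To put rough numbers on that: assuming a folder-name row costs on the order of 100 bytes on the wire (an assumption, not a measurement), 4,000 rows is about 400 KB, which a 100 Mbit/s link transfers in a few tens of milliseconds; even 40,000 rows would be roughly 4 MB and still well under a second.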
First, there is network I/O cost, which is exactly linear in the size of the result.
Then there are things like buffer sizes and packet sizes and cache sizes, which can make a query fall off a non-linear cliff once it reaches a certain size.
And then, if your table is such that the first 20 matching rows can be quickly found, whereas the 21st row only appears after a lengthy scanning process, then limiting the result to 20 rows will yield much faster results.
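In MySQL that early cut-off is just a LIMIT clause, e.g. SELECT name FROM folders LIMIT 20 (table and column names made up; SQL Server would use SELECT TOP (20) ... instead).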
What you need to do is profile your particular setup. That is the only way to be sure.