SQL Server: How to compare speeds of simple queries
I have a big query and I am trying to improve it part by part. However, due to the caching mechanism and the simplicity of the T-SQL code, I don't have a reliable environment for testing speeds. The queries I am trying to speed up all last about 1 or 2 seconds, so I can't see the difference clearly, and creating dummy data for each comparison takes too much time. What do you suggest I do? I am using my company's database, so clearing the cache every time could be harmful, I guess.
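(For reference, these are the commands usually used to get cold-cache timings on a dedicated test server; as noted above, running them against a shared company database is a bad idea, since every later query pays the cost of rewarming the caches.)

-- Flush dirty pages, then drop clean pages from the buffer pool
-- so the next run reads from disk. Test servers only!
CHECKPOINT;
DBCC DROPCLEANBUFFERS;

-- Optionally also discard all cached query plans.
DBCC FREEPROCCACHE;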
Edit: After reading all the comments, I did some experimenting and got some ideas. But is looking at all those values in the statistics really what I need?
Here are the problems that I faced:
Execution Plan: First I ran some queries and looked at the Execution Plan. At the top, Query cost (relative to the batch) never shows a value other than 0.00%, even when my query lasts more than 1 minute. All I get is 0.00%, and under the graphs all the values are 0%.
DB Statistics: Now I am testing two queries. One of them is

SELECT * FROM My_TABLE /* WHERE
my_primarykey LIKE '%ht_atk%' */

and the second one is the comment-free version:

SELECT * FROM My_TABLE WHERE
my_primarykey LIKE '%ht_atk%'

Here are my results from DB Statistics. First query:
Application Profile Statistics
Timer resolution (milliseconds) 0 0
Number of INSERT, UPDATE, DELETE statements 0 0
Rows effected by INSERT, UPDATE, DELETE statements 0 0
Number of SELECT statements 2 2
Rows effected by SELECT statements 16387 15748,4
Number of user transactions 7 6,93182
Average fetch time 0 0
Cumulative fetch time 0 0
Number of fetches 0 0
Number of open statement handles 0 0
Max number of opened statement handles 0 0
Cumulative number of statement handles 0 0
Network Statistics
Number of server roundtrips 3 3
Number of TDS packets sent 3 3
Number of TDS packets received 252 242,545
Number of bytes sent 868 861,091
Number of bytes received 1,01917e+006 981160
Time Statistics
Cumulative client processing time 0 0,204545
Cumulative wait time on server replies 25 10,0455
Second Query:
Application Profile Statistics
Timer resolution (milliseconds) 0 0
Number of INSERT, UPDATE, DELETE statements 0 0
Rows effected by INSERT, UPDATE, DELETE statements 0 0
Number of SELECT statements 2 2
Rows effected by SELECT statements 14982 15731,3
Number of user transactions 5 6,88889
Average fetch time 0 0
Cumulative fetch time 0 0
Number of fetches 0 0
Number of open statement handles 0 0
Max number of opened statement handles 0 0
Cumulative number of statement handles 0 0
Network Statistics
Number of server roundtrips 3 3
Number of TDS packets sent 3 3
Number of TDS packets received 230 242,267
Number of bytes sent 752 858,667
Number of bytes received 932387 980076
Time Statistics
Cumulative client processing time 1 0,222222
Cumulative wait time on server replies 8 10
Every single time I execute, the values change randomly, and I can't get a clear picture of which query is faster.
Lastly, when I do this:

SET STATISTICS TIME ON
SET STATISTICS IO ON
For both queries, the results are the same:
Table 'my_TABLE'. Scan count 1, logical reads 682, physical reads 0, read-ahead reads 0.
So again I couldn't make a comparison between the two queries. How should I interpret the results? Am I looking in the wrong place? How can I compare those two simple queries above?
Run SET STATISTICS TIME ON and SET STATISTICS IO ON, then run the big query in text mode. You can put some PRINT statements after each part of the query you want to optimize, as in the sketch below the sample output.
You will get lines like:
Table 'Table'. Scan count 1, logical reads 10, physical reads 0, read-ahead reads 0, lob logical reads 387, lob physical reads 0, lob read-ahead reads 0.
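A minimal sketch of that pattern (the two SELECTs here are just stand-ins for the parts of your big query):

SET STATISTICS TIME ON;
SET STATISTICS IO ON;

PRINT '--- part 1 ---';
SELECT * FROM My_TABLE WHERE my_primarykey LIKE '%ht_atk%';

PRINT '--- part 2 ---';
SELECT COUNT(*) FROM My_TABLE;

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;

Each PRINT shows up in the Messages output, so you can match the time and IO lines to the part of the query that produced them.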
Try to put some data in the tables and watch the Scan count and logical reads values for big numbers.
You can also check the Actual Execution Plan and search for any clustered index scan. This may indicate that there is a missing index in some table.
Use the query analyzer to find out the expensive parts of your query (this depends on DB statistics, so use representative data).
This will let you zero in on the parts you should optimize.
Trying to time things with a stopwatch or looking at the time it takes for the results to return to SSMS will be guesswork at best.
A good way is to look at the execution plan. It tells you a lot about how the query will execute and what is taking most of the time, and you can even decide to create indexes on that basis. It is very useful, especially for large queries. SQL Server finds the best possible way to execute a query most of the time, but you can help it by providing an index on the fields that are used in WHERE and JOIN clauses. If you cannot read the execution plan, which is shown as a graph with estimated costs and timings, you can read about it in detail on MSDN.
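For example, a minimal sketch of adding such an index (the table and column are the ones from the question; the index name is made up):

CREATE NONCLUSTERED INDEX IX_My_TABLE_my_primarykey
ON My_TABLE (my_primarykey);

Keep in mind that an index can only be seeked for patterns with a fixed prefix, such as LIKE 'ht_atk%'; a leading wildcard like LIKE '%ht_atk%' still forces a scan of every row.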
In query analyzer go to Query > Include Actual Execution Plan and Query > Include Client Statistics.
Use the Execution Plan to identify the most costly parts of your query. When you mouse over or click any of the nodes it will show you a whole group of statistics. Try to see if you can rework a join or filter to reduce the number of rows returned.
Use Client Statistics to compare two queries. Each time you run your query it will add a new column to the client stats page. You want to look at the bottom group: Time Statistics.
I know some of these are obvious, but here are a few general tips for reducing your load:
- Return only the columns you need. Sometimes people return all columns, or identifier columns they use in code but the end user doesn't need.
- For each table, reduce the number of rows returned.
- Try not to use temp tables when you don't have to. This causes a 'double dip': querying the same very large table multiple times.
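A small illustration of the first tip (some_column is a placeholder; the point is just to name the columns explicitly instead of using *):

-- Instead of dragging every column across the wire:
SELECT * FROM My_TABLE;

-- ask only for what the caller actually uses:
SELECT my_primarykey, some_column FROM My_TABLE;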
As @affan said, the best way is to use the information given by the execution plan, but you can always set up a simple timer with code like
IF @debug > 0
BEGIN
    -- remember the start time
    DECLARE @now DATETIME;
    SET @now = CURRENT_TIMESTAMP;
END
and
IF @debug > 0
BEGIN
    -- elapsed time since @now, in seconds
    SELECT DATEDIFF(ms, @now, CURRENT_TIMESTAMP) / 1000.0 AS Runtime;
END
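Put together (with @debug declared at the top, which the fragments above assume already exists), the pattern looks like this; the COUNT(*) is just a stand-in for the part of the big query being timed:

DECLARE @debug INT;
SET @debug = 1;
DECLARE @now DATETIME;

IF @debug > 0 SET @now = CURRENT_TIMESTAMP;

-- ... the part of the big query you are timing ...
SELECT COUNT(*) FROM My_TABLE;

IF @debug > 0
    SELECT DATEDIFF(ms, @now, CURRENT_TIMESTAMP) / 1000.0 AS Runtime;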