Query times out in SP, but runs fine in query analyzer
I seem to be facing a strange issue in SQL 2008.
I have a query which runs fine and fast from Query Analyzer, but times out if run through a stored procedure! The SP starts with this query and has no other code before it:
SELECT col1, col2
FROM TBL1 WITH (NOLOCK)
INNER JOIN TBL2 WITH (NOLOCK)
    ON tbl1.col = LEFT(tbl2.col1, LEN(tbl2.col1) - 2)
   AND tbl1.col2 = RIGHT(tbl2.col1, 2)
   AND tbl1.col4 = 2233
   AND tbl1.date1 BETWEEN tbl2.date1 AND ISNULL(tbl2.date2, GETDATE())
Please note that TBL1 is actually a view, where col and col2 come from a self join. Also, per a business requirement, tbl2.col1 needs to hold the concatenated value. If required to solve this issue, I can modify my view, though.
As a side note, if you can make some assumptions about string lengths, your join expression can be simplified (and may perform better, because one side now uses plain equality):
SELECT tbl1.col1, tbl1.col2
FROM
TBL1
INNER JOIN TBL2
ON tbl1.col + tbl1.col2 = tbl2.col1
AND tbl1.col4=2233
AND tbl1.date1 BETWEEN tbl2.date1 AND Coalesce(tbl2.date2, GetDate())
Also, if you're looking for the best performance, try this:
ALTER TABLE TBL2 ADD LeftPart AS (LEFT(col1, LEN(col1) - 2));
ALTER TABLE TBL2 ADD RightPart AS (RIGHT(col1, 2));
CREATE NONCLUSTERED INDEX IX_TBL2_Parts ON TBL2 (LeftPart, RightPart);
Now you can just join like so:
SELECT tbl1.col1, tbl1.col2
FROM
TBL1
INNER JOIN TBL2
ON tbl1.col = tbl2.LeftPart
AND tbl1.col2 = tbl2.RightPart
AND tbl1.col4=2233
AND tbl1.date1 BETWEEN tbl2.date1 AND Coalesce(tbl2.date2, GetDate())
Even better, change your database design to actually store the TBL2.col1 data in two columns. You're violating first normal form by putting two distinct pieces of data in one column, and now, as you're discovering, you're paying for it throughout your application in terms of performance, development & maintenance time, query complexity, and so on.
You could even reverse my scheme so that the LeftPart and RightPart columns are real, and create a new computed column with the Col1 name, with an index to materialize the values and make them searchable. Finally, if absolutely required, you could rename the table, create a view on the table using the old name, and then put an INSTEAD OF trigger on the view to intercept data operations against the table and translate them into the correct schema.
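As a sketch of that last idea (object names like TBL2_Base and trg_TBL2_Insert are assumptions, not from your schema), the rename-plus-view-plus-trigger approach might look like:

```sql
-- Sketch only: assumes TBL2 now stores LeftPart/RightPart as real
-- columns and is renamed so a compatibility view can take its name.
EXEC sp_rename 'TBL2', 'TBL2_Base';
GO
CREATE VIEW TBL2
AS
    -- Expose the old single-column shape to existing code.
    SELECT LeftPart + RightPart AS col1, date1, date2
    FROM TBL2_Base;
GO
CREATE TRIGGER trg_TBL2_Insert ON TBL2
INSTEAD OF INSERT
AS
BEGIN
    -- Split the concatenated value back into its two real columns.
    INSERT INTO TBL2_Base (LeftPart, RightPart, date1, date2)
    SELECT LEFT(col1, LEN(col1) - 2), RIGHT(col1, 2), date1, date2
    FROM inserted;
END;
```

Existing INSERT statements against the old name keep working, while the data lands in the normalized columns.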
Update
By the way, if you have any influence on table design you may want to consider using an "open ended date" value of '99991231' or some such for tbl2.date2 rather than NULLs. That Coalesce can kill performance, sometimes forcing a scan when a seek would have been possible.
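To illustrate (a sketch, reusing the LeftPart/RightPart columns from above), with a sentinel end date the range predicate becomes a plain column comparison that an index seek can satisfy:

```sql
-- Backfill NULL end dates with an open-ended sentinel value.
UPDATE TBL2 SET date2 = '99991231' WHERE date2 IS NULL;

-- No more Coalesce/IsNull wrapped around the column:
SELECT tbl1.col1, tbl1.col2
FROM TBL1
INNER JOIN TBL2
    ON tbl1.col = tbl2.LeftPart
   AND tbl1.col2 = tbl2.RightPart
   AND tbl1.col4 = 2233
   AND tbl1.date1 BETWEEN tbl2.date1 AND tbl2.date2;
```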
I've run into similar situations in the past that were caused by parameter sniffing. You might try one of the standard workarounds and see if it makes a difference.
What happens is that when you run the stored procedure for the first time, SQL Server caches its execution plan and reuses it going forward. If the first call's parameters produce a plan that is not optimal for later calls, you can see the behavior you are describing.
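One common workaround is to copy the procedure's parameters into local variables, so the plan is built from average density statistics rather than the sniffed first-call values (procedure, table, and parameter names here are made up for illustration):

```sql
CREATE PROCEDURE GetRows
    @min INT,
    @max INT
AS
BEGIN
    -- Copy parameters into locals; the optimizer cannot sniff
    -- local variables, so it compiles a "generic" plan instead
    -- of one tailored to the first call's values.
    DECLARE @localMin INT = @min;
    DECLARE @localMax INT = @max;

    SELECT id, name
    FROM tableName
    WHERE id BETWEEN @localMin AND @localMax;
END;
```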
You can also use the RECOMPILE query hint to make sure a new execution plan is built every time the query executes. To do this, add OPTION (RECOMPILE) to the end of your query:
SELECT id, name
FROM tableName
WHERE id BETWEEN @min AND @max
OPTION (RECOMPILE);
There are several solutions to the parameter sniffing problem, including OPTION (RECOMPILE), the OPTIMIZE FOR hint, and copying parameters into local variables.
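For completeness, here is what the OPTIMIZE FOR hint looks like (available in SQL Server 2008; the literal values are placeholders, not recommendations):

```sql
-- Build the plan for a representative value instead of the sniffed one:
SELECT id, name
FROM tableName
WHERE id BETWEEN @min AND @max
OPTION (OPTIMIZE FOR (@min = 1, @max = 1000));

-- Or ignore the parameter values entirely and use average statistics:
SELECT id, name
FROM tableName
WHERE id BETWEEN @min AND @max
OPTION (OPTIMIZE FOR UNKNOWN);
```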