Speeding up an update trigger on a view
We've got a lot of one-to-one relationships in a pretty large database. A colleague created a view that pulls all of this information together for use as a single object in our .NET project.
The view has insert, update and delete triggers, which all work well. The only issue is that an update takes ~7 seconds to complete, which is too long. As the database is very well indexed (although the view is not), I believe the way the view executes is the problem.
The structure of the update trigger is as follows:
update t1 set col1 = i.col1, col2 = i.col2 .....
from inserted i
inner join table1 t1 on i.id = t1.id

update t2 set col1 = i.t2_col1, col2 = i.t2_col2 ......
from inserted i
inner join table2 t2 on i.id = t2.t1Id

if @@rowcount = 0
begin
    insert table2 (t1Id, col1, col2 ....)
    select i.t1Id, i.t2_col1, i.t2_col2 ..... from inserted i
end

/* There are an additional 9 update .... if @@rowcount = 0 ... insert statements */
So my understanding is that regardless of which property gets updated, all of the update statements are executed, and that is what's causing the performance issue.
I'm resigning myself to the fact that we're going to have to rewrite the data access layer to get rid of this view, but I thought I'd ask whether there are any SQL-based suggestions for speeding this trigger up.
Any help would be greatly appreciated
Well, without knowing exactly what you are doing it is hard to say. I would imagine you could add WHERE clauses to those update statements so the updates only run under certain conditions. You could also wrap the update statements in IF statements that check whether the inserted record(s) actually warrant running the update. You could also potentially index the view to help speed things up.
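As a sketch of the guard-clause idea, using the table and column names from the question (the exact column list is hypothetical, since only `col1`/`col2` appear in the post):

```sql
-- Sketch: skip the table2 update entirely unless at least one inserted
-- row actually differs from the stored row. The EXISTS probe is cheap
-- compared with running a no-op update across the join.
if exists (select 1
           from inserted i
           inner join table2 t2 on i.id = t2.t1Id
           where t2.col1 <> i.t2_col1 or t2.col2 <> i.t2_col2)
begin
    update t2
    set col1 = i.t2_col1, col2 = i.t2_col2
    from inserted i
    inner join table2 t2 on i.id = t2.t1Id
end
```

Note that plain `<>` comparisons miss NULL transitions; the null-handling variants discussed further down apply here too.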
Where you say:
regardless of which property gets updated, all of the update statements are getting executed
In a SQL Server trigger you can do something like:
IF UPDATE(Col1)
BEGIN
    -- do col1-specific updates
END
See the CREATE TRIGGER docs: http://msdn.microsoft.com/en-us/library/ms189799%28v=SQL.90%29.aspx
Also, when you say "well indexed", note that indexes speed up queries but slow down updates.
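Applied to the trigger in the question, that would look something like the sketch below (column names taken from the post). One caveat: `UPDATE()` only tells you a column appeared in the statement's SET list, not that its value actually changed, and it returns true for every column on an INSERT.

```sql
-- Sketch: only run the table2 update when one of table2's columns
-- was named in the UPDATE against the view.
IF UPDATE(t2_col1) OR UPDATE(t2_col2)
BEGIN
    update t2
    set col1 = i.t2_col1, col2 = i.t2_col2
    from inserted i
    inner join table2 t2 on i.id = t2.t1Id
END
```

With one such block per underlying table, an update that touches only table1's columns no longer pays for the other ten statements.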
I have found that sometimes, if you "point out the obvious" to SQL Server, it can perform better. That is, tell it to only do the update if there's actually something to change. For instance, where you have:
update t1 set col1 = i.col1, col2 = i.col2 .....
from inserted i
inner join table1 t1 on i.id = t1.id
You can add a where condition:
where t1.col1 <> i.col1 or t1.col2 <> i.col2 ...
(Assuming none of the columns are nullable - otherwise you need more checks.) It might be worth seeing whether you get any benefit from this.
If you have nullable columns, you'll need three checks for each such column, like so:
where
(t1.col1 <> i.col1 or (t1.col1 is null and i.col1 is not null) or (t1.col1 is not null and i.col1 is null)) or
(t1.col2 <> i.col2 or (t1.col2 is null and i.col2 is not null) or (t1.col2 is not null and i.col2 is null))
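Putting it together, the table1 statement from the question would become something like:

```sql
-- Sketch: null-safe "only update if something changed" filter,
-- using the table/column names from the question.
update t1
set col1 = i.col1, col2 = i.col2
from inserted i
inner join table1 t1 on i.id = t1.id
where (t1.col1 <> i.col1
       or (t1.col1 is null and i.col1 is not null)
       or (t1.col1 is not null and i.col1 is null))
   or (t1.col2 <> i.col2
       or (t1.col2 is null and i.col2 is not null)
       or (t1.col2 is not null and i.col2 is null))
```

A more compact equivalent in SQL Server is `where not exists (select i.col1, i.col2 intersect select t1.col1, t1.col2)`, since INTERSECT treats two NULLs as equal, but the explicit form above makes the intent obvious.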
A bit late to the party, but I've learned (the hard way) that the inserted and deleted pseudo-tables are weird beasts, especially as the amount of data involved grows.
Since you can't really index them or add statistics to them (AFAIK), the query plans will often take a non-optimal approach when dealing with the information in there.
Sometimes it actually helps to extract the relevant data from those pseudo-tables into #temp tables and work from there. Since in this case the inserted table is being used over and over again on the id field, it might be beneficial to work from a #temp table that has an index on the id field.
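A minimal sketch of that approach, assuming the view's key column is `id` as in the question:

```sql
-- Sketch: snapshot the inserted pseudo-table into an indexed #temp
-- table so every subsequent join runs against a keyed source instead
-- of the un-indexable pseudo-table.
select *
into #ins
from inserted

create unique clustered index ix_ins_id on #ins (id)

update t1
set col1 = i.col1, col2 = i.col2
from #ins i
inner join table1 t1 on i.id = t1.id

-- ...the remaining ten update/insert blocks read from #ins as well...
```

The copy and index build add a small fixed cost per trigger firing, which usually pays off once several statements reuse the same rows.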
Further reading: Slow join on Inserted/Deleted trigger tables