
Changing the design of queries to improve performance

This is more of a design question, but it relates to SQL optimization as well.

My project has to import a large number of records into the database (more than 100k). During the import, the project checks each record against configurable criteria and marks it in the database as either "no warning" or "has warning". The inserting and the warning checking are done within a single import process.

Each criterion requires a query against the database. The query joins two other tables and sometimes adds a nested query inside the conditions, such as:

select * from TableA a
  join TableB on ...
  join TableC on ...
where
  (select count(*) from TableA
   where TableA.Field = Bla) > 100

Although each individual query takes a negligible amount of time, querying the entire record set takes a considerable amount of time, possibly 4 to 5 hours on a server, especially when there are many criteria. In the end the project stops running the import and rolls back.

I've tried changing "SELECT * FROM" to "SELECT TableA.ID FROM", but it seems to have no effect at all. Is there a better design to improve the performance of this process?


How about making a temp table (or more than one) that stores the aggregated results of the sub-queries, then indexing it with a covering index?

From your code above, we'd make a temp table grouping on TableA.Field1 and including a count, then index it on (Field1, theCount). On SQL Server the fastest approach would then be:

select * from TableA a
  join TableB on ...
  join TableC on ...
  join (select Field1 from #temp1 where theCount > 100) t on ...
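Here #temp1 would be built beforehand, along these lines (a minimal sketch using the Field1/theCount names above):

-- pre-aggregate once: one row per Field1 value with its count
select Field1, count(*) as theCount
into #temp1
from TableA
group by Field1;

-- covering index so the theCount > 100 filter and the join are index-only
create index IX_temp1 on #temp1 (Field1, theCount);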

The reason this works is that we are doing the same trick twice.

First, we pre-aggregate into the temp table, which is a simple operation and very easy for SQL Server to optimize. So we have taken a piece of the problem and solved it in an optimizable way.

Then we repeat this trick by joining to a subquery, putting the filter inside the subquery, so that the join acts as a filter.


I would suggest you batch your records together (500 or so at a time) and send each batch to a stored proc which can do the calculation.

Use simple statements instead of joins in there; that saves time as well.
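A minimal sketch of the batching shape, assuming SQL Server 2008+ table-valued parameters; the type and proc names, and the ID/Field1/HasWarning columns, are placeholders rather than anything from the question:

-- hypothetical table type the client fills with ~500 record IDs per call
create type dbo.RecordBatch as table (ID int primary key);
go

create procedure dbo.CheckBatch
  @batch dbo.RecordBatch readonly
as
begin
  set nocount on;

  -- flag each record in the batch whose Field1 value occurs
  -- more than 100 times (the criterion from the question)
  update a
  set HasWarning = 1
  from TableA a
  join @batch b on b.ID = a.ID
  where (select count(*) from TableA t
         where t.Field1 = a.Field1) > 100;
end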


A good choice is using an indexed view: http://msdn.microsoft.com/en-us/library/dd171921(SQL.100).aspx
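A minimal sketch, assuming SQL Server and the TableA.Field column from the question (an indexed view requires SCHEMABINDING, COUNT_BIG, and a unique clustered index):

-- materialize the per-Field counts so the criteria queries no longer
-- need the correlated count(*) sub-query
create view dbo.vTableAFieldCounts
with schemabinding
as
select Field, count_big(*) as theCount
from dbo.TableA
group by Field;
go

create unique clustered index IX_vTableAFieldCounts
on dbo.vTableAFieldCounts (Field);

Once the index exists, the optimizer can answer the count condition from the view directly (automatically on Enterprise edition, or via the NOEXPAND hint on other editions).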
