SQL Server stored procedure intermediate tables

In SQL Server 2005, I have a query that involves a bunch of large-ish joins (each table is on the order of a few thousand to a few million rows, and the tables average roughly the equivalent of 10-15 columns of integers and datetimes).

To make the query faster, I am thinking about splitting the one big query into a stored procedure that does a couple of the joins, stores that result in a temporary table, and then joins that temporary table with another temporary table that was also the result of a few joins.

I am currently using table variables to store the intermediate results, and the one-off performance is noticeably better. But in production, tempdb seems to be hitting an IO bottleneck.
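To make the two approaches concrete, here is a minimal sketch of the pattern being described; the table and column names (`Orders`, `Customers`, etc.) are hypothetical stand-ins for the real joins:

```sql
-- Current approach: a table variable holds an intermediate join result.
DECLARE @Intermediate TABLE (OrderID int PRIMARY KEY, CustomerID int, OrderDate datetime);

INSERT INTO @Intermediate (OrderID, CustomerID, OrderDate)
SELECT o.OrderID, o.CustomerID, o.OrderDate
FROM dbo.Orders AS o
JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID;

-- Alternative: a local temp table, which (unlike a table variable)
-- supports column statistics and additional indexes.
SELECT o.OrderID, o.CustomerID, o.OrderDate
INTO #Intermediate
FROM dbo.Orders AS o
JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID;

CREATE INDEX IX_Intermediate_CustomerID ON #Intermediate (CustomerID);
```

Note that both table variables and #temp tables are backed by tempdb, which matters for the bottleneck described above.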

Is there a better way to think about solving such a problem? I mean, is using table variables way off base here?


Table variables can take up quite a lot of memory in tempdb.

In large production environments, I have seen better SQL coders than I use standard tables for this purpose; they are essentially temp tables, but created as regular tables with a special prefix or suffix. This has the added benefit (as with temp tables) of being able to use indexes to speed up execution.
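A sketch of that convention, assuming an arbitrary `stg_` prefix and hypothetical column names:

```sql
-- A permanent "staging" table in the user database rather than tempdb.
-- The stg_ prefix is just a naming convention marking it as intermediate data.
CREATE TABLE dbo.stg_OrderSummary (
    OrderID    int      NOT NULL,
    CustomerID int      NOT NULL,
    OrderDate  datetime NOT NULL,
    CONSTRAINT PK_stg_OrderSummary PRIMARY KEY (OrderID)
);

-- Unlike a table variable, it can carry secondary indexes
-- to support the later joins against it.
CREATE INDEX IX_stg_OrderSummary_CustomerID
    ON dbo.stg_OrderSummary (CustomerID, OrderDate);
```

Because the table lives in a user database, its data and indexes persist across procedure executions and put no pressure on tempdb.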

If you can use a standard table, or a temp table that is shared by all steps of your complex execution, you may be able to relieve the pressure on tempdb.

Think of it as a place to cache data. In fact, you can update this "cache" each time your main stored procedure runs, just make sure to utilize appropriate transactions and locking.
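A minimal sketch of such a refresh step, wrapped in a transaction so concurrent readers never see a half-populated cache; the table and column names are hypothetical:

```sql
-- Refresh the "cache" table atomically each time the main procedure runs.
BEGIN TRANSACTION;

    -- TABLOCKX takes an exclusive table lock for the duration of the refresh,
    -- blocking other sessions until the new data is committed.
    DELETE FROM dbo.stg_OrderSummary WITH (TABLOCKX);

    INSERT INTO dbo.stg_OrderSummary (OrderID, CustomerID, OrderDate)
    SELECT o.OrderID, o.CustomerID, o.OrderDate
    FROM dbo.Orders AS o
    JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID;

COMMIT TRANSACTION;
```

The exclusive lock is the blunt-but-safe choice here; finer-grained schemes (e.g. swapping between two staging tables) trade complexity for less blocking.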

Imagine the alternative -- If you use an enormous table variable in a stored procedure, and the stored procedure is executed 10 or 20 times simultaneously... that table variable may no longer be living just in memory.

