Temporary table vs Table variable
I have a table from which I need to delete around 1.5 million+ rows. Previously I was using a temporary table, and this caused the transaction log to grow in size quite quickly. The problem is that once I have finished with one result set, I need to move on to another with a further 1.5 million+ rows. The performance of this is rather slow, and I'm wondering if I should use a table variable rather than writing a table to the temp database.
EDIT
I use the temporary table when I select the initial 1.5 million+ records.
Side-stepping the table variable vs. temp table question, you're probably better off batching your deletes into smaller groups inside a while loop. That's your best bet for keeping the transaction log at a reasonable size.
Something like:
while (1 = 1)
begin
    delete top (1000)
    from YourTable
    where ...;

    if @@rowcount < 1000 break;  -- last batch was short: nothing left to delete
end /* while */
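One note on this pattern: batching only keeps the log small if the space committed by each batch can actually be reused between iterations. A sketch of what you might add at the end of each iteration, assuming the database is in the SIMPLE recovery model (under FULL recovery, regular log backups serve the same purpose):

checkpoint;                  -- SIMPLE recovery: lets committed log space be reused
waitfor delay '00:00:01';    -- optional pause to ease blocking on a busy table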
In general, I prefer using table variables over temp tables, if only because they're easier to use. I find few cases where the use of temp tables is warranted. You don't talk about how you're using temp tables in your routines, but I suggest benchmarking the two options.
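If it helps with that benchmarking, here's a minimal sketch for timing the two options side by side. YourTable comes from the question's code, but the Id column and the #ToDelete/@ToDelete names are placeholders I've assumed, not the actual schema:

set statistics time on;

-- Option 1: temporary table (materialized in tempdb; supports indexes and
-- column statistics)
create table #ToDelete (Id int primary key);
insert into #ToDelete (Id)
select Id from YourTable;

-- Option 2: table variable (no column statistics; the optimizer tends to
-- assume very few rows, which can hurt plans at the 1.5 million+ scale)
declare @ToDelete table (Id int primary key);
insert into @ToDelete (Id)
select Id from YourTable;

set statistics time off;
drop table #ToDelete;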
A table variable is often not suitable for such large result sets; it's more appropriate for small numbers of rows. You'd likely find that the table variable's data would be written to tempdb anyway due to its size. Personally, I have found table variables to be much slower than temporary tables when dealing with large result sets. In an example at the end of this article on SQL Server Central, with 1 million rows in a table of each type, the query using the temporary table took less than a sixth of the time to complete. I've also often found table variables to suffer performance-wise when I have to join them to real tables in a query.
If the performance is slow, it may be at least partly down to the settings on the database itself. Is it set to grow automatically? What's its recovery model?
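If you want to check both, a quick sketch (the database name is a placeholder):

select name, recovery_model_desc
from sys.databases
where name = 'YourDatabase';

-- run inside the target database; shows each file's autogrow configuration
select name, size, growth, is_percent_growth
from sys.database_files;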