Performance Overhead on SQL Server Temp Tables

I use temp tables frequently to simplify data loads (easier debugging, cleaner SELECT statements, etc.). If performance demands it, I'll create a physical table instead.

I noticed recently that I automatically declare my temp tables as global (##temp_load) as opposed to local (#temp_table). I don't know why, but that's been my habit for years. I never need the tables to be global, but I'm curious whether there is additional overhead to creating them as global, and whether I should work on changing my habit.

Are there additional risks for making them global?


Non-global temp tables are pretty much guaranteed never to collide: each session gets its own private copy, so two sessions can create #temp_load at the same time without conflict.

Global temp tables are similar to materialized tables in that the name must be unique across the entire instance: only one ##temp_load can exist at a time, no matter how many sessions want one.
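
A quick way to see the scoping difference (table and column names here are just illustrative):

    -- Session 1 and Session 2 can both run this at the same time:
    -- each session gets its own private copy of #temp_load.
    CREATE TABLE #temp_load (id INT, val VARCHAR(50));

    -- By contrast, only one ##temp_load can exist on the instance;
    -- the second session to run this gets an "object already exists" error.
    CREATE TABLE ##temp_load (id INT, val VARCHAR(50));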

As a rule, only use ##GLOBAL_TEMP tables when you must.

Otherwise, if you are writing a proc that could be run more than once simultaneously, the executions will interact with each other in unpredictable ways, making bugs extremely difficult to troubleshoot: Instance 1 can change data being used by Instance 2, which causes Instance 3 to generate incorrect results as well.
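
A minimal sketch of how that interaction happens (the proc and table names are hypothetical):

    CREATE PROCEDURE dbo.usp_load_example
    AS
    BEGIN
        -- Every concurrent execution shares the same ##staging table.
        IF OBJECT_ID('tempdb..##staging') IS NULL
            CREATE TABLE ##staging (id INT, amount MONEY);

        INSERT INTO ##staging (id, amount)
        SELECT id, amount FROM dbo.source_data;

        -- A second execution inserts into the SAME table, so this
        -- total silently includes the other execution's rows.
        SELECT SUM(amount) AS batch_total FROM ##staging;

        -- ...and this DROP pulls the table out from under it.
        DROP TABLE ##staging;
    END;

Changing ##staging to #staging gives each execution its own private copy, and the interaction disappears.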

My personal opinion on temp tables is that I only use them when:

  • I have a medium-to-large resultset (more than 1m rows)
  • I will need to index that resultset (see the sketch after this list)
  • I will not need to use that resultset more than once per iteration of the process
  • I am confident I will not need to resume the process at any point
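
For the cases that do pass those tests, the pattern looks roughly like this (table and index names are illustrative):

    -- Stage a medium-to-large intermediate result set once...
    SELECT o.customer_id, SUM(o.total) AS order_total
    INTO #customer_totals
    FROM dbo.orders AS o
    GROUP BY o.customer_id;

    -- ...and index it for the single pass that consumes it.
    CREATE CLUSTERED INDEX ix_customer_totals
        ON #customer_totals (customer_id);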

I highlighted that last bullet because this is the main reason I try to minimize temp table use:

If you have a long-running process that uses temp tables to store intermediate data sets, and something dies, say, 90% of the way through, you usually have to restart from the beginning, because that data is gone unless it was kept in a materialized table.

Some of my processes run for days on billions of rows of data, so I am not interested in restarting from scratch ever.
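
One sketch of the restartable alternative, assuming the work can be partitioned into batches (the staging table and source names are hypothetical):

    -- Intermediate data lives in a real table, keyed by batch,
    -- so a crashed run can resume instead of starting over.
    CREATE TABLE dbo.load_staging (
        batch_id INT          NOT NULL,
        row_id   BIGINT       NOT NULL,
        payload  VARCHAR(200) NULL,
        CONSTRAINT pk_load_staging PRIMARY KEY (batch_id, row_id)
    );

    -- On restart, skip any batch that already has rows staged
    -- (a real process would track batch completion explicitly).
    INSERT INTO dbo.load_staging (batch_id, row_id, payload)
    SELECT s.batch_id, s.row_id, s.payload
    FROM dbo.source_rows AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.load_staging t
                      WHERE t.batch_id = s.batch_id);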
