Is an insert-select statement massive?

When multiple inserts are used with a select statement in a transaction, how does the database keep track of the changes during the transaction? Can there be problems with resources (such as memory or hard disk space) if a transaction is held open too long?


The short answer is: it depends on the size of the select. The select is technically part of the transaction, but most selects don't have to be "rolled back", so the log of DB changes won't include the select by itself. What it WILL include is one inserted row for every row the select returns. If that select produces 10k rows, the commit will be rather large, but no more so than if you'd written 10k individual insert statements within an explicit transaction.
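
For concreteness, a minimal sketch (table and column names are made up for illustration; the syntax for starting a transaction varies slightly by database):

    -- The INSERT ... SELECT is a single statement: every row the SELECT
    -- returns becomes one logged row change, all committed together.
    BEGIN;

    INSERT INTO archive_orders (order_id, customer_id, total)
    SELECT order_id, customer_id, total
    FROM orders
    WHERE order_date < '2010-01-01';

    COMMIT;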


Exactly how this works depends on the database. For example, in Oracle, it will require UNDO space (and eventually, if you run out, your transaction will be aborted, or your DBA will yell at you). In PostgreSQL, it'll prevent the vacuuming of old row versions. In MySQL/InnoDB, it'll use rollback space, and possibly cause lock timeouts.
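
If you want to watch that space being consumed while a transaction is open, each database exposes it differently. A few illustrative queries against administrative views (exact columns vary by version):

    -- Oracle: undo blocks consumed by active transactions
    SELECT xid, used_ublk FROM v$transaction;

    -- PostgreSQL: long-open transactions that hold back vacuum
    SELECT pid, xact_start, state
    FROM pg_stat_activity
    WHERE xact_start IS NOT NULL
    ORDER BY xact_start;

    -- MySQL/InnoDB: rows modified (undo) and locked per transaction
    SELECT trx_id, trx_started, trx_rows_modified, trx_rows_locked
    FROM information_schema.innodb_trx;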

There are several things the database must use space for:

  1. Storing which rows your transaction has changed (the old values, the new values, or both) so that rollback can be performed
  2. Keeping track of which data is visible to your transaction so that a consistent view is maintained (in transaction isolation levels other than read uncommitted). This overhead will often be greater the more isolation you request.
  3. Keeping track of which data is visible to other transactions (unless the whole database is running in read uncommitted)
  4. Keeping track of which objects each transaction has changed, so that isolation rules can be enforced, especially under serializable isolation. (Probably not much space, but plenty of locks.)

In general, you want your transactions to commit as soon as possible, so, for example, don't hold one open on an idle connection. How best to batch inserts depends on the database (often, many inserts in one transaction beat one transaction per insert), as sketched below. And of course, the primary purpose of transactions is data integrity.
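
To make the batching point concrete, a small sketch (hypothetical table; whether each standalone statement auto-commits depends on the database and driver settings):

    -- One transaction per insert: pays commit overhead for every row
    INSERT INTO events (msg) VALUES ('a');  -- auto-committed
    INSERT INTO events (msg) VALUES ('b');  -- auto-committed

    -- Many inserts in one transaction: a single commit, one durable flush
    BEGIN;
    INSERT INTO events (msg) VALUES ('a');
    INSERT INTO events (msg) VALUES ('b');
    INSERT INTO events (msg) VALUES ('c');
    COMMIT;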


You can run into several problems with a large transaction. First, in most databases you do not want to process row by row, because a million records handled that way will take hours. But inserting a million records in one complex statement can lock the tables involved and hurt performance for everyone else, and rolling back the transaction if you kill it can take a good while too.

Usually the best alternative is to loop in batches, as in the sketch below. I usually test 50,000 at a time and raise or lower the batch size depending on how long that takes; I've had some databases where I do no more than 1,000 in one set-based operation. If possible, large inserts or updates should be scheduled for the database's off-peak hours.

If the job is really large and one-time (usually a big data migration), you might even want to close the database for maintenance, put it in single-user mode, drop the indexes, do the insert, and reindex.
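
One way to write that batch loop, sketched in T-SQL (table names and the 50,000 batch size are placeholders to tune for your system):

    -- Copy rows in batches of 50,000, committing each batch so that
    -- locks, log, and rollback space stay bounded per iteration.
    DECLARE @batch INT = 50000;
    DECLARE @copied INT = @batch;

    WHILE @copied = @batch
    BEGIN
        BEGIN TRANSACTION;

        INSERT INTO target_table (id, payload)
        SELECT TOP (@batch) s.id, s.payload
        FROM source_table AS s
        WHERE NOT EXISTS (SELECT 1 FROM target_table AS t WHERE t.id = s.id);

        SET @copied = @@ROWCOUNT;  -- capture before COMMIT resets it

        COMMIT;
    END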
