
Efficient use of SQL Transactions

My application currently needs to upload a large amount of data to a database server (SQL Server) and also write it locally to a SQLite database (a local cache).

I have always used transactions when inserting data into a database, for speed. But now that I am working with something like 20k rows or more per insert batch, I am worried that transactions might cause issues. What I don't know is whether transactions have a limit on how much data can be inserted under them.

What is the correct way to use transactions when inserting large numbers of rows into a database? Should you, for instance, begin/commit every 1000 rows?


No, there is no such limit. Contrary to what you might expect, SQLite writes pending transaction data into the database file, not into RAM, so you should not run into any limit on the amount of data you can write within a single transaction.

See the SQLite docs for more information: http://sqlite.org/docs.html

Follow the link "Limits in SQLite" for implementation limits like these.

Follow the link "How SQLite Implements Atomic Commit" for how transactions work


I don't see any problem doing this, but bear in mind that if a constraint or referential integrity error occurs, you will probably have to insert the whole batch again, and the table stays locked until the transaction is committed. Breaking the load into smaller batches and logging the activity of each batch will help, as in the sketch below.
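As a rough sketch of that batching idea (using SQLite again so the example stays self-contained; the same pattern applies to SQL Server through a driver such as pyodbc, and the table and file names are hypothetical), committing every 1000 rows and logging each batch might look like this:

    import logging
    import sqlite3

    logging.basicConfig(level=logging.INFO)
    BATCH_SIZE = 1000  # commit every 1000 rows

    conn = sqlite3.connect("local_cache.db")
    conn.execute("CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, value TEXT)")

    rows = [(i, f"value-{i}") for i in range(20000)]  # hypothetical data

    try:
        for start in range(0, len(rows), BATCH_SIZE):
            batch = rows[start:start + BATCH_SIZE]
            try:
                with conn:  # one transaction per batch
                    conn.executemany("INSERT INTO items (id, value) VALUES (?, ?)", batch)
                logging.info("Committed rows %d-%d", start, start + len(batch) - 1)
            except sqlite3.IntegrityError:
                # Only this batch is rolled back; earlier batches stay committed.
                logging.exception("Batch starting at row %d failed", start)
    finally:
        conn.close()

Committing more often means less work to redo after a failure and shorter lock times per batch, at the cost of more commit overhead.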

For very large row counts on the SQL Server side, a better option would be a BCP bulk insert into the target table, or even an SSIS package to do this.
