How to migrate large amounts of data from old database to new
I need to move a huge amount of data from a couple of tables in an old database to a couple of different tables in a new database. Both databases are SQL Server 2005 and are on the same box and SQL Server instance. I was told that if I try to do it all in one shot, the transaction log will fill up. Is there a way to disable the transaction log per table? If not, what is a good method for doing this? Would a cursor do it? This is just a one-time conversion.
Use the Import/Export Data Wizard in SQL Server Management Studio. Here's how:
- Right click on your database in the Object Explorer
- Choose Tasks -> Import Data
- Complete the "Choose Source" window
- Complete the "Choose Destination" window
- Choose "Copy Data from one or more tables or views"
- Choose your tables
Do not use a cursor; that would take forever!
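Since both databases are on the same instance, what the wizard does can also be written as a plain cross-database INSERT ... SELECT with three-part names. A minimal sketch, where the database, table, and column names are all hypothetical:

-- Database/table/column names here are assumptions, not from the question.
INSERT INTO NewDb.dbo.CustomersNew (CustomerID, Name, CreatedDate)
SELECT CustomerID, Name, CreatedDate
FROM OldDb.dbo.Customers

Note this runs as one big transaction, which is exactly the log-filling scenario from the question; the batched approaches below get around that.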
Another alternative is to perform the inserts in a loop that limits the number of rows in each transaction, which keeps the transaction log from growing too much:
DECLARE @Rows INT
SET ROWCOUNT 10000 -- limit each DML statement to 10,000 rows
WHILE EXISTS (SELECT * FROM tblA LEFT JOIN tblB ON tblA.ID = tblB.ID WHERE tblB.ID IS NULL)
BEGIN
    BEGIN TRAN
    -- SELECT tblA.* (not SELECT *) so the column list matches tblB;
    -- this assumes tblA and tblB have the same column layout
    INSERT tblB
    SELECT tblA.* FROM tblA LEFT JOIN tblB ON tblA.ID = tblB.ID WHERE tblB.ID IS NULL
    -- capture the count before COMMIT, since COMMIT TRANSACTION resets @@ROWCOUNT
    SET @Rows = @@ROWCOUNT
    COMMIT
    PRINT CAST(@Rows AS VARCHAR(10)) + ' rows have been inserted.'
END
SET ROWCOUNT 0 -- reset so later statements are not limited
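Be aware that SET ROWCOUNT with INSERT/UPDATE/DELETE is deprecated, and Microsoft has said it will stop affecting DML statements in a future release; TOP is the recommended replacement. The same batching with TOP, under the same assumed schema (tblA and tblB joined on ID):

WHILE 1 = 1
BEGIN
    -- copy at most 10,000 missing rows per iteration
    INSERT tblB
    SELECT TOP (10000) tblA.*
    FROM tblA LEFT JOIN tblB ON tblA.ID = tblB.ID
    WHERE tblB.ID IS NULL

    IF @@ROWCOUNT = 0 BREAK -- nothing left to copy
END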
An oldie but a goodie:
http://www.tek-tips.com/faqs.cfm?fid=3141
A very large INSERT INTO db2.t SELECT * FROM db1.t
can indeed fill up your transaction log, because the whole statement runs as a single transaction and every row it writes is logged before the commit. How much log space that takes depends on a lot of things: the number of rows, the row size, the database's recovery model, the size of the transaction log, and so on.
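If you want to see how close you are getting, you can check log usage while the statement runs:

-- log size and percent used for every database on the instance
DBCC SQLPERF(LOGSPACE)

-- the recovery model matters too: SIMPLE lets log space be reused at checkpoints
SELECT name, recovery_model_desc FROM sys.databases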
I know this probably sounds silly, but depending on your environment and the real likelihood of filling the transaction log, I might just go for it. All you risk is the transaction log filling up, your transaction rolling back, and the other database users being slightly inconvenienced until that space is freed up again. If it's a development box or a new database, people might not even notice.
You can split it up into batches of "reasonable" size (again, what is reasonable depends on your log and row sizes). If that is simply too difficult (no good partitioning key), use SSIS to do the transfer and set a batch commit limit.
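One common way to partition, if the source table has an integer key, is to walk the key range in fixed-size slices. A sketch using the db1/db2 naming from above, assuming an integer key column ID (all names hypothetical):

DECLARE @MinID INT, @MaxID INT, @BatchSize INT
SET @BatchSize = 50000
SELECT @MinID = MIN(ID), @MaxID = MAX(ID) FROM db1.dbo.t

WHILE @MinID <= @MaxID
BEGIN
    -- each slice commits on its own; with SIMPLE recovery (or regular
    -- log backups) the log space can be reused between batches
    INSERT INTO db2.dbo.t
    SELECT * FROM db1.dbo.t
    WHERE ID >= @MinID AND ID < @MinID + @BatchSize

    SET @MinID = @MinID + @BatchSize
END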
In either case, if the operation is interrupted, only the batches that already committed will persist; atomicity applies per batch, not to the whole transfer. Sometimes it is easier to TRUNCATE the destination table and reattempt the entire transfer than to work out which rows made it across and which did not (how hard that is depends on the indexes and unique keys available).
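For the truncate-and-retry route, assuming the destination table contains nothing you need to keep and has no foreign keys referencing it (TRUNCATE fails if any exist):

TRUNCATE TABLE db2.dbo.t -- minimally logged and fast; empties the table
-- ...then rerun the whole transfer from the beginning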