
How to copy a huge table data into another table in SQL Server

I have a table with 3.4 million rows. I want to copy all of this data into another table.

I am performing this task using the query below:

select * 
into new_items 
from productDB.dbo.items

I need to know the best possible way to do this task.


I had the same problem, except I have a table with 2 billion rows, so the log file would grow to no end if I did this, even with the recovery model set to Bulk-Logged:

insert into newtable select * from oldtable

So I operate on blocks of data. This way, if the transfer is interrupted, you just restart it. Also, you don't need a log file as big as the table. You also seem to get less tempdb I/O, though I'm not sure why.

set identity_insert newtable on
DECLARE @StartID bigint, @LastID bigint, @EndID bigint

-- Resume from wherever the previous run left off.
select @StartID = isNull(max(id), 0) + 1
from newtable

select @LastID = max(ID)
from oldtable

-- Copy one block of roughly a million rows per iteration.
while @StartID <= @LastID
begin
    set @EndID = @StartID + 1000000

    insert into newtable (FIELDS, GO, HERE)
    select FIELDS, GO, HERE
    from oldtable with (NOLOCK)
    where id BETWEEN @StartID AND @EndID

    set @StartID = @EndID + 1
end
set identity_insert newtable off
go

You might need to change how you deal with IDs; this works best if your table is clustered by ID.


If you are copying into a new table, the quickest way is probably what you have in your question, unless your rows are very large.

If your rows are very large, you may want to use the bulk insert functions in SQL Server. I think you can call them from C#.

Or you can first download that data into a text file, then bulk-copy (bcp) it. This has the additional benefit of allowing you to ignore keys, indexes etc.
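If you go that route, a minimal sketch of the load side might look like the following, assuming the data has already been exported (for example with bcp) to a hypothetical flat file C:\export\items.dat:

-- File path, table name, and delimiters are placeholders.
BULK INSERT dbo.new_items
FROM 'C:\export\items.dat'
WITH (
    FIELDTERMINATOR = '\t',   -- tab-delimited export
    ROWTERMINATOR = '\n',
    BATCHSIZE = 100000,       -- commit in batches to keep the log manageable
    TABLOCK                   -- allows minimal logging under the right conditions
);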

Also try the Import/Export utility that comes with SQL Server Management Studio; I'm not sure whether it will be as fast as a straight bulk copy, but it lets you skip the intermediate step of writing out a flat file and copy directly table to table, which might be a bit faster than your SELECT ... INTO statement.


I have been working with our DBA to copy an audit table with 240M rows to another database.

Using a simple select/insert created a huge tempdb file.

Using the Import/Export wizard worked, but it copied only 8M rows in 10 minutes.

Creating a custom SSIS package and adjusting its settings copied 30M rows in 10 minutes.

The SSIS package turned out to be the fastest and most efficient option for our purposes.

Earl


Here's another way of transferring large tables. I've just transferred 105 million rows between two servers using this. Quite quick too.

  1. Right-click on the database and choose Tasks/Export Data.
  2. A wizard will take you through the steps; choosing your SQL Server client as both the data source and the target lets you select the database and table(s) you wish to transfer.

For more information, see https://www.mssqltips.com/sqlservertutorial/202/simple-way-to-export-data-from-sql-server/


If it's a one-time import, the Import/Export utility in SSMS is probably the easiest and fastest option. SSIS also tends to handle large data sets better than a straight INSERT.

BULK INSERT or BCP can also be used to import large record sets.

Another option would be to temporarily remove all indexes and constraints on the table you're importing into and add them back once the import process completes. A straight INSERT that previously failed might work in those cases.

If you're dealing with timeouts or locking/blocking issues when going directly from one database to another, you might consider going from one db into TEMPDB and then going from TEMPDB into the other database as it minimizes the effects of locking and blocking processes on either side. TempDB won't block or lock the source and it won't hold up the destination.
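A minimal sketch of that two-hop approach (the database, table, and column names here are placeholders):

-- Stage the rows in tempdb first, so the source database is only read once.
SELECT *
INTO tempdb.dbo.items_staging
FROM SourceDB.dbo.items WITH (NOLOCK);

-- Then load the destination from tempdb; the source is no longer involved.
INSERT INTO DestDB.dbo.new_items WITH (TABLOCK)
SELECT *
FROM tempdb.dbo.items_staging;

DROP TABLE tempdb.dbo.items_staging;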

Those are a few options to try.

-Eric Isaacs


Simple INSERT/SELECT stored procedures work great until the row count exceeds about 1 million. I've watched the tempdb file explode trying to insert/select 20+ million rows. The simplest solution is SSIS, with the batch row size buffer set to 5,000 and the commit size buffer set to 1,000.


I know this is late, but if you are encountering semaphore timeouts, you can use ROW_NUMBER to break your insert(s) into increments, using something like:

INSERT INTO DestinationTable (column1, column2)  -- list the remaining columns as needed
SELECT column1, column2
FROM (
    SELECT ROW_NUMBER() OVER (ORDER BY ID) AS RN, column1, column2
    FROM SourceTable
) AS A
WHERE A.RN >= 1 AND A.RN <= 10000

The size of the log file will grow, so there is that to contend with. You get better performance if you disable constraints and indexes when inserting into an existing table, then re-enable the constraints and rebuild the indexes once the insertion is complete.
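A minimal sketch of that disable/re-enable step (the table and index names here are placeholders, not from the original post):

-- Before the insert: disable a nonclustered index and skip constraint checks.
-- Do not disable the clustered index, or the table becomes unwritable.
ALTER INDEX IX_DestinationTable_Data1 ON dbo.DestinationTable DISABLE;
ALTER TABLE dbo.DestinationTable NOCHECK CONSTRAINT ALL;

-- ...run the batched INSERTs here...

-- After the insert: rebuild the index and re-validate the constraints
-- (WITH CHECK makes them trusted again).
ALTER INDEX IX_DestinationTable_Data1 ON dbo.DestinationTable REBUILD;
ALTER TABLE dbo.DestinationTable WITH CHECK CHECK CONSTRAINT ALL;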


I like the solution from @Mathieu Longtin of copying in batches, thereby minimising log file issues, and I created a version using OFFSET FETCH as suggested by @CervEd.

Others have suggested using the Import/Export Wizard or SSIS packages, but that's not always possible.

It's probably overkill for many, but my version also checks the record counts and outputs progress as it goes.

USE [MyDB]
GO

SET NOCOUNT ON;
DECLARE @intStart int = 1;
DECLARE @intCount int;
DECLARE @intFetch int = 10000;
DECLARE @strStatus VARCHAR(200);
DECLARE @intCopied int = 0;

SET @strStatus = CONVERT(VARCHAR(30), GETDATE()) + ' Getting count of HISTORY records currently in MyTable...';
RAISERROR (@strStatus, 10, 1) WITH NOWAIT;
SELECT @intCount = COUNT(*) FROM [dbo].MyTable WHERE IsHistory = 1;
SET @strStatus = CONVERT(VARCHAR(30), GETDATE()) + ' Count of HISTORY records currently in MyTable: ' + CONVERT(VARCHAR(20), @intCount);
RAISERROR (@strStatus, 10, 1) WITH NOWAIT;  --(note: PRINT resets @@ROWCOUNT to 0 so using RAISERROR instead)
SET @strStatus = CONVERT(VARCHAR(30), GETDATE()) + ' Starting copy...';
RAISERROR (@strStatus, 10, 1) WITH NOWAIT;

WHILE @intStart < @intCount
BEGIN

    INSERT INTO [dbo].[MyTable_History] (
        [PK1], [PK2], [PK3], [Data1], [Data2])
    SELECT
        [PK1], [PK2], [PK3], [Data1], [Data2]
    FROM [MyDB].[dbo].[MyTable]
    WHERE IsHistory = 1
    ORDER BY 
        [PK1], [PK2], [PK3]
        OFFSET @intStart - 1 ROWS 
        FETCH NEXT @intFetch ROWS ONLY;

    SET @intCopied = @intCopied + @@ROWCOUNT;
    SET @strStatus = CONVERT(VARCHAR(30), GETDATE()) + ' Records copied so far: ' + CONVERT(VARCHAR(20), @intCopied); 
    RAISERROR (@strStatus, 10, 1) WITH NOWAIT;

    SET @intStart = @intStart + @intFetch;

END

--Check the record count is correct.
IF @intCopied = @intCount
    BEGIN
        SET @strStatus = CONVERT(VARCHAR(30), GETDATE()) + ' Correct record count.'; 
        RAISERROR (@strStatus, 10, 1) WITH NOWAIT;
    END
ELSE
    BEGIN
        SET @strStatus = CONVERT(VARCHAR(30), GETDATE()) + ' Only ' + CONVERT(VARCHAR(20), @intCopied) + ' records were copied, expected: ' + CONVERT(VARCHAR(20), @intCount);
        RAISERROR (@strStatus, 10, 1) WITH NOWAIT;
    END


GO


If your focus is archiving (DW) and you are dealing with a VLDB with 100+ partitioned tables, and you want to isolate most of this resource-intensive work on a non-production server (OLTP), here is a suggestion (OLTP -> DW):

  1. Use backup/restore to get the data onto the archive server (so on the Archive/DW server you will have a Stage and a Target database).
  2. Stage database: use partition switching to move the data into the corresponding stage table (see the sketch below).
  3. Use SSIS to transfer the data from the Stage database to the Target database for each staged table, on both sides.
  4. Target database: use partition switching on the Target database to move the data from the stage table into the base table.

Hope this helps.
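Here is a minimal sketch of the partition-switch step. The table names and partition number are placeholders; both tables must live in the same database, use the same partition scheme, and have identical structure.

-- Move partition 3 of the base table into the stage table.
-- A switch is a metadata-only operation, so it completes almost instantly.
ALTER TABLE dbo.MyTable
    SWITCH PARTITION 3 TO dbo.MyTable_Stage PARTITION 3;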


select * into new_items from productDB.dbo.items

That's pretty much it. This is the most efficient way to do it.
