Insert..Select into InnoDB table with commit after each insert?
I just finished creating a new partitioned table to replace an old, non-partitioned table (renamed for safekeeping). I copied the newest data from the old table into the new table at the time I created it, but I still have roughly half the data left to copy over. The problem is that it's a live web service getting hammered nonstop, and every time I try to copy a chunk over via INSERT..SELECT, MySQL runs it as a single atomic transaction, which consumes all the server's resources, slows everything to a crawl, and probably pushes the server dangerously close to running out of physical resources.
Just to be clear: OldTable is MyISAM. NewTable is InnoDB and partitioned by range on its primary key 'a'. Both tables have identical field names. The fields themselves aren't identical, but where they differ, the fields in NewTable are bigger.
The query that's causing problems looks like:
INSERT INTO NewTable (a,b,c,d,e,f,g)
SELECT a,b,c,d,e,f,g
FROM OldTable
WHERE a > 300000000 AND a <= 400000000
ORDER BY a
What I'd like for it to do: either commit after each insert, or just dispense with transactional integrity entirely and allow dirty reads to happen if they happen.
Locking NewTable (beyond possibly the one single row being inserted) is unacceptable. Locking OldTable is fine, because nothing else is using it anymore, anyway (besides the SQL to copy it to the new table, of course).
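One way to approximate "commit after each insert" is to break the copy into small keyed batches, so each INSERT..SELECT is its own short transaction rather than one huge atomic statement. A sketch using the table and column names from the question (the 1M-wide range per batch is an arbitrary example, not a recommendation):

```sql
-- Each statement commits on its own, so a failure or slowdown only
-- affects the current small batch.
SET autocommit = 1;

-- Run repeatedly, advancing the bounds by the chunk width each time
-- until the full 300M-400M range from the question is covered:
INSERT INTO NewTable (a,b,c,d,e,f,g)
SELECT a,b,c,d,e,f,g
FROM OldTable
WHERE a > 300000000 AND a <= 301000000
ORDER BY a;
```

Since each batch is a short transaction on NewTable, row locks are held only briefly; other sessions that can tolerate dirty reads of NewTable can additionally run under READ UNCOMMITTED.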
Also, is there a way to tell MySQL to do it at the lowest possible priority, and only work on the task in its (relative) free time?
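On the priority question: MySQL has no general background-priority mode for DML. INSERT LOW_PRIORITY does exist, but per the MySQL manual it affects only engines that use table-level locking (such as MyISAM), so it would be ignored for an InnoDB target like NewTable. A common workaround is to pace the batches yourself, e.g. sleeping between chunks (a sketch; the range bounds and sleep duration are arbitrary examples):

```sql
INSERT INTO NewTable (a,b,c,d,e,f,g)
SELECT a,b,c,d,e,f,g
FROM OldTable
WHERE a > 301000000 AND a <= 302000000
ORDER BY a;

-- Give the live workload breathing room before the next chunk:
SELECT SLEEP(5);
```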
In addition to reducing the number of rows inserted at a time, try increasing the value of the bulk_insert_buffer_size system variable to something more appropriate for your case. The default value is 8MB.
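A sketch of changing it for the copying session only (the 256MB value is an arbitrary example):

```sql
-- Session-scoped, so other connections keep the default.
-- Note: per the MySQL manual this buffer accelerates bulk inserts
-- into MyISAM tables, so its effect on an InnoDB target may be limited.
SET SESSION bulk_insert_buffer_size = 1024 * 1024 * 256;
```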