
Oracle PL/SQL 10g - move a set of rows from a table to a history table with the same structure

PL/SQL moves older versions of data from a transaction table to a history table of the same structure, archiving them for a certain period:

for each record
    insert into tab_hist (select older_versions of current row);
    delete from tab (select older_versions of current row);
end

PS: earlier we were not archiving (no insert), but adding the insert has doubled the run time. So can we accomplish the insert and delete with a single select statement? There is a large amount of data to be processed, across multiple tables.


This is a batch operation, right? In that case you should avoid row-by-row processing and use set processing. SQL is all about The Joy Of Sets.

Oracle has fantastic bulk SQL processing capabilities. The pseudocode you posted would look something like this:

declare
    cursor c_oldrecs is
        select * from your_table
        where criterion between some_date and some_other_date;
    type rec_nt is table of your_table%rowtype;
    oldrecs_coll rec_nt;
begin
    open c_oldrecs;
    loop
        -- fetch up to 1000 rows per iteration to cap PGA memory usage
        fetch c_oldrecs bulk collect into oldrecs_coll limit 1000;
        exit when oldrecs_coll.count() = 0;

        -- archive the batch ...
        forall i in oldrecs_coll.first()..oldrecs_coll.last()
            insert into your_table_hist
            values oldrecs_coll(i);
        -- ... then remove it from the live table
        forall i in oldrecs_coll.first()..oldrecs_coll.last()
            delete from your_table
            where pk_col = oldrecs_coll(i).pk_col;
    end loop;
    close c_oldrecs;
end;
/

This bulk processing is faster because each FORALL executes its DML for a thousand rows in a single switch between PL/SQL and SQL, instead of switching once per row. The LIMIT 1000 clause is there to prevent a really huge selection from blowing the PGA. This safeguard may not be necessary in your case, or perhaps you can work with a higher value.


I think your current implementation is wrong. It is better to keep only the current version in the live table, and to keep all the historical versions in a separate table from the off. Use triggers to maintain the history as part of every transaction.
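
A minimal sketch of that trigger approach, with hypothetical names: YOUR_TABLE with primary key PK_COL and columns COL1 and COL2, and a YOUR_TABLE_HIST of the same structure:

create or replace trigger your_table_hist_trg
    before update or delete on your_table
    for each row
begin
    -- archive the outgoing version of the row as part of the same
    -- transaction; pk_col, col1, col2 stand in for the real column list
    insert into your_table_hist (pk_col, col1, col2)
    values (:old.pk_col, :old.col1, :old.col2);
end;
/

Because the trigger only writes to the history table and reads :OLD values, it avoids the mutating-table restriction.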


It may be that the slowness you are seeing is due to the logic that selects which rows are to be moved. If so, you might get better results by doing the select once to get the rowids into a nested table in memory, then driving both the insert and the delete from that list; alternatively, drive your loop with a query that selects the rows to be moved.
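
For instance, a minimal sketch of the rowid-list idea, reusing the illustrative names from the block above:

declare
    type rowid_nt is table of rowid;
    l_rowids rowid_nt;
begin
    -- run the (expensive) row-selection logic exactly once
    select rowid bulk collect into l_rowids
      from your_table
     where criterion between some_date and some_other_date;

    -- then drive both DML statements off the in-memory list
    forall i in 1 .. l_rowids.count
        insert into your_table_hist
            select * from your_table where rowid = l_rowids(i);
    forall i in 1 .. l_rowids.count
        delete from your_table where rowid = l_rowids(i);

    commit;
end;
/

As with the LIMIT clause above, a very large result set would need to be fetched in chunks to protect the PGA.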

You might instead consider creating a trigger on insert that will move the existing rows that "match" the row being inserted. This will slow down the inserts somewhat, but would mean you don't need any process to move the old rows in bulk.


If you are on Enterprise edition with the partitioning option, look at partition exchange.
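
Exchanging a partition swaps a whole partition's segment into a standalone table as a near-instant data dictionary operation, with no row movement at all. An illustrative sketch, assuming a hypothetical TAB range-partitioned by date and an empty TAB_HIST_STAGE table of identical structure:

alter table tab
    exchange partition p_2010_q1 with table tab_hist_stage
    without validation;

The old rows then live in TAB_HIST_STAGE, where they can be kept, exchanged into a partitioned history table, or dropped when the retention period expires.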


It can be as simple as this:

CREATE TABLE BACKUP_TAB AS SELECT * FROM TAB;

If you are deleting a lot of rows you will be hitting your undo tablespace, and a single delete of, say, 100k rows can cause performance issues. You are better off deleting in batches of, say, 5k rows at a time and committing. The insert generates far less undo, so it can run as a single statement.

BEGIN
  -- The where condition on the insert and the delete must be the same,
  -- and no new matching rows should arrive between the two steps
  INSERT INTO BACKUP_TAB
    SELECT * FROM TAB
     WHERE 1=1; -- your condition here
  COMMIT;
  LOOP
    DELETE FROM TAB
     WHERE 1=1 -- same condition here
       AND ROWNUM < 5000; -- delete in batches of 5k to limit undo usage
    EXIT WHEN SQL%ROWCOUNT = 0;
    COMMIT;
  END LOOP;
  COMMIT;
END;