Scheduled table snapshots in Oracle?
I have a table, let's say T1, which is under an intense workload (about 100 updates per second, about 300k rows).
In a second table, T2, I would like to update rows based on their state in T1. I know this can easily be done with triggers, but in my scenario triggers have a massive impact on performance and reduce T1 to about 10 updates per second...
Now I have created a temporary table T3, into which I SELECT the current state of T1; after that I do MERGE INTO T2 USING T3.
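Roughly what I do now, as a sketch (ID, STATUS, and T2.T1_ID are just placeholder names for my real columns):

    -- Global temporary table that holds a snapshot of T1
    CREATE GLOBAL TEMPORARY TABLE t3
      ON COMMIT PRESERVE ROWS
    AS
      SELECT id, status FROM t1 WHERE 1 = 0;  -- structure only, no rows

    -- Take the snapshot...
    INSERT INTO t3 SELECT id, status FROM t1;

    -- ...and push it into T2
    MERGE INTO t2 tgt
    USING t3 src
       ON (tgt.t1_id = src.id)
     WHEN MATCHED THEN
       UPDATE SET tgt.status = src.status;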
But the cost of that SELECT into the temp table is still too high.
Is there any way I can achieve this without massively reducing the rate of update operations on T1?
In general, I'd have questions about the data model if you find yourself regularly updating T2 based on the data in T1 in what sounds like a very OLTP-type system. That implies to me a lack of normalization that is concerning. You may have a perfectly valid reason for having a separate T2 table that replicates some of the data in T1 (i.e. you're trying to support DSS-type queries with a denormalized object in addition to the OLTP-type queries with the normalized objects) in which case it would be useful to understand the business purpose of T2.
If this is a case of trying to support DSS-type queries with a denormalized object that combines data from multiple normalized tables, my first thought would be to create T2 as a materialized view rather than as a table. Your materialized view can be incrementally refreshed on a scheduled interval (it can also be refreshed when changes are committed, but since that happens synchronously it would likely slow down the sessions doing the DML). You'd need materialized view logs on the base tables, which adds some overhead to the DML operations, but it will be less than the overhead of a trigger, particularly if each update touches many rows (3,000-ish per statement, assuming the 300k is the number of rows updated by the 100 updates per second).
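As a rough sketch (names are made up, and whether fast refresh is possible depends on your actual query):

    -- Log changes on the base table so the MV can be refreshed incrementally
    CREATE MATERIALIZED VIEW LOG ON t1
      WITH PRIMARY KEY, ROWID;

    -- Denormalized object, incrementally refreshed every 5 minutes
    CREATE MATERIALIZED VIEW t2
      BUILD IMMEDIATE
      REFRESH FAST
      START WITH SYSDATE
      NEXT SYSDATE + 5/1440
    AS
      SELECT id, status
        FROM t1;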
You could also use Streams to maintain T2. This would essentially eliminate the overhead of tracking changes on the DML operations, because Streams just reads the redo data. You'd have to enable supplemental logging if it's not already enabled, which may marginally increase the amount of data written to redo, but it's unlikely that would be noticeable. Streams is a bit more work to configure, however: you'd need to write a custom apply handler that takes the changes from T1 and updates T2. And T2 would always lag T1 by at least a few seconds.
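A sketch of what that custom apply handler might look like (the ID/STATUS columns and the T2.T1_ID key are assumptions, and error handling is omitted):

    -- DML handler: receives each change to T1 as a logical change record
    -- (LCR) and applies the corresponding update to T2.
    CREATE OR REPLACE PROCEDURE t1_to_t2_handler (p_anydata IN ANYDATA)
    IS
      l_lcr  SYS.LCR$_ROW_RECORD;
      l_rc   PLS_INTEGER;
      l_id   NUMBER;
      l_stat VARCHAR2(100);
    BEGIN
      l_rc   := p_anydata.GETOBJECT(l_lcr);
      l_id   := l_lcr.GET_VALUE('NEW', 'ID').ACCESSNUMBER();
      l_stat := l_lcr.GET_VALUE('NEW', 'STATUS').ACCESSVARCHAR2();

      UPDATE t2 SET status = l_stat WHERE t1_id = l_id;
    END;
    /

    -- Tell the apply process to route UPDATEs on T1 through the handler
    BEGIN
      DBMS_APPLY_ADM.SET_DML_HANDLER(
        object_name    => 'APP.T1',
        object_type    => 'TABLE',
        operation_name => 'UPDATE',
        error_handler  => FALSE,
        user_procedure => 'APP.T1_TO_T2_HANDLER');
    END;
    /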
If you are interested in the changed data and don't want a trigger to log it, take a look at Change Data Capture, Streams, or maybe GoldenGate. They can all present you with Logical Change Records (LCRs). For this they read the redo logs and don't interfere with the foreground processes by hitting them with a trigger (see the LogMiner sketch below). Another option is to take a really good look at your tables and see if you can rearrange them: as soon as you can define a large part of a table as historical (and static), you have a lot more options.
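None of those is a one-line setup, but just to show that the changes really are sitting in the redo, here is a quick look at them with LogMiner (a sketch; assumes supplemental logging, archivelog mode, and the necessary privileges):

    -- Mine the redo generated in the last hour using the online dictionary
    BEGIN
      DBMS_LOGMNR.START_LOGMNR(
        starttime => SYSDATE - 1/24,
        endtime   => SYSDATE,
        options   => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG +
                     DBMS_LOGMNR.CONTINUOUS_MINE);
    END;
    /

    -- Every change to T1, reconstructed as SQL, without touching T1 itself
    SELECT scn, operation, sql_redo
      FROM v$logmnr_contents
     WHERE table_name = 'T1';

    EXEC DBMS_LOGMNR.END_LOGMNR;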