How do I change this procedure to delete records day by day in SQL Server 2005?
Hey all, I am just getting lazy now, but I know you all can help me, and that's a great thing.
I have a table cleanup-and-delete procedure that runs every night. It's basic enough, and it is supposed to delete any records in the database that are older than 3 days.
The procedure is failing each night because when we delete more than 1 day's worth of records at a time, I get a transaction-log-full error. It would be a pain to change the transaction log settings.
So, can somebody show me how to update the proc so that it deletes everything older than 3 days, going back, let's say, through 7 days? Meaning: start 3 days back, then delete each table's data one day at a time through day 7.
We're a 24x7 monitored energy shop, and I can be sure that if it doesn't run for 7 days, someone will notice.
ALTER PROCEDURE [dbo].[PruneData]
(
    @cutoffDate DATETIME
)
AS
BEGIN
    DECLARE @threeDayCutoffDate DATETIME
    -- dbo.DateOnly presumably strips the time portion; the 5-hour offset shifts UTC midnight to local time
    SET @threeDayCutoffDate = DATEADD(hh, 5, DATEADD(dd, -3, dbo.DateOnly(GETUTCDATE())))

    DELETE FROM LMP_DayAhead WHERE interval < @threeDayCutoffDate
    DELETE FROM LMP_RealTime WHERE interval < @threeDayCutoffDate
    DELETE FROM LMP_RealTimeIntegrated WHERE interval < @threeDayCutoffDate
    DELETE FROM ZonalMCP WHERE interval < @threeDayCutoffDate
    DELETE FROM SyncJob WHERE synctime < @threeDayCutoffDate

    RETURN
END
To reduce the log pressure you need to delete in batches. Something like this:
ALTER PROCEDURE [dbo].[PruneData]
(
    @cutoffDate DATETIME
)
AS
BEGIN
    DECLARE @threeDayCutoffDate DATETIME;
    DECLARE @rows BIGINT;
    DECLARE @batchsize INT;

    SET @threeDayCutoffDate = DATEADD(hh, 5, DATEADD(dd, -3, dbo.DateOnly(GETUTCDATE())));
    SET @batchsize = 1000;

    WHILE (1 = 1)
    BEGIN
        SET @rows = 0;

        -- Each DELETE TOP removes at most @batchsize rows and commits on its own,
        -- so no single transaction ever holds more than one batch in the log.
        DELETE TOP (@batchsize) FROM LMP_DayAhead WHERE interval < @threeDayCutoffDate;
        SET @rows = @rows + @@ROWCOUNT;

        DELETE TOP (@batchsize) FROM LMP_RealTime WHERE interval < @threeDayCutoffDate;
        SET @rows = @rows + @@ROWCOUNT;

        DELETE TOP (@batchsize) FROM LMP_RealTimeIntegrated WHERE interval < @threeDayCutoffDate;
        SET @rows = @rows + @@ROWCOUNT;

        DELETE TOP (@batchsize) FROM ZonalMCP WHERE interval < @threeDayCutoffDate;
        SET @rows = @rows + @@ROWCOUNT;

        DELETE TOP (@batchsize) FROM SyncJob WHERE synctime < @threeDayCutoffDate;
        SET @rows = @rows + @@ROWCOUNT;

        -- Stop once a full pass deletes nothing from any table.
        IF 0 = @rows
        BEGIN
            BREAK;
        END
    END

    RETURN
END
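As a usage note: the @cutoffDate parameter is declared but never referenced in the body, so as written any placeholder value works when calling it:

EXEC dbo.PruneData @cutoffDate = GETUTCDATE();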
There are many ways to optimize this, e.g. once a table is fully pruned, skip it on subsequent loop iterations. The ultimate optimization is to avoid the delete completely: use a rolling-window partition switch scheme instead. Partition your tables by day, and every day switch out the oldest partition and switch in a new one. This is almost instantaneous, because a partition switch is a metadata-only operation. See Transferring Data Efficiently by Using Partition Switching, Partitioned Tables and Indexes in SQL Server 2005, and How to Implement an Automatic Sliding Window in a Partitioned Table on SQL Server 2005. A rough sketch of the daily switch follows.
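A minimal sketch of the daily switch, assuming LMP_DayAhead is already partitioned by day; the partition function pf_Daily, partition scheme ps_Daily, staging table LMP_DayAhead_Staging, and the boundary dates are all hypothetical names and values:

-- Switch the oldest day's rows out to an empty staging table that has an
-- identical structure and sits on the same filegroup (metadata-only, near-instant).
ALTER TABLE LMP_DayAhead SWITCH PARTITION 1 TO LMP_DayAhead_Staging;

-- Discard the switched-out rows cheaply; TRUNCATE is minimally logged.
TRUNCATE TABLE LMP_DayAhead_Staging;

-- Remove the now-empty oldest boundary, then add a boundary for the new day.
ALTER PARTITION FUNCTION pf_Daily() MERGE RANGE ('20090401');
ALTER PARTITION SCHEME ps_Daily NEXT USED [PRIMARY];
ALTER PARTITION FUNCTION pf_Daily() SPLIT RANGE ('20090409');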
Even if you break it into batches, from start to end you still have to delete the same number of records within the same timeframe (of seconds/minutes). Under the FULL recovery model, the transaction log has to keep all of those deletions recorded until a log backup runs (a full database backup does not truncate the log), so batching alone doesn't free log space.
What you need to do instead is increase the transaction log backup frequency and put log backups between the batches of your deletion runs. Let's say you expect roughly 1 million records to be deleted per day. Even though you said to go back and delete 7 days (offset by 3), design the system to cope with 3 days at a time. That way, if it had not run for 7 days (which shouldn't happen, since you state it runs every day), it would take about 3 nightly runs to catch up, clearing 3 days per run.
The next step is to break it into batches:
delete top(10000) from LMP_DayAhead where interval < @threeDayCutoffDate
delete top(30000) from LMP_RealTime where interval < @threeDayCutoffDate
delete top(20000) from LMP_RealTimeIntegrated where interval < @threeDayCutoffDate
delete top(50000) from ZonalMCP where interval < @threeDayCutoffDate
delete top(1000) from SyncJob where synctime < @threeDayCutoffDate
Each TOP(X) here is sized to cover about one day's worth of rows for that table.
Finally, schedule extra transaction log backups at 1:00am and 1:20am, and schedule this job to run at 12:50am, 1:10am, and 1:30am, with each run taking one day's worth of data off. The transaction log backups in between keep the log clear for the next run.
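For those in-between backup steps, a minimal log backup sketch (the database name and backup path are assumptions):

-- Backing up the log lets SQL Server truncate the inactive portion,
-- freeing log space before the next deletion batch runs.
BACKUP LOG [EnergyDB] TO DISK = N'E:\SQLBackups\EnergyDB_log.trn';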
This is the general strategy but you can tweak it to suit your needs.