Can a large transaction log cause CPU spikes?

I have a client with a very large database on SQL Server 2005. The total space allocated to the database is 15 GB, with roughly 5 GB for the data and 10 GB for the transaction log. Just recently, a web application that connects to that database has started timing out.

I have traced the actions on the web page and examined the queries that execute while these web operations are performed. There is nothing untoward in the execution plans.

The query itself uses multiple joins but completes very quickly. However, the database server's CPU spikes to 100% for a few seconds. The issue occurs when several simultaneous users are working on the system (when I say several, read about 5). Under this load, timeouts start to occur.

I suppose my question is: can a large transaction log cause CPU performance issues? There is about 12 GB of free space on the disk currently. The configuration is a little out of my hands, but the database and log are both on the same physical disk.

I appreciate that the log file is massive and needs attending to, but I'm just looking for a heads-up as to whether this may cause CPU spikes (i.e. trying to find the correlation). The timeouts are a recent thing, and this app has been responsive for a few years (i.e. it's a recent manifestation).

Many Thanks,


It's hard to say exactly given the lack of data, but such spikes are commonly observed at transaction log checkpoints.

A checkpoint is the process by which the changes recorded sequentially in the transaction log are applied to the actual data files.

This involves lots of I/O as well as CPU work, and may be the reason for the CPU activity spikes.

Normally, a checkpoint occurs when the transaction log is 70% full, or when SQL Server estimates that a recovery procedure (reapplying the log) would take longer than one minute.
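
As a quick first check, you can see how full the log actually is and whether anything is preventing it from being reused. A minimal sketch, run on the server in question:

    -- Show the size and percentage used of every database's transaction log
    DBCC SQLPERF(LOGSPACE);

    -- Show each database's recovery model and what (if anything) is
    -- currently blocking log truncation, e.g. LOG_BACKUP
    SELECT name, recovery_model_desc, log_reuse_wait_desc
    FROM sys.databases;

If the log shows as nearly full, or log_reuse_wait_desc reports LOG_BACKUP, that points straight at the backup regime discussed below.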


Your first priority should be to address the transaction log size. Is the DB being backed up correctly, and how frequently? Address these issues and then see if the CPU spikes go away. CHECKPOINT is the process of reading your transaction log and applying the changes to the DB file; if the transaction log is huge, it makes sense that it could have an effect. A log-backup sketch follows.
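
Regular log backups are what let the inactive portion of the log be reused. A minimal sketch, where the database name and backup path are placeholders for your own:

    -- Back up the transaction log so the inactive portion can be reused
    -- (database name and disk path are placeholders)
    BACKUP LOG [YourDatabase]
    TO DISK = N'D:\Backups\YourDatabase_log.trn';

You would normally schedule this as a recurring job rather than running it by hand.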


You could try increasing the autogrowth increment: Kimberly Tripp suggests upwards of 500 MB autogrowth for transaction logs measured in GBs:

http://www.sqlskills.com/blogs/kimberly/post/8-Steps-to-better-Transaction-Log-throughput.aspx

(see point 7)
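
Something along these lines would set a fixed 500 MB growth increment; the database and logical log file names are placeholders, so look yours up first:

    -- Find the logical name of the log file for the current database
    SELECT name FROM sys.database_files WHERE type_desc = 'LOG';

    -- Set a fixed 500 MB growth increment (names are placeholders)
    ALTER DATABASE [YourDatabase]
    MODIFY FILE (NAME = N'YourDatabase_log', FILEGROWTH = 500MB);

A fixed increment avoids the percentage-based growth that makes each grow operation bigger (and slower) as the log gets larger.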


While I wouldn't be surprised if a log that size was causing a problem, there are other things it could be as well. Have the statistics been updated lately (a sketch follows)? Are the spikes happening when some automated job is running? Is there a clear time pattern to the spikes? If so, look at what else is running at those times. Did you load a new version of anything on the server around the time the spikes started happening?
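
Refreshing statistics is a cheap thing to try. A minimal sketch, run in the affected database:

    -- Refresh out-of-date statistics on every table in the current
    -- database; tables whose statistics are current are skipped
    EXEC sp_updatestats;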

In any event, the transaction log needs to be fixed. The reason it is so large is that it is not being backed up (or not being backed up frequently enough). It is not enough to back up the database; you must also back up the log. We back ours up every 15 minutes, but ours is a highly transactional system and we cannot afford to lose data.
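
Once log backups are running, you can do a one-off shrink to reclaim the space the log ballooned into. A sketch, where the database name, logical file name, and 1024 MB target size are all placeholders:

    -- One-off shrink of the log file after log backups are in place
    -- (names and the 1024 MB target size are placeholders)
    USE [YourDatabase];
    DBCC SHRINKFILE (N'YourDatabase_log', 1024);

Note this should be a one-time correction, not a scheduled job; repeated shrink/grow cycles just fragment the log.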
