
Question on checkpoints during large data loads?

This is a question about how PostgreSQL works. During large data loads using the 'COPY' command, I see multiple checkpoints occur where 100% of the log files (checkpoint_segments) are recycled.

I don't understand this, I guess. What does pgsql do when a single transaction requires more space than available log files? It seems that it is wrapping around multiple times in the course of this load, which is a single transaction. What am I missing?

Everything is working, I just want to understand it better in case I can tune things, etc.


When a checkpoint happens, all dirty pages are written to disk. Once those pages are safely on disk, the log records describing them are no longer needed for crash recovery, so the old log segments are safe to recycle. Writing dirty pages to disk does not mean the data is committed: the metadata stored with each row records which transaction created it, so the database can tell that the rows belong to a transaction that has not yet committed, and it can still abort that transaction, in which case VACUUM will eventually clean up those rows.
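To see the per-row metadata referred to above, here is a minimal sketch (the table and file names are made up): every row carries the ID of the transaction that created it in the hidden xmin system column, which is what visibility checks and a later VACUUM rely on if the load is rolled back.

BEGIN;
COPY my_table FROM '/tmp/data.csv';    -- hypothetical bulk load inside one transaction
SELECT txid_current();                 -- ID of the current (still uncommitted) transaction
SELECT xmin, * FROM my_table LIMIT 5;  -- xmin records which transaction created each row
ROLLBACK;                              -- rows stay on disk, invisible, until VACUUM eventually reclaims them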

When loading large amounts of data, it is advisable to temporarily increase checkpoint_segments.
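As a sketch of what that might look like (the values below are illustrative, not recommendations; checkpoint_segments exists only up to PostgreSQL 9.4, and from 9.5 onward the equivalent knob is max_wal_size):

# postgresql.conf, edited before the bulk load (PostgreSQL 9.4 or earlier)
checkpoint_segments = 64              # default was 3; more segments mean fewer forced checkpoints during the COPY
checkpoint_completion_target = 0.9    # spread each checkpoint's writes over more of the interval
# reload without a restart: "pg_ctl reload" or, from psql, SELECT pg_reload_conf();
# on PostgreSQL 9.5 and later, set max_wal_size (e.g. max_wal_size = 4GB) instead

Each WAL segment is 16 MB by default, so the main cost of a larger setting is extra disk space under the log directory.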
