Database durability vs performance

I have studied how durability is achieved in databases, and if I understand it correctly, it works like this (simplified):

Client's point of view:

  1. start transaction
  2. insert into table values ...
  3. commit transaction

DB engine's point of view:

  1. write transaction start indicator to log file
  2. write changes done by client to log file
  3. write transaction commit indicator to log file
  4. flush log file to HDD (this ensures durability of data)
  5. return 'OK' to client
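
To make the sequence concrete, here is a minimal sketch of that commit path in C, assuming a hypothetical append-only log file already opened as log_fd; a real engine writes structured binary records rather than text, but the ordering is the same:

    /* A minimal sketch of the commit path above; not any real engine's code. */
    #include <string.h>
    #include <unistd.h>

    /* Append one record to the log (error handling omitted for brevity). */
    static void log_append(int log_fd, const char *record) {
        write(log_fd, record, strlen(record));
    }

    int commit_transaction(int log_fd) {
        log_append(log_fd, "BEGIN\n");       /* step 1: start indicator   */
        log_append(log_fd, "INSERT ...\n");  /* step 2: the changes       */
        log_append(log_fd, "COMMIT\n");      /* step 3: commit indicator  */
        if (fsync(log_fd) != 0)              /* step 4: force to stable storage */
            return -1;
        return 0;                            /* step 5: safe to ack the client */
    }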

What I observed:

The client application is single-threaded (one DB connection). I'm able to perform 400 transactions/sec, while a simple test that writes something to a file and then fsyncs that file to the HDD achieves only 150 syncs/sec. If the client were multithreaded/multi-connection, I could imagine the DB engine grouping transactions and doing one fsync per several transactions, but this is not the case.
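
For reference, the kind of fsync micro-benchmark described above looks roughly like this in C; the file name, record size, and duration are arbitrary choices for illustration:

    /* Write a small record, fsync, repeat; count syncs per second. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("synctest.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) return 1;

        char buf[128] = {0};                /* a small "transaction record" */
        int syncs = 0;
        time_t start = time(NULL);

        while (time(NULL) - start < 10) {   /* run for ~10 seconds */
            write(fd, buf, sizeof buf);     /* error handling omitted */
            fsync(fd);                      /* one sync per "commit" */
            syncs++;
        }

        printf("%d syncs in ~10s => ~%d syncs/sec\n", syncs, syncs / 10);
        close(fd);
        return 0;
    }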

My question is: does, for example, MsSQL really synchronize the log file (fsync, FlushFileBuffers, etc.) on every transaction commit, or is there some other kind of magic behind it?


The short answer is that, for a transaction to be durable, the log file has to be written to stable storage before changes to the database are written to disk.
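
In code terms, that write-ahead rule amounts to the ordering sketched below; log_fd, data_fd, and the page handling are all hypothetical, but the invariant (flush the log before the data page) is the point:

    /* Sketch of the write-ahead rule: the log record describing a change
     * must reach stable storage before the changed page does. */
    #include <sys/types.h>
    #include <unistd.h>

    int flush_dirty_page(int log_fd, int data_fd,
                         const void *page, size_t page_size, off_t page_off) {
        /* First: force the log describing this change to stable storage. */
        if (fsync(log_fd) != 0)
            return -1;
        /* Only then: write the modified page back to the database file. */
        if (pwrite(data_fd, page, page_size, page_off) < 0)
            return -1;
        return 0;
    }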

Stable storage is more complicated than you might think. Disks, for example, are not usually considered to be stable storage. (Not by people who write code for transactional database engines, anyway.)

To see how a particular open-source DBMS writes to stable storage, you'll need to read the source code. The PostgreSQL source code is online (the relevant file is xlog.c). I don't know about the MySQL source.
