Application safe to use READ_COMMITTED_SNAPSHOT?
I have a large web application using a COM data access layer against a SQL Server 2005 database. By default, the isolation level is READ_COMMITTED. Now, I understand how the READ_COMMITTED_SNAPSHOT isolation level works, and MSDN says you can turn it on transparently. However, I'm still sceptical. :) Is it guaranteed, at the implementation level, that my application will not break (do not assume the application does everything by the book) if I change from READ_COMMITTED to READ_COMMITTED_SNAPSHOT? Will no additional exceptions be thrown to the COM layer? Are the transaction semantics the same?
PS. By "at the implementation level", I mean something along the lines of: was the READ_COMMITTED_SNAPSHOT isolation level intentionally implemented to work exactly like READ_COMMITTED, just using row versioning instead of locks?
Thank you for any insights or your own experiences switching to this isolation mode.
No, they do not have the same behaviour: READ_COMMITTED prevents dirty reads by locking, while READ_COMMITTED_SNAPSHOT prevents them through row versioning, reading the last committed version of each row instead of waiting for locks to be released.
With READ_COMMITTED_SNAPSHOT, your transactions can read stale data that has since been changed by another session running in parallel with your transaction.
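To illustrate the difference, here is a minimal two-session sketch (the dbo.Accounts table and its AccountId and Balance columns are hypothetical):

    -- Session 1: modify a row and leave the transaction open.
    BEGIN TRANSACTION;
    UPDATE dbo.Accounts
    SET    Balance = Balance - 100
    WHERE  AccountId = 1;
    -- (no COMMIT yet)

    -- Session 2: read the same row.
    -- With READ_COMMITTED_SNAPSHOT OFF, this SELECT blocks until session 1
    -- commits or rolls back.
    -- With READ_COMMITTED_SNAPSHOT ON, it returns immediately with the last
    -- committed Balance, i.e. the value from before session 1's update.
    SELECT Balance
    FROM   dbo.Accounts
    WHERE  AccountId = 1;

If your application relies on that blocking to serialise readers against writers, flipping the database option changes what the SELECT returns without any code change.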
While in most cases this won't break application logic, there is no guarantee that your particular application does not rely on the locking behaviour.
The only way to safely change the isolation level is to audit all database code and check whether anything depends on readers being blocked by writers.
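For completeness, the switch itself is a database-level option (YourDatabase below is a placeholder); it only takes effect once the altering connection is the only one active in the database, which the termination clause forces by rolling back open transactions:

    -- Run during a maintenance window; WITH ROLLBACK IMMEDIATE rolls back
    -- any open transactions so the option can take effect.
    ALTER DATABASE YourDatabase
    SET READ_COMMITTED_SNAPSHOT ON
    WITH ROLLBACK IMMEDIATE;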