
Can a Snapshot transaction fail and only partially commit in a TransactionScope?

Greetings

I stumbled onto a problem today that seems sort of impossible to me, but it's happening... I'm calling some database code in C# that looks something like this:

using(var tran = MyDataLayer.Transaction())
{
    MyDataLayer.ExecSproc(new SprocTheFirst(arg1, arg2));
    MyDataLayer.CallSomethingThatEventuallyDoesLinqToSql(arg1, argEtc);

    tran.Commit();
}

I've simplified this a bit for posting, but what's going on is that MyDataLayer.Transaction() makes a TransactionScope with the IsolationLevel set to Snapshot and the TransactionScopeOption set to Required. This code gets called hundreds of times a day and almost always works perfectly. However, after reviewing some data I discovered there are a handful of records created by "SprocTheFirst" but no corresponding data from "CallSomethingThatEventuallyDoesLinqToSql".

The only way records should exist in the tables I'm looking at is via SprocTheFirst, and it's only ever called in this one function, so if it was called and succeeded then I would expect CallSomethingThatEventuallyDoesLinqToSql to get called and succeed as well, because it's all in the same TransactionScope. It's theoretically possible that some other dev mucked around in the DB, but I don't think they have. We also log all exceptions, and I can find nothing unusual happening around the time that the records from SprocTheFirst were created.
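
For reference, MyDataLayer.Transaction() builds the scope roughly like this (a simplified sketch, not the exact code; the real data layer has more plumbing):

using System.Transactions;

// Simplified sketch of MyDataLayer.Transaction() -- these are the
// TransactionScope options in play.
public static TransactionScope Transaction()
{
    var options = new TransactionOptions
    {
        IsolationLevel = IsolationLevel.Snapshot,
        Timeout = TransactionManager.DefaultTimeout
    };
    return new TransactionScope(TransactionScopeOption.Required, options);
}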

So, is it possible that a transaction, or more properly a declarative TransactionScope, with Snapshot isolation level can fail somehow and only partially commit?


We have spotted the same issue. I have recreated it here - https://github.com/DavidBetteridge/MSMQStressTest

For us, the issue shows up when reading from the queue rather than writing to it. Our solution was to change the isolation level of the first read in the subscriber to Serializable.
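
Roughly, the fix looks like this (an illustrative sketch; ReadNextMessage and ProcessMessage stand in for our real subscriber code):

using System.Transactions;

// Wrap the first read from the queue in a Serializable scope so it blocks on
// in-flight writers instead of reading an older row version.
var options = new TransactionOptions { IsolationLevel = IsolationLevel.Serializable };
using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
{
    var message = ReadNextMessage();   // hypothetical: the subscriber's first read
    ProcessMessage(message);           // hypothetical: the rest of the handler
    scope.Complete();
}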


No, but the Snapshot isolation level isn't the same as Serializable. Row versions for snapshotted rows are kept in tempdb until the writing transaction commits, so some other transaction can read the old data just fine.

At least, that's how I understood your problem. If not, please provide more info, like a graph of the timeline or something similar.
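
Something like this illustrates what I mean (hypothetical table and connection string, and it assumes ALLOW_SNAPSHOT_ISOLATION is ON for the database):

using System.Data.SqlClient;
using System.Transactions;

var snapshot = new TransactionOptions { IsolationLevel = IsolationLevel.Snapshot };

// Writer: updates a row inside a Snapshot scope but has not committed yet.
using (var writerScope = new TransactionScope(TransactionScopeOption.Required, snapshot))
using (var writer = new SqlConnection(connectionString))   // connectionString assumed defined
{
    writer.Open();
    new SqlCommand("UPDATE Orders SET Status = 'Shipped' WHERE Id = 1", writer).ExecuteNonQuery();

    // Reader: a separate transaction started now is not blocked; it reads the last
    // *committed* row version from tempdb's version store, i.e. the old data.
    using (var readerScope = new TransactionScope(TransactionScopeOption.RequiresNew, snapshot))
    using (var reader = new SqlConnection(connectionString))
    {
        reader.Open();
        var status = new SqlCommand("SELECT Status FROM Orders WHERE Id = 1", reader).ExecuteScalar();
        // status is still the pre-update value here
        readerScope.Complete();
    }

    writerScope.Complete();
}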


Can you verify that CallSomethingThatEventuallyDoesLinqToSql is using the same connection as the first call? Does the second call read data that the first call wrote to the db? If it's unable to "see" that data, would that cause the second call to skip a few steps and not do its job?

Just because you have it wrapped in a .NET transaction doesn't mean the data as seen in the db is the same between connections. You could, for instance, have connections to two different databases and want to roll back both if one failed, or write data to a DB and post a message to MSMQ, where a failed MSMQ operation would roll back the DB operation too. The .NET transaction takes care of this multi-technology coordination for you.

I do remember a problem in early versions of ADO.NET (maybe 3.0) where the pooled connection code would allocate a new db connection rather than use the current one when a .NET-level TransactionScope was used. I believe it was fully implemented with 3.5 (I may have my versions wrong; it might be 3.5 and 3.5.1). It could also be caused by MyDataLayer and how it allocates connections.

Use SQL Profiler to trace these operations and make sure the work is being done on the same spid.
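
If it's easier to check from the application side first, a small helper like this (hypothetical, assumes the connection is already open) lets you log the SPID each call is using and correlate it with the trace:

using System;
using System.Data.SqlClient;

// Hypothetical helper: returns the SPID an open connection is using, so the two
// calls can be logged and compared against the Profiler trace.
static int GetSpid(SqlConnection connection)
{
    using (var cmd = new SqlCommand("SELECT @@SPID", connection))
    {
        return Convert.ToInt32(cmd.ExecuteScalar());
    }
}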


It sounds like your connection may not be enlisted in the transaction. When do you create your connection object? If it is created before the TransactionScope then it will not be enlisted in the transaction.
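
A quick sketch of the difference (connectionString assumed to be defined elsewhere):

using System.Data.SqlClient;
using System.Transactions;

// Opened *before* the scope: not auto-enlisted, its work commits on its own.
var early = new SqlConnection(connectionString);
early.Open();

using (var scope = new TransactionScope())
{
    // Opened *inside* the scope: auto-enlists in the ambient transaction.
    using (var inside = new SqlConnection(connectionString))
    {
        inside.Open();
        // ... work done here is covered by the scope ...
    }

    // A pre-existing connection has to be enlisted explicitly:
    early.EnlistTransaction(Transaction.Current);

    scope.Complete();
}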

