
Using TransactionScope multiple times

Is this the correct way to use a transaction scope:

I have an object which represents part of a thing:

public class ThingPart
{
    private DbProviderFactory connectionFactory;

    public void SavePart()
    {
        using (TransactionScope ts = new TransactionScope())
        {
            // save the bits I want to be done in a single transaction
            SavePartA();
            SavePartB();
            ts.Complete(); 
        }
    }

    private void SavePartA()
    {
        using (DbConnection con = connectionFactory.CreateConnection())
        {
            con.Open();
            DbCommand command = con.CreateCommand();
            ...
            command.ExecuteNonQuery();             
        }
    }

    private void SavePartB()
    {
        using (DbConnection con = connectionFactory.CreateConnection())
        {
            con.Open();
            DbCommand command = con.CreateCommand();
            ...
            command.ExecuteNonQuery();             
        }
    }
}

And something which represents the Thing:

public class Thing
{
    private DbProviderFactory connectionFactory;

    public void SaveThing()
    {
        using (TransactionScope ts = new TransactionScope())
        {
            // save the bits I want to be done in a single transaction
            SaveHeader();
            foreach (ThingPart part in parts)
            {
                part.SavePart();
            }  
            ts.Complete();    
        }
    }

    private void SaveHeader()
    {
        using (DbConnection con = connectionFactory.CreateConnection())
        {
            con.Open();
            DbCommand command = con.CreateCommand();
            ...
            command.ExecuteNonQuery();             
        }
    }
}

I also have something which manages many Things:

public class ThingManager
{    
    public void SaveThings()
    {        
        using (TransactionScope ts = new TransactionScope())
        {            
            foreach (Thing thing in things)
            {
                thing.SaveThing();
            }
            ts.Complete();
        }
    }    
}

It's my understanding that:

  • The connections will not be new and will be reused from the pool each time (assuming DbProvider supports connection pooling and it is enabled)
  • The transactions will be such that if I just called ThingPart.SavePart (from outside the context of any other class) then part A and B would either both be saved or neither would be.
  • If I call Thing.SaveThing (from outside the context of any other class) then the header and all the parts will all be saved or none will be, i.e. everything will happen in the same transaction
  • If I call ThingManager.SaveThings then all my Things will be saved or none will be, i.e. everything will happen in the same transaction.
  • If I change the DbProviderFactory implementation that is used, it shouldn't make a difference

Are my assumptions correct?

Ignore anything about object structure or responsibilities for persistence; this is an example to help me understand how I should be doing things. I ask partly because it seems not to work when I try to replace Oracle with SQLite as the DB provider factory, and I'm wondering where I should spend time investigating.


Answering your bullets (and I've assumed Microsoft SQL Server 2005 or later):

  1. Connections will not be new and will be reused from the pool

    • This depends: the same connection will be reused for successive steps in your aggregate transaction if all connections are to the same database, with the same credentials, and if SQL Server is able to use the Lightweight Transaction Manager (SQL Server 2005 and later). (But SQL connection pooling still works regardless, if that was what you were asking.)
  2. Atomic SavePart - yes, this will work ACID as expected.
  3. Yes, nesting TransactionScopes within the same ambient transaction will also be atomic. The transaction will only commit when the outermost TransactionScope is completed.
  4. Yes, also atomic, but note that you will be escalating SQL locks. If it makes sense to commit each Thing (and its ThingParts) individually, this would be preferable from a SQL concurrency point of view (see the sketch after this list).
  5. The provider will need to be compatible as a TransactionScope resource manager (and probably DTC compliant as well). For example, don't move your database to Rocket U2 and expect TransactionScopes to work.
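
As a sketch of the trade-off mentioned in point 4 (assuming ThingManager holds a things collection, which the snippet above doesn't declare), committing each Thing in its own scope would look something like this:

public void SaveThings()
{
    foreach (Thing thing in things)
    {
        // Each Thing (header + parts) commits on its own, so locks are held
        // only for that single Thing's work, at the cost of all-or-nothing
        // semantics across the whole collection.
        using (TransactionScope ts = new TransactionScope())
        {
            thing.SaveThing();
            ts.Complete();
        }
    }
}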

Just one gotcha: new TransactionScope() defaults to an isolation level of Serializable, which is overly pessimistic for most scenarios; ReadCommitted is usually more appropriate.
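
For example, a minimal sketch of passing ReadCommitted explicitly (TransactionOptions and IsolationLevel live in System.Transactions; note that nested scopes joining the ambient transaction must specify the same isolation level):

using (TransactionScope ts = new TransactionScope(
    TransactionScopeOption.Required,
    new TransactionOptions { IsolationLevel = IsolationLevel.ReadCommitted }))
{
    // ... do the work under ReadCommitted instead of the default Serializable ...
    ts.Complete();
}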
