
Does a transaction block reduce performance in SQL Server?

A coworker and I are arguing about the effect of non-vital BEGIN TRAN....COMMIT TRAN blocks. I've written about 140 stored procedures for simple insert-update-delete operations, and since we may later need to do some extra operations in them, I've already included the might-be-necessary BEGIN TRAN and COMMIT TRAN blocks, like so:

CREATE PROCEDURE [Users].[Login_Insert]

        @Username           nvarchar (50) OUTPUT,
        @Password           char (40),
        @FullName           nvarchar (150),
        @LoginTypeId        int

AS

SET NOCOUNT ON;

BEGIN TRY
BEGIN TRAN

INSERT [Users].[Login]
(
        [Username],
        [Password],
        [FullName],
        [LoginTypeId]
)
VALUES
(
        @Username,
        @Password,
        @FullName,
        @LoginTypeId
)

COMMIT TRAN
RETURN 1
END TRY

BEGIN CATCH
ROLLBACK TRAN

RETURN -1
END CATCH

GO
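
To make the intent concrete: the transaction only starts to matter once the procedure does more than one write. For example, if I later add an audit insert alongside the login insert (the [Users].[LoginAudit] table below is purely hypothetical), the existing BEGIN TRAN / COMMIT TRAN would keep both statements atomic:

BEGIN TRAN

-- Original insert
INSERT [Users].[Login] ([Username], [Password], [FullName], [LoginTypeId])
VALUES (@Username, @Password, @FullName, @LoginTypeId)

-- Hypothetical extra operation added later
INSERT [Users].[LoginAudit] ([Username], [AuditDate])
VALUES (@Username, GETDATE())

COMMIT TRAN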

Now many of these transactions may never be necessary. Are these extraneous blocks going to affect the performance in a noticeable manner? Thanks in advance.


Not enough to notice.

That is, each TXN will be open for an extra OhNoSecond between BEGIN TRAN and INSERT. I'd be impressed if anyone could measure it.

However, if you did BEGIN TRAN then prompted for user input, your legs need breaking...

Good idea though: I do this so all my write procs are 100% consistent, have the same error handling, can be nested, etc.

Edit: After Remus' answer, I see I didn't link to my nested TXN template: Nested stored procedures containing TRY CATCH ROLLBACK pattern? It differs from Remus' in that it always rolls back and has no SAVEPOINTs.
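
As a rough sketch, that template looks something like this (the procedure name here is made up; the full version, including the RAISERROR details, is in the linked question):

CREATE PROCEDURE [Users].[SomeWriteProc] -- hypothetical name
AS
SET XACT_ABORT, NOCOUNT ON

BEGIN TRY
    BEGIN TRANSACTION

    -- ... perform the writes, possibly calling other procs built on the same template ...

    COMMIT TRANSACTION
END TRY
BEGIN CATCH
    -- A nested call may already have rolled everything back, so check first
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION

    -- Re-raise the caught error to the caller rather than swallowing it
    DECLARE @ErrorMessage nvarchar(2048)
    SELECT @ErrorMessage = ERROR_MESSAGE()
    RAISERROR (@ErrorMessage, 16, 1)
END CATCH
GO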

Edit: a quick and dirty test shows that the version with the transaction is quicker around 2/3 of the time:

SET NOCOUNT ON
SET STATISTICS IO OFF

DECLARE @date DATETIME2
DECLARE @noTran INT
DECLARE @withTran INT

SET @noTran = 0
SET @withTran = 0

-- Note: the timed updates below run against smalltable, a pre-existing table
-- in the test database; the @t table variable is not part of the timings
DECLARE @t TABLE (ColA INT)
INSERT @t VALUES (1)

DECLARE 
  @count INT,
  @value INT

SET @count = 1

WHILE @count < 100
BEGIN

  -- Time the update without an explicit transaction
  SET @date = GETDATE()
  UPDATE smalltable SET smalltablename = CASE smalltablename WHEN 'test1' THEN 'test' ELSE 'test2' END WHERE smalltableid = 1
  SET @noTran = @noTran + DATEDIFF(MICROSECOND, @date, GETDATE())

  -- Time the same update wrapped in an explicit transaction
  SET @date = GETDATE()
  BEGIN TRAN
  UPDATE smalltable SET smalltablename = CASE smalltablename WHEN 'test1' THEN 'test' ELSE 'test2' END WHERE smalltableid = 1
  COMMIT TRAN
  SET @withTran = @withTran + DATEDIFF(MICROSECOND, @date, GETDATE())

  SET @count = @count + 1
END

SELECT 
  @noTran / 1000000. AS Seconds_NoTransaction, 
  @withTran / 1000000. AS Seconds_WithTransaction

Seconds_NoTransaction    Seconds_WithTransaction
2.63200000               2.70400000
2.16700000               2.12300000

Reversing the order of the two updates gives the same behaviour.


In the code you posted there will be no measurable effect, but transactions do have an effect on performance: they can dramatically improve performance due to log flush commit grouping, or they can dramatically reduce performance due to incorrectly managed contention issues. But the bottom line is that when transactions are needed for correctness, you cannot skip having them. That being said, your template is actually quite bad with regard to transactions and try-catch blocks. A transaction in a catch block needs a tri-state logic check of the XACT_STATE return value (-1, 0, 1) to properly handle doomed transactions. See Exception handling and nested transactions for an example.

Also, you should never mix try-catch error handling with return-code error handling. Pick one and stick with it, preferably try-catch. In other words, your stored procedure should RAISE, not return -1. Mixing exceptions with error codes makes your code a nightmare to maintain and call properly.
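
As a rough sketch (not the exact pattern from the linked article), the CATCH block would look something like this, checking XACT_STATE() and re-raising instead of returning -1:

BEGIN CATCH
    -- XACT_STATE() = -1: the transaction is doomed and can only be rolled back
    -- XACT_STATE() =  1: the transaction is still live; here we roll back our own work
    -- XACT_STATE() =  0: no active transaction, so there is nothing to roll back
    IF XACT_STATE() <> 0
        ROLLBACK TRANSACTION

    -- Re-raise the original error to the caller instead of RETURN -1
    DECLARE @ErrorMessage nvarchar(2048), @ErrorSeverity int, @ErrorState int
    SELECT  @ErrorMessage  = ERROR_MESSAGE(),
            @ErrorSeverity = ERROR_SEVERITY(),
            @ErrorState    = ERROR_STATE()
    RAISERROR (@ErrorMessage, @ErrorSeverity, @ErrorState)
END CATCH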


TL;DR - A stored procedure containing two SELECT queries dealing with 33 million records took me 45 seconds to execute without a transaction, 48 seconds with one.

Disclaimer: I spent about 4 hours writing a stored procedure and came across a somewhat measurable answer to this question (NOTE: it's not that significant!). Parts of the query logic are intentionally omitted due to the sensitivity of the data I was working with.

Methodology: The procedure was developed using two queries - one doing most of the heavy lifting, and the other calculating one additional field on its own so that field isn't computed more times than necessary. I've broken it down into two steps:

1) I wrote two Common Table Expressions feeding one SELECT INTO a temporary table, then queried that table again. I had to do this because the requirements called for a couple of scalar-valued functions that would otherwise have run against over 33 million records instead of 355.

2) I attached a scalar-valued function AFTER the first query so it didn't have to look through the 30 million records (it made a huge difference, if you care).

Query: For readability, I've cut out a large part of the query (the CASE statement).

CREATE PROC GET_PAYMENT_SUMS_BY_CLIENT
AS

--Required for reporting in a specific Business Intelligence tool later; optional
SET FMTONLY OFF; 

BEGIN TRANSACTION

--Query 1
--This CTE checks over 30 million records
WITH CTE1 AS (
  SELECT CASE VARIABLE
  --170 case conditions go here
  END AS TheType,
  Amount,
  PK1
  FROM TABLE1),

--This CTE pivots the sums to get the data in the style I want it in
CTE2 AS (
  SELECT PK1, [PIVOT1], [PIVOT2], [PIVOT3]
  FROM
    (SELECT * FROM CTE1) AS BaseTable --Alias was just to get it to execute
    PIVOT (
      SUM(Amount)
      FOR TheType IN ([PIVOT1], [PIVOT2], [PIVOT3])
    ) AS PivotTable
)

SELECT TABLE2.NAME, CTE2.* INTO #TEMPORARY_TABLE 
FROM CTE2 
JOIN TABLE2 ON CTE2.PK1 = TABLE2.PK2

--Query 2
--Written to force the function to look at 355 records instead of 33 million
SELECT *, dbo.SCALAR_VALUED_FUNCTION(PK2) FROM #TEMPORARY_TABLE

COMMIT TRANSACTION

Findings:
With Transaction - With the transaction logic in place, the procedure takes 48 seconds to process more than 33 million records through a CASE statement containing 170 conditions, pivot the data by sum, place the data into a temporary table, and apply a scalar-valued function AFTER the first query has run.

Without Transaction - With the BEGIN TRANSACTION and COMMIT TRANSACTION lines commented out, all of the aforementioned steps are accomplished in 45 seconds. This is about 7% faster than the run with the transaction block: 3/45 = 0.0666... ~7%.

Conclusion: While my efforts cannot tell you whether running the same query over 10 records would yield the same proportional difference, they can tell you that it begins to matter more as you move to larger data sets and/or more complicated queries.

I hope this information serves a purpose for someone out there!
