
Bad Form to use try/catch as a test?

I get the feeling that this is, but I wanted to confirm- is it bad form to do something like:

try
{
    SqlUpload(table);
}
catch(PrimaryKeyException pke)
{   
          DeleteDuplicatesInTable(table);
          SqlUpload(table);
}
catch(Exception ex)
{
    Console.Write(ex);
}

Is it better to do this to potentially save on efficiency in case the table doesn't have duplicates, or is it better to run the delete-duplicates bit anyway? (I'm assuming conditions where the table will upload as long as there are no duplicates within the table itself). Also, given how try-catch statements impact performance, would it even be faster to do it this way at all?

I apologize for the crude nature of this example, but it was just to illustrate a point.


Exceptions can be used correctly in transaction management, but that is not the case in this example. At first glance, this seemed similar to what Linq2Sql's DataContext class does in its call to SubmitChanges(). However, that analogy was incorrect. (Please see Chris Marisic's comment on my post for an accurate criticism of the comparison.)

On exceptions

In general, if there is some issue that is likely to be encountered, you should check for it first. Exceptions should be used when a response is truly "exceptional" (meaning it is unexpected in the context of proper usage). If the proper usage of a function within a completely valid context throws an exception, then you are probably using exceptions incorrectly.

Excerpt from DataContext.SubmitChanges

This code shows an example of the correct usage of exceptions in transaction management.

Note: Just because Microsoft does it doesn't automatically mean it's right. However, their code does have a pretty good track record.

    DbTransaction transaction = null;
    bool flag = false; // tracks whether this method opened the connection
    try
    {
        try
        {
            if (this.provider.Connection.State == ConnectionState.Open)
            {
                this.provider.ClearConnection();
            }
            if (this.provider.Connection.State == ConnectionState.Closed)
            {
                this.provider.Connection.Open();
                flag = true;
            }
            transaction = this.provider.Connection.BeginTransaction(IsolationLevel.ReadCommitted);
            this.provider.Transaction = transaction;
            new ChangeProcessor(this.services, this).SubmitChanges(failureMode);
            this.AcceptChanges();
            this.provider.ClearConnection();
            transaction.Commit();
        }
        catch
        {
            if (transaction != null)
            {
                try
                {
                    transaction.Rollback();
                }
                catch
                {
                }
            }
            throw;
        }
        return;
    }
    finally
    {
        this.provider.Transaction = null;
        if (flag)
        {
            this.provider.Connection.Close();
        }
    }


Yes, this type of code is considered bad form in .NET.

You would be better off writing code similar to

if (HasPrimaryKeyViolations(table))
    DeletePrimaryKeyViolations(table);

SqlUpload(table);


Reading this code, I would assume that primary key violations are an exceptional case and not expected. If they are expected, you should remove the duplicates beforehand; do not rely on exception handling for an expected case.


In all common languages/interpreters/compilers, exception handling is implemented to have a minimal performance impact when an exception isn't raised -- under the hood, adding an exception handler is usually just pushing a single value onto a stack or something similar. Just adding a try block won't usually have a performance impact.

On the other hand, when an exception is actually raised, things can get very slow very fast. It's the trade-off for being able to add the try block without worrying about performance, and it's usually seen as acceptable, because you only take the performance hit if something unexpected has already gone wrong somewhere else.

So, in theory, if there is a condition that you expect to happen, use an if instead. It's semantically better, because it expresses to the reader that the bad condition is probably going to happen from time to time (e.g., the user types in some invalid input), while the try expresses something that you hope never happens (the data source is corrupt). As above, it's also going to be easier on performance.
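To make the contrast concrete, here is a minimal sketch (the method names ParseWithCheck and ParseWithCatch are hypothetical, invented for illustration) comparing the check-first style with the catch style for an expected condition:

```csharp
using System;

public class CheckVsCatch
{
    // Expected condition: handle invalid input with a check, not a catch.
    public static int ParseWithCheck(string input)
    {
        if (int.TryParse(input, out int value))
            return value;
        return -1; // bad input is anticipated, so no exception is raised
    }

    // Same result via exceptions: works, but pays the exception cost
    // every time someone types something invalid.
    public static int ParseWithCatch(string input)
    {
        try
        {
            return int.Parse(input);
        }
        catch (FormatException)
        {
            return -1;
        }
    }

    static void Main()
    {
        Console.WriteLine(ParseWithCheck("42"));   // 42
        Console.WriteLine(ParseWithCheck("oops")); // -1
        Console.WriteLine(ParseWithCatch("oops")); // -1
    }
}
```

Both methods return the same values; the difference is that the second one raises and unwinds a FormatException on every bad input.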

Of course, rules are defined by their exceptions (no pun intended). In practice, there are two situations where this becomes the wrong answer:

First, if you are performing a complex operation like parsing a file, and it's all-or-nothing -- if one field in the file is corrupt, you want to bail on the whole operation -- exceptions allow you to jump out of the whole complex process up to an exception handler encapsulating the entire parse. Sure, you could litter the parsing code with checks and return values and checks on the return values -- but it's going to be a lot cleaner just to throw the exception and let it rise up to the top of the operation. Even if you expect that the input is going to be bad sometimes, if there isn't a reasonable way to handle the error exactly at the point where it occurs, use exceptions to let the error rise up to a more appropriate place to handle it. That's really what exceptions were for in the first place -- getting rid of all that error-handling code down in the details, and moving it to one consolidated, reasonable place.
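A sketch of that all-or-nothing pattern (the record format and the ParseRecord helper are hypothetical): a FormatException raised deep inside the parse rises straight to the single handler wrapping the whole operation, so the parsing code itself carries no error handling.

```csharp
using System;
using System.Linq;

public class AllOrNothingParse
{
    // Hypothetical record format: a line of comma-separated integers.
    // If any field is corrupt, int.Parse throws and the whole parse
    // is abandoned -- no per-field error checks needed here.
    public static int[] ParseRecord(string line)
    {
        return line.Split(',').Select(int.Parse).ToArray();
    }

    static void Main()
    {
        foreach (var line in new[] { "1,2,3", "4,x,6" })
        {
            try
            {
                // one handler encapsulates the entire operation
                int[] fields = ParseRecord(line);
                Console.WriteLine("ok: " + string.Join("+", fields));
            }
            catch (FormatException)
            {
                Console.WriteLine("bailed on: " + line);
            }
        }
    }
}
```

The corrupt second line aborts mid-record, and control lands in the one consolidated handler.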

Second, a library might not let you make the choice. For example, int.TryParse is a good alternative to int.Parse if the input hasn't already been vetted. On the other hand, a library might not offer a non-exception-throwing option. In that case, don't brew up your own code to check without exceptions -- though it might be bad form to use exception handling to check for an expected condition, it's worse form to duplicate the functionality of the library just to avoid an exception. Just use the try/catch, and maybe add a snide comment about how you didn't WANT to do it, but the library authors MADE you :).
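As one real instance of a library forcing the choice: XmlDocument.LoadXml has no Try variant, so validating a string as XML means catching XmlException (the IsWellFormedXml wrapper name is my own, hypothetical):

```csharp
using System;
using System.Xml;

public class LibraryForcesCatch
{
    // XmlDocument offers no TryLoadXml, so the only sane way to ask
    // "is this well-formed?" is to attempt the load and catch the
    // exception -- reimplementing an XML parser to avoid it would be
    // far worse form.
    public static bool IsWellFormedXml(string text)
    {
        try
        {
            new XmlDocument().LoadXml(text);
            return true;
        }
        catch (XmlException)
        {
            return false;
        }
    }

    static void Main()
    {
        Console.WriteLine(IsWellFormedXml("<a><b/></a>")); // True
        Console.WriteLine(IsWellFormedXml("<a>"));         // False
    }
}
```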

For your particular example, probably stick with the exception. While exception handling isn't considered 'fast', it's still faster than a round trip to the database, and there isn't going to be a reasonable way to check for that exception without sending the command anyway. Plus, hitting a database is interfacing with an external system; that in itself is a pretty good reason to expect the unexpected -- exception handling almost always makes sense when you are leaving your particular controlled environment.

Or, more specifically to your example, you may consider using a stored procedure with a MERGE statement, to use the source data in table to update or insert as appropriate; the update will be a little friendlier on all fronts than doing a delete-then-insert for existing keys.
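A hedged sketch of such a MERGE in T-SQL -- the table and column names (TargetTable, StagingTable, Id, Payload) are hypothetical placeholders, not taken from the question:

```sql
-- Upsert: update rows whose key already exists, insert the rest.
-- Assumes Id is the primary key of TargetTable.
MERGE INTO TargetTable AS t
USING StagingTable AS s
    ON t.Id = s.Id
WHEN MATCHED THEN
    UPDATE SET t.Payload = s.Payload
WHEN NOT MATCHED THEN
    INSERT (Id, Payload) VALUES (s.Id, s.Payload);
```

This sidesteps the primary-key violation entirely: existing keys are updated in place rather than deleted and re-inserted.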


Try/catch blocks are expensive when an exception is actually thrown, so don't use them as control structures. Instead, consider enforcing the constraint with triggers at the database level.


One problem is that any exception thrown by the second call to SqlUpload(), inside the catch block, is not caught; unless a caller handles it, it will crash the application, because there is no further exception handling.
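A minimal sketch of the fix, with runnable stubs standing in for the question's hypothetical SqlUpload, DeleteDuplicatesInTable, and PrimaryKeyException (the simulation logic is invented for illustration):

```csharp
using System;

public class GuardedRetry
{
    // Stub standing in for the question's hypothetical exception type.
    public class PrimaryKeyException : Exception { }

    static bool cleaned; // simulates whether duplicates were removed

    public static void SqlUpload(string table)
    {
        if (!cleaned)
            throw new PrimaryKeyException(); // simulate a duplicate key
        Console.WriteLine("uploaded " + table);
    }

    public static void DeleteDuplicatesInTable(string table)
    {
        cleaned = true;
    }

    // The question's pattern, with the retry guarded as well.
    public static bool TryUpload(string table)
    {
        try
        {
            SqlUpload(table);
            return true;
        }
        catch (PrimaryKeyException)
        {
            DeleteDuplicatesInTable(table);
            try
            {
                SqlUpload(table); // the retry is now guarded too
                return true;
            }
            catch (Exception ex)
            {
                Console.Write(ex);
                return false;
            }
        }
        catch (Exception ex)
        {
            Console.Write(ex);
            return false;
        }
    }

    static void Main()
    {
        Console.WriteLine(TryUpload("orders")); // True
    }
}
```

With the inner try/catch, a failure during the retry is reported instead of propagating uncaught.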


You'll probably get a few different opinions on this, but mine is that try...catch should be used for things that shouldn't normally happen (although sometimes it's unavoidable). So by that rule, you should ask whether the duplicates appear in the table during normal execution of the program, or whether they should not exist given allowable execution of the program.

Just to clarify, I mean "normal" usage of the program, not "correct" usage (e.g., when I test it the duplicates don't appear (correct usage), but when the customer uses it they do (perhaps incorrect, but normal), so I need to get rid of them). The try...catch would only be justified if the duplicates appear in a way that the program cannot control (e.g., sometimes someone goes into the database and adds a duplicate row by hand -- hopefully not a normal situation).

Something like duplicate rows, though, is likely a symptom of some other bug, so you shouldn't mask it but try and find the root cause so that the deletion isn't necessary.

