NpgsqlDataReader GetOrdinal throwing exceptions... any way around?

I built a wrapper around Npgsql for a bunch of the methods I usually use in my projects' DAL. Two of them I usually use to fill DTOs straight from a data reader. In a fill helper method, I'll instantiate the DTO and iterate through its properties, mapping the data reader's data to the corresponding property. The fill method is generated most of the time.

Since I allow many of the properties to be null or to use the DTO's default values, I use a method to check whether the data reader's data is valid for a property before filling it in. So I'll have an IsValidString("fieldname") and a DRGetString("fieldname") method, like so:

public bool IsValidString(string fieldName)
{
    // Relies on GetOrdinal returning -1 for unknown fields
    // (pre-2.0.2 Npgsql behavior; see below)
    int ordinal = data.GetOrdinal(fieldName);
    return ordinal != -1 && !data.IsDBNull(ordinal);
}

public string DRGetString(string fieldName)
{
    // Assumes the caller checked IsValidString first
    return data.GetString(data.GetOrdinal(fieldName));
}

My fill method is delegated to whatever method executed the query and looks like:

public static object FillObject(DataParse<PostgreSQLDBDataParse> dataParser)
{
    TipoFase obj = new TipoFase();

    if (dataParser.IsValidInt32("T_TipoFase"))
        obj.T_TipoFase = dataParser.DRGetInt32("T_TipoFase");

    if (dataParser.IsValidString("NM_TipoFase"))
        obj.NM_TipoFase = dataParser.DRGetString("NM_TipoFase");

    //...rest of the properties; this is usually autogenerated by a T4 template

    return obj;
}

This was working fine and dandy in Npgsql pre-2.0.2. When GetOrdinal was called and the field didn't exist in the data reader, I'd simply get -1 back. Easy to return false from IsValidString() and simply skip to the next property. The performance hit from checking nonexistent fields was practically negligible.

Unfortunately, changes to Npgsql make GetOrdinal throw an exception when the field doesn't exist. I have a simple workaround in which I wrap the code in a try/catch and return false from the catch. But I can feel the hit in performance, especially when I go into debug mode. Filling a long list takes minutes.
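
For reference, a minimal sketch of that try/catch workaround (GetOrdinal follows the ADO.NET convention of throwing IndexOutOfRangeException for an unknown field name):

public bool IsValidString(string fieldName)
{
    try
    {
        // Newer Npgsql throws instead of returning -1 for unknown fields
        int ordinal = data.GetOrdinal(fieldName);
        return !data.IsDBNull(ordinal);
    }
    catch (IndexOutOfRangeException)
    {
        // Field doesn't exist in this reader: treat as invalid
        return false;
    }
}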

Supposedly, Npgsql has a parameter that can be added to the connection string (Compatibility) to enable the backward-compatible behavior for this method, but I've never gotten that to work correctly (I always get an exception because of a malformed connection string). Anyway, I'm looking for suggestions for better workarounds. Is there a better way to fill the object from the data reader, or some way to work around the exception problem?
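
For what it's worth, this is roughly what I understand the connection string to look like; the keyword ("Compatible") and its version value are my assumption from reading the Npgsql 2.x docs, so verify them against the Npgsql build you actually use:

// "Compatible" keyword and version value are assumptions, not verified
string connString =
    "Server=127.0.0.1;Port=5432;Database=mydb;" +
    "User Id=myuser;Password=mypass;" +
    "Compatible=2.0.1";

using (NpgsqlConnection conn = new NpgsqlConnection(connString))
{
    conn.Open();
    // ...run the query and fill objects as usual
}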


I have created a solution to my problem that doesn't require big changes and shows interesting performance (or so it seems). It might even grow into a new parsing library / wrapper.

Basically, I iterate through the data reader's fields and copy each one into a collection (in my case, a List). Then I check for valid data and, if the data is considered valid, copy it to the object's property.

So I'll have:

public class ParserFields
{
    public string FieldName { get; set; }
    public Type FieldType { get; set; }
    public object Data { get; set; }
}
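
The GetReaderFieldList helper just snapshots the reader's columns; a minimal sketch of it, assuming data is the wrapper's underlying NpgsqlDataReader as in the methods above:

public List<ParserFields> GetReaderFieldList()
{
    List<ParserFields> fields = new List<ParserFields>(data.FieldCount);
    for (int i = 0; i < data.FieldCount; i++)
    {
        fields.Add(new ParserFields
        {
            FieldName = data.GetName(i),
            FieldType = data.GetFieldType(i),
            // GetValue returns DBNull.Value for NULLs; DBNull's ToString()
            // is String.Empty, which is what the validity checks below test
            Data = data.GetValue(i)
        });
    }
    return fields;
}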

and I'll fill the object using:

public static object FillObjectHashed(DataParse<PostgreSQLDBDataParse> dataParser)
{
    //Get the field list, with each field's name, type and data
    List<ParserFields> pflist = dataParser.GetReaderFieldList();

    //create resulting object instance
    CandidatoExtendido obj = new CandidatoExtendido();

    //check for existing field and valid data and fill the object
    ParserFields pfdt = pflist.Find(objt => objt.FieldName == "NS_Candidato");
    if (pfdt != null && pfdt.FieldType == typeof(int) && pfdt.Data.ToString() != String.Empty)
        obj.NS_Candidato = (int)pfdt.Data;

    pfdt = pflist.Find(objt => objt.FieldName == "NM_Candidato");
    if (pfdt != null && pfdt.FieldType == typeof(string) && pfdt.Data.ToString() != String.Empty)
        obj.NM_Candidato = (string)pfdt.Data;

    pfdt = pflist.Find(objt => objt.FieldName == "Z_Nasc");
    if (pfdt != null && pfdt.FieldType == typeof(DateTime) && pfdt.Data.ToString() != String.Empty)
        obj.Z_Nasc = (DateTime)pfdt.Data;

    //...

    return obj;
}
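
The fill method gets called once per returned row when building the results list, roughly like this (the Read() call on my wrapper is illustrative, standing in for however it advances the underlying NpgsqlDataReader):

List<CandidatoExtendido> results = new List<CandidatoExtendido>();

// Illustrative loop: Read() is a stand-in, not the wrapper's real API
while (dataParser.Read())
{
    results.Add((CandidatoExtendido)FillObjectHashed(dataParser));
}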

I timed my variations to see the differences. I ran a search that returned 612 results. First I queried the database twice, to take into account the first run of the query and the subsequent differences related to caching (which were quite significant). My FillObject method simply created a new instance of the desired object to be added to the results list.

  • 1st query to List of object's instances : 2896K ticks
  • 2nd query (same as first) : 1141K ticks

Then I tried the different versions of the fill method:

  • To a list of the desired objects, filled with returned data or defaults, checking all of the object's properties: 3323K ticks
  • To a list of the desired objects, checking only the object's properties returned by the search: 1127K ticks
  • To a list of the desired objects, using the lookup list, checking only the returned fields: 1097K ticks
  • To a list of the desired objects, using the lookup list, checking all of the fields (minus a few nested properties): 1107K ticks

The original code I was using consumed nearly three times as many ticks as a method limited to the desired fields. The exceptions were killing it.

With the new code for the FillObject method, the overhead of checking nonexistent fields is minimal compared to checking only the desired fields.

This seems to work nicely, for now at least. I might still look for a couple of optimizations. Any suggestion will be appreciated!
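
One optimization I might try first (just a sketch, not measured): the repeated List<T>.Find calls scan the field list once per property, so swapping the list for a Dictionary keyed on field name makes each lookup O(1). Inside FillObjectHashed, replacing the Find calls, it would look something like this (ToDictionary needs a using System.Linq; directive):

// Build the lookup once per row; assumes field names are unique per query
Dictionary<string, ParserFields> pfmap =
    dataParser.GetReaderFieldList().ToDictionary(pf => pf.FieldName);

ParserFields pfdt;
if (pfmap.TryGetValue("NS_Candidato", out pfdt)
    && pfdt.FieldType == typeof(int)
    && pfdt.Data.ToString() != String.Empty)
    obj.NS_Candidato = (int)pfdt.Data;

//...same pattern for the remaining properties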
