Refactoring - Speed increase

How can I make this function more efficient? It currently takes anywhere from 6 to 45 seconds to run. I've run the dotTrace profiler on this specific method, and its total time is anywhere between 6,000 ms and 45,000 ms. The majority of the time is spent in the "MoveNext" and "GetEnumerator" calls.

An example of the timings:

71.55% CreateTableFromReportDataColumns - 18,533 ms - 190 calls
 -- 55.71% MoveNext - 14,422 ms - 10,775 calls

What can I do to speed this method up? It gets called a lot, and the seconds add up:

    private static DataTable CreateTableFromReportDataColumns(Report report)
    {
        DataTable table = new DataTable();
        HashSet<String> colsToAdd = new HashSet<String> { "DataStream" };
        foreach (ReportData reportData in report.ReportDatas)
        {
            IEnumerable<string> cols = reportData.ReportDataColumns.Where(c => !String.IsNullOrEmpty(c.Name)).Select(x => x.Name).Distinct();

            foreach (var s in cols)
            {
                if (!String.IsNullOrEmpty(s))
                    colsToAdd.Add(s);
            }
        }

        foreach (string col in colsToAdd)
        {
            table.Columns.Add(col);
        }

        return table;
    }

If you need the SQL table definitions, here they are:

ReportData

ReportID            int

ReportDataColumn

ReportDataColumnId  int
ReportDataId        int 
Name                varchar(255)    
Value               text    


I believe you should be able to simplify your function into something like this:

var columnsToAdd = report.ReportDatas
                    .SelectMany(r => r.ReportDataColumns)
                    .Select(rdc => rdc.Name)
                    .Distinct()
                    .Where(name => !string.IsNullOrEmpty(name));

And from there add the names to your table.
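
For completeness, a sketch of how the whole method might look with that query. The extra name != "DataStream" guard is my addition, to keep the semantics of the original HashSet seed so a data column that happens to be named "DataStream" isn't added twice:

private static DataTable CreateTableFromReportDataColumns(Report report)
{
    DataTable table = new DataTable();
    table.Columns.Add("DataStream"); // the column the original always seeds

    var columnsToAdd = report.ReportDatas
        .SelectMany(r => r.ReportDataColumns)
        .Select(rdc => rdc.Name)
        .Distinct()
        .Where(name => !string.IsNullOrEmpty(name) && name != "DataStream");

    foreach (string name in columnsToAdd)
    {
        table.Columns.Add(name);
    }

    return table;
}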


Your code runs (almost) nothing but foreach loops, so the conclusion that the method spends most of its time in MoveNext() et al. is not so surprising.

You are doing double work on both the IsNullOrEmpty check and the Distinct (which the HashSet already takes care of).

My version would be:

private static DataTable CreateTableFromReportDataColumns(Report report)
{
    DataTable table = new DataTable();
    HashSet<String> colsToAdd = new HashSet<String> { "DataStream" };
    foreach (ReportData reportData in report.ReportDatas)
    {
        foreach (var column in reportData.ReportDataColumns)
        {
            if (!String.IsNullOrEmpty(column.Name))
                colsToAdd.Add(column.Name);
        }
    }

    foreach (string col in colsToAdd)
    {
        table.Columns.Add(col);
    }

    return table;
}

But I don't expect a huge improvement.


You should have mentioned LINQ to SQL when you asked the question; then you would have gotten responses suggesting you look into your database to see whether it's one long-running query or repeated round-trip querying.
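
One quick way to check that with LINQ to SQL is to point the DataContext's Log property at a TextWriter and watch what SQL actually goes out. A sketch only; ReportsDataContext, Reports, Id, and someReportId are placeholder names, not from the question:

using (var db = new ReportsDataContext())
{
    db.Log = Console.Out; // echoes every SQL statement LINQ to SQL issues

    // If the log shows a separate SELECT per ReportData (or per column row),
    // you are paying one round trip per row. DataLoadOptions.LoadWith pulls
    // the children up front instead; it must be set before the first query.
    var loadOptions = new System.Data.Linq.DataLoadOptions();
    loadOptions.LoadWith<Report>(r => r.ReportDatas);
    loadOptions.LoadWith<ReportData>(rd => rd.ReportDataColumns);
    db.LoadOptions = loadOptions;

    Report report = db.Reports.First(r => r.Id == someReportId);
    DataTable table = CreateTableFromReportDataColumns(report);
}

As for the method itself: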

private static DataTable CreateTableFromReportDataColumns(Report report) 
{ 
    DataTable table = new DataTable(); 
    table.Columns.Add("DataStream");
    IEnumerable<string> moreColumns = report.ReportDatas
      .SelectMany(z => z.ReportDataColumns)
      .Select(x => x.Name)
      .Where(s => s != null && s != "")
      .Distinct();

    foreach (string col in moreColumns) 
    { 
        table.Columns.Add(col); 
    } 

    return table; 
} 

Also, capture the query being issued by using SQL Profiler. Then analyze the IO and TIME of the query by running it with these statements first:

SET STATISTICS TIME ON
SET STATISTICS IO ON
  --your query here

Lastly, you may need an index or two to bring the IO down. Column order is important here: the leading column of each index should match the column you join or filter on, so the engine can seek rather than scan.

CREATE INDEX IX1_ReportData ON ReportData(ReportID, Id)
CREATE INDEX IX1_ReportDataColumn ON ReportDataColumn(ReportDataId, Name)


This might be a slight improvement on Hank's code. It takes advantage of the fact that the HashSet will tell you whether the Add operation succeeded or the element already existed.

private static DataTable CreateTableFromReportDataColumns(Report report)
{
    HashSet<string> uniqueNames = new HashSet<string> { null, "", "DataStream" };

    DataTable table = new DataTable();
    table.Columns.Add("DataStream");

    foreach (ReportData reportData in report.ReportDatas)
    {
        foreach (var dataColumn in reportData.ReportDataColumns)
        {
            if (uniqueNames.Add(dataColumn.Name))
            {
                table.Columns.Add(dataColumn.Name);
            }
        }
    }

    return table;
}

Edit: I went ahead and added null and "" to the hash set at the beginning, so we no longer need the check for null or empty.


  • the string.IsNullOrEmpty check is repeated
  • you can get rid of the foreach loops by using SelectMany (I see Anthony just posted the same :)
  • to keep the same semantics for the "DataStream" column (after switching to Anthony's version), you could do new HashSet<string>(columnsToAdd) { "DataStream" }, but it might be easier/faster to just add the "DataStream" string (via Concat or Union or whatever) and then Distinct() the result, avoiding the HashSet creation entirely; might as well profile both (see the sketch below)
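
For instance, the Concat variant from the last bullet could look like this (a sketch only; whether it actually beats the HashSet is exactly what you'd profile):

IEnumerable<string> names = new[] { "DataStream" }
    .Concat(report.ReportDatas
        .SelectMany(r => r.ReportDataColumns)
        .Select(c => c.Name)
        .Where(n => !string.IsNullOrEmpty(n)))
    .Distinct();

foreach (string name in names)
{
    table.Columns.Add(name);
}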

It might be overkill for this (depending on the number of entries in ReportDatas, the number of columns in each ReportDataColumns, the number of cores on the host machine, etc.), but you could also potentially parallelize.

  • http://msdn.microsoft.com/en-us/magazine/cc163329.aspx
  • Link

If you decided to process the ReportDatas entries in parallel, for instance, you could have each one create its own collection of columns, or have them all write into a ConcurrentBag that you Distinct once everything is done.
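
A minimal sketch of the ConcurrentBag route, assuming report.ReportDatas is already materialized in memory (if it's a lazy LINQ to SQL collection you'd want to load it first). ConcurrentBag lives in System.Collections.Concurrent and Parallel in System.Threading.Tasks; the DataTable is still filled on a single thread, since DataColumnCollection isn't thread-safe:

// seed the bag with the column we always want
var bag = new ConcurrentBag<string>(new[] { "DataStream" });

Parallel.ForEach(report.ReportDatas, reportData =>
{
    foreach (var column in reportData.ReportDataColumns)
    {
        if (!string.IsNullOrEmpty(column.Name))
            bag.Add(column.Name); // ConcurrentBag.Add is thread-safe
    }
});

// de-duplicate once, back on the thread that owns the table
foreach (string name in bag.Distinct())
{
    table.Columns.Add(name);
}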
