
What is the correct PLINQ syntax to convert this foreach loop to parallel execution?

Update 2011-05-20 12:49AM: The plain foreach is still 25% faster than the parallel solution for my application. Also, don't use the collection count for the maximum degree of parallelism; use something closer to the number of cores on your machine.


I have an IO-bound task that I would like to run in parallel: I want to apply the same operation to every file in a folder. Internally, the operation results in a Dispatcher.Invoke that adds the computed file info to a collection on the UI thread. So, in a sense, the work result is a side effect of the method call, not a value returned directly from it.

This is the core loop that I want to run in parallel:

foreach (ShellObject sf in sfcoll)
    ProcessShellObject(sf, curExeName);

The context for this loop is here:

        var curExeName = Path.GetFileName(Assembly.GetEntryAssembly().Location);
        using (ShellFileSystemFolder sfcoll = ShellFileSystemFolder.FromFolderPath(_rootPath))
        {
            //This works, but is not parallel.
            foreach (ShellObject sf in sfcoll)
                ProcessShellObject(sf, curExeName);

            //This doesn't work.
            //My attempt at PLINQ.  This code never calls method ProcessShellObject.

            var query = from sf in sfcoll.AsParallel().WithDegreeOfParallelism(sfcoll.Count())
                        let p = ProcessShellObject(sf, curExeName)
                        select p;
        }

    private String ProcessShellObject(ShellObject sf, string curExeName)
    {
        String unusedReturnValueName = sf.ParsingName;
        try
        {
            DesktopItem di = new DesktopItem(sf);
            //Update DesktopItem stuff
            di.PropertyChanged += new PropertyChangedEventHandler(DesktopItem_PropertyChanged);
            ControlWindowHelper.MainWindow.Dispatcher.Invoke(
                (Action)(() => _desktopItemCollection.Add(di)));
        }
        catch (Exception ex)
        {
        }
        return unusedReturnValueName;
    }

Thanks for any help!

+tom


EDIT: Regarding the update to your question: I hadn't spotted that the task was IO-bound, and presumably all the files are on a single (traditional?) disk. Yes, that would go slower, because you're introducing contention on a non-parallelizable resource, forcing the disk to seek all over the place.

IO-bound tasks can still be parallelized effectively sometimes - but it depends on whether the resource itself is parallelizable. For example, an SSD (which has much smaller seek times) may completely change the characteristics you're seeing - or if you're fetching over the network from several individually-slow servers, you could be IO-bound but not on a single channel.


You've created a query, but never used it. The simplest way of forcing the whole query to be evaluated would be to call Count() or ToList() on it, or something similar. However, a better approach would be to use Parallel.ForEach:

var options = new ParallelOptions { MaxDegreeOfParallelism = sfcoll.Count() };
Parallel.ForEach(sfcoll, options, sf => ProcessShellObject(sf, curExeName));

I'm not sure that setting the maximum degree of parallelism to the collection count is the right approach, though; it may work, but I wouldn't rely on it. A different way of approaching this would be to start all the operations as tasks, specifying TaskCreationOptions.LongRunning, as sketched below.
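A minimal sketch of that task-based approach, reusing sfcoll, curExeName and ProcessShellObject from the question (and assuming this loop already runs off the UI thread, so that waiting on the tasks can't block the Dispatcher.Invoke calls):

// requires: using System.Linq; using System.Threading.Tasks;

// One long-running task per file. LongRunning hints to the scheduler that the
// task will spend its time blocking on IO, so it typically gets a dedicated
// thread instead of tying up a thread-pool worker.
var tasks = sfcoll
    .Select(sf => Task.Factory.StartNew(
        () => ProcessShellObject(sf, curExeName),
        TaskCreationOptions.LongRunning))
    .ToArray();

// Wait for every file to finish before the using block disposes sfcoll.
Task.WaitAll(tasks);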


Your query object created via LINQ is an IEnumerable; it gets evaluated only when you enumerate it (e.g. via a foreach loop):

        var query = from sf in sfcoll.AsParallel().WithDegreeOfParallelism(sfcoll.Count())
                    let p = ProcessShellObject(sf, curExeName)
                    select p;
        foreach(var q in query) 
        {
            // ....
        }
        // or:
        var results = query.ToArray(); // also enumerates query


You should add a line at the end:

var results = query.ToList();
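As an aside: since the useful work here is a side effect rather than the returned value, PLINQ's ForAll will also force execution without collecting any results. A minimal sketch, reusing sfcoll and curExeName from the question and capping the parallelism at the core count, per the update at the top:

// Runs ProcessShellObject for each item in parallel, discarding the return values.
sfcoll.AsParallel()
      .WithDegreeOfParallelism(Environment.ProcessorCount)
      .ForAll(sf => ProcessShellObject(sf, curExeName));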