
Even work distribution algorithm

A quick question about work balancing.

I have a program that processes files in parallel. Let's say the size of a file is an approximate measure of how long it will take to process. All files are known in advance.

We have N nodes that can work on files. How should the files be distributed so that each node gets as close to the average amount of work as possible?

The idea is pretty simple and I have a couple of approaches in mind, but this really looks like a classical problem with a best solution already in existence.

I just don't know what it's called.

Does anyone know?

Thanks!

EDIT: OK, sorry, I omitted a lot of information. I am working on an MPI implementation, a standard master-slave setup. One master node examines the target directory, picks up the files that need to be processed, and then assigns them to the slave MPI tasks so they can do their part in parallel.

The number of slave nodes is less than 32.

The number of target files is less than 10000.
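
To make the setup concrete, something like the following minimal sketch is what I mean, written with mpi4py just for brevity (the mpi4py binding, the "target_dir" name, and the distribute() placeholder are illustrative, not my actual code):

    from mpi4py import MPI
    import os

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    num_slaves = comm.Get_size() - 1

    if rank == 0:
        # Master: examine the target directory and pair each file with its size.
        names = os.listdir("target_dir")
        files = [(n, os.path.getsize(os.path.join("target_dir", n))) for n in names]
        # distribute() is the piece this question is about: it should return
        # one list of filenames per slave, with roughly equal total size.
        batches = distribute(files, num_slaves)
        for slave, batch in enumerate(batches, start=1):
            comm.send(batch, dest=slave, tag=0)
    else:
        # Slave: receive this rank's batch of filenames and process it.
        batch = comm.recv(source=0, tag=0)
        for name in batch:
            pass  # process(name)  # hypothetical per-file work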


You are asking about the classic multiprocessor scheduling problem. The Wikipedia article is a good start for a basic overview of the algorithms (http://en.wikipedia.org/wiki/Multiprocessor_scheduling).


Here's a thought. Sort the (filename, size) pairs in descending order of size. Then, starting with the biggest file, assign each file to the node that currently has the lowest cumulative size of assigned files (break ties however you like). The "one for me, one for you" approach.

It takes O(M log M) to sort M file records and O(M*N) to distribute them (somebody double-check this), but I think the algorithm gives good - optimal? - results.

Edit: after checking out the link provided by the other poster, it turns out this is the LPT (longest processing time) approach, which runs in polynomial time but isn't optimal in terms of getting each node's total as close as possible to the average.
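
For what it's worth, here is a minimal sketch of the greedy assignment described above in plain Python (names are illustrative), keeping the node loads in a min-heap:

    import heapq

    def lpt_assign(files, num_nodes):
        """Assign each (filename, size) pair, largest first, to the node
        with the smallest cumulative size so far."""
        ordered = sorted(files, key=lambda f: f[1], reverse=True)
        loads = [(0, node) for node in range(num_nodes)]  # (cumulative size, node)
        heapq.heapify(loads)
        assignment = [[] for _ in range(num_nodes)]
        for name, size in ordered:
            load, node = heapq.heappop(loads)        # currently lightest node
            assignment[node].append(name)
            heapq.heappush(loads, (load + size, node))
        return assignment

    # Example: two nodes end up with totals 100 and 110.
    print(lpt_assign([("a", 70), ("b", 50), ("c", 40), ("d", 30), ("e", 20)], 2))

The heap makes the distribution step O(M log N) instead of O(M*N), though with fewer than 10000 files and 32 nodes either is fast enough.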


I've been experimenting with parallelising reduction functions using recursive divide & conquer algorithms, and I have settled on having the number of jobs submitted to the nodes satisfy the inequality

l >= n / P^2

where l is the number of jobs, n is the size of the original workload, and P is the number of 'nodes' (workers, processors, whatever you want to call them).

On most computers and mobile devices n will be many orders of magnitude greater than P. You want to make sure that the number of individual jobs isn't so large that you spend all your time dividing them up and sending them to workers.
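
As a rough sketch of what that inequality means in practice (plain Python, function names are mine):

    import math

    def num_jobs(n, p):
        """Smallest job count l with l >= n / p**2, but at least one job per worker."""
        return max(p, math.ceil(n / p ** 2))

    def chunk_sizes(n, p):
        """Split a workload of size n into num_jobs(n, p) roughly equal jobs."""
        l = num_jobs(n, p)
        base, extra = divmod(n, l)
        return [base + 1] * extra + [base] * (l - extra)

    print(num_jobs(1_000_000, 8))  # 15625 jobs
    print(chunk_sizes(100, 4))     # [15, 15, 14, 14, 14, 14, 14]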


If the lengths of all work units are known (or can be estimated) in advance, this basically becomes the bin packing problem (https://en.wikipedia.org/wiki/Bin_packing_problem). It is solvable heuristically with the "first fit" and "best fit" algorithms (see the link).
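
Here is a minimal sketch of the first-fit decreasing heuristic, assuming a fixed per-bin capacity (for this use case the capacity might be, say, the target average load per node; all names are illustrative):

    def first_fit_decreasing(sizes, capacity):
        """Place each item, largest first, into the first bin with room,
        opening a new bin when none has enough space left."""
        bins, free = [], []
        for size in sorted(sizes, reverse=True):
            for i, space in enumerate(free):
                if size <= space:
                    bins[i].append(size)
                    free[i] -= size
                    break
            else:
                bins.append([size])
                free.append(capacity - size)
        return bins

    # Example: items packed into three bins of capacity 80.
    print(first_fit_decreasing([70, 50, 40, 30, 20, 10], 80))
    # [[70, 10], [50, 30], [40, 20]]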
