Best sort for a multi-threaded application
Today in an interview I was asked which sort you would use for a multi-threaded application: merge sort or quicksort?
You use merge sort for multi-threaded applications.
The reason:
Merge sort divides the problem into separate smaller problems (smaller arrays) and then merges them. That can be done in separate threads.
Quicksort partitions a single array in place around a pivot, so it's harder to divide the problem efficiently between threads.
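To make that concrete, here is a minimal sketch of the idea in C++ (the function name, the cutoff value, and the use of std::async are assumptions of mine, not something from the answer): sort the two halves in separate tasks, then merge them.

#include <algorithm>
#include <future>
#include <iterator>

// Parallel merge sort sketch: recurse on the two halves in parallel, then
// merge. Falls back to std::sort below a cutoff so we don't spawn a task
// for every handful of elements.
template <typename It>
void parallel_merge_sort(It first, It last, int depth = 3) {
    const auto n = std::distance(first, last);
    if (n < 2) return;
    if (depth <= 0 || n < 10000) {   // arbitrary cutoff: sort small ranges serially
        std::sort(first, last);
        return;
    }
    It mid = first + n / 2;
    // Left half in another thread, right half in this one.
    auto left = std::async(std::launch::async,
                           [=] { parallel_merge_sort(first, mid, depth - 1); });
    parallel_merge_sort(mid, last, depth - 1);
    left.wait();
    std::inplace_merge(first, mid, last);   // the merge step (still serial here)
}

Note that the final std::inplace_merge still runs on a single thread; the answers below discuss why that matters.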
Every divide and conquer algorithm can be quite easily parallelised. Merge sort and quicksort both follow the same basic schema which can be run in parallel:
procedure DivideAndConquer(X)
    if X is a base case then
        Process base case X
        return
    Divide X into [Y0 … Yn[
    for Y ∈ [Y0 … Yn[ in parallel do
        DivideAndConquer(Y)
    Merge [Y0 … Yn[ back into X
Where they differ is that in quicksort, the division is difficult and merging is trivial (no operation). In merge sort, it’s the other way round: dividing is trivial and merging is difficult.
If you implement the above schema, quicksort is actually easier to parallelise because you can just forget about the merge step. For merge sort, you need to keep track of finished parallel tasks. This screws up the load balancing.
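Following that schema, a naive parallel quicksort could look like this sketch (the pivot choice, cutoff, and use of std::async are placeholders of mine): partition serially, recurse on both sides in parallel, and there is simply no merge step to manage.

#include <algorithm>
#include <future>
#include <iterator>

// Naive parallel quicksort per the schema above: the partition (the "divide")
// is still serial, but both recursive calls run in parallel and there is no
// merge step at all.
template <typename It>
void parallel_quick_sort(It first, It last, int depth = 3) {
    const auto n = std::distance(first, last);
    if (n < 2) return;
    if (depth <= 0 || n < 10000) {   // arbitrary cutoff: small ranges stay serial
        std::sort(first, last);
        return;
    }
    const auto pivot = *(first + n / 2);   // placeholder pivot choice
    It mid = std::partition(first, last,
                            [&](const auto& x) { return x < pivot; });
    // Degenerate partitions just shrink depth and fall back to std::sort.
    auto left = std::async(std::launch::async,
                           [=] { parallel_quick_sort(first, mid, depth - 1); });
    parallel_quick_sort(mid, last, depth - 1);
    left.wait();
}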
On the other hand, if you follow the above schema, you’ve got a problem: the very first division, and the very last merging, will only use a single processor and all other processors will be idle. Thus it makes sense to parallelise these operations as well. And here we see that parallelising the partitioning step in quicksort is much harder than parallelising the merge step in merge sort.
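The "merging is easier to parallelise" part can be sketched too (everything below is my own illustration, assuming random-access iterators): split the larger sorted input at its median, binary-search that value in the other input, and the two resulting pairs of sub-ranges can then be merged independently into disjoint parts of the output.

#include <algorithm>
#include <future>
#include <iterator>

// Parallel merge sketch: the two halves of the output are produced by two
// independent (and themselves further parallelisable) merges.
template <typename It, typename Out>
void parallel_merge(It a_first, It a_last, It b_first, It b_last,
                    Out out, int depth = 2) {
    const auto na = std::distance(a_first, a_last);
    const auto nb = std::distance(b_first, b_last);
    if (depth <= 0 || na + nb < 10000) {   // small merges stay serial
        std::merge(a_first, a_last, b_first, b_last, out);
        return;
    }
    if (na < nb) {                         // make A the larger range
        parallel_merge(b_first, b_last, a_first, a_last, out, depth);
        return;
    }
    It a_mid = a_first + na / 2;
    It b_mid = std::lower_bound(b_first, b_last, *a_mid);   // split point in B
    Out out_mid = out + (a_mid - a_first) + (b_mid - b_first);
    auto left = std::async(std::launch::async, [=] {
        parallel_merge(a_first, a_mid, b_first, b_mid, out, depth - 1);
    });
    parallel_merge(a_mid, a_last, b_mid, b_last, out_mid, depth - 1);
    left.wait();
}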
A merge sort seems like it would be easier to parallelize and distribute... think about it: you're breaking it up into clean subproblems that can easily be divided and distributed. But then again, the same is true of quicksort. However, I would probably prefer doing it with merge sort, as it would likely be easier.
Assuming a decent pivot selection, it's not all that different.
Subproblems are trivial to parallelize; they use (mostly) disjoint memory and need no synchronization, so the actual difference lies in the bottlenecks: the initial partition of quick-sort vs. the final merge in merge-sort. Neglecting to parallelize these will result in bad speedups for many cores or few elements (This gets noticeable a lot faster than you might think!).
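A rough back-of-the-envelope way to see it (my own numbers, assuming the final merge is the only serial part): a merge sort of n elements does on the order of n·log2(n) element moves in total, and the last, serial merge accounts for roughly n of them, i.e. a serial fraction of about 1/log2(n). By Amdahl's law the speedup is then capped near log2(n): for n = 2^20 that is only about 20x no matter how many cores you have, and for smaller inputs the cap is correspondingly lower.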
Both algorithms can be parallelized efficiently. See this MCSTL paper for some experimental results and implementation details. The MCSTL was the base for what is now the GNU C++ std-lib parallel mode.
It's not at all clear which algorithm will perform better in all circumstances, as it depends on the data distribution and on whether swaps or comparisons are slower.
I think they are looking for merge sort as the answer, since it is easy to see how to split it between threads. Though, as another comment indicates, quicksort can also be split into smaller problems, and likely many sorting algorithms can.
There is one critical aspect that cannot be ignored: communicating with the other threads takes a lot of time. The data set you are sorting has to be huge, or very expensive to compare, before creating the threads and doing the communication between them will beat just using a single thread.
Further to this, with any sort you have a serious problem of false sharing. Having multiple threads work on the same data can (communication time notwithstanding) be slower, as the cores are forced to keep the cache lines they share coherent with each other. Unless your algorithm can properly align the data, handing it off to various threads will slow it down.
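As an illustration of the alignment point (the 64-byte cache-line size and the layout are assumptions of mine, and the sketch also assumes the vector's storage itself starts on a cache-line boundary), one way to avoid false sharing at the seams between chunks is to round each thread's chunk up to whole cache lines:

#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

// Split the array so that every chunk boundary falls on a cache-line
// boundary; neighbouring threads then never write into the same line.
constexpr std::size_t kCacheLine   = 64;                    // assumed line size
constexpr std::size_t kIntsPerLine = kCacheLine / sizeof(int);

void sort_chunks(std::vector<int>& data, unsigned threads) {
    std::size_t chunk = std::max<std::size_t>(1, data.size() / threads);
    chunk = ((chunk + kIntsPerLine - 1) / kIntsPerLine) * kIntsPerLine;  // round up

    std::vector<std::thread> workers;
    for (std::size_t begin = 0; begin < data.size(); begin += chunk) {
        const std::size_t end = std::min(begin + chunk, data.size());
        workers.emplace_back([&data, begin, end] {
            std::sort(data.begin() + begin, data.begin() + end);
        });
    }
    for (auto& w : workers) w.join();
    // The sorted chunks still have to be merged afterwards.
}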