
Quicksort's iterator requirements

tl;dr: Is it possible to implement quicksort on a doubly linked list efficiently? My understanding before thinking about it was: no, it's not.

The other day I had an opportunity to consider the iterator requirements for the basic sorting algorithms. The basic O(N²) ones are fairly straightforward.

  • Bubble sort - 2 forward iterators will do nicely, one dragging after the other.
  • Insertion sort - 2 bidirectional iterators will do. One for the out-of-order element, one for the insertion point (see the sketch after this list).
  • Selection sort - A little bit trickier but forward iterators can do the trick.
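
For instance, a minimal insertion sort over bidirectional iterators might look like the following sketch (my own illustration, not from the original post):

#include <algorithm> // std::iter_swap
#include <iterator>  // std::next, std::prev

// Sketch: insertion sort with two bidirectional iterators.
// 'cur' marks the next out-of-order element; 'ins' walks backwards
// to the insertion point, swapping adjacent elements on the way.
template <typename BidirIt>
void insertion_sort(BidirIt first, BidirIt last)
{
    if (first == last)
        return;
    for (BidirIt cur = std::next(first); cur != last; ++cur) {
        for (BidirIt ins = cur; ins != first; ) {
            BidirIt prev = std::prev(ins);
            if (!(*ins < *prev))
                break;                 // already in position, stop early
            std::iter_swap(ins, prev); // move the element one step back
            ins = prev;
        }
    }
}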

Quicksort

The __introsort_loop in std::sort (as in the GNU standard library / HP (1994) / Silicon Graphics (1996)) requires random access iterators.

__introsort_loop(_RandomAccessIterator __first,
         _RandomAccessIterator __last,
         _Size __depth_limit, _Compare __comp)

As I have come to expect.

Now upon closer inspection I can't find the real reason to require this for quicksort. The only thing that explicitly requires random access iterators is the std::__median call, which needs the middle element to be calculated. Regular, vanilla quicksort does not calculate the median.

The partitioning loop contains the check

 if (!(__first < __last))
    return __first;

Not really a useful check for bidirectional iterators. However, one should be able to replace it with a check during the partitioning scans themselves (left to right / right to left), using the simple condition

if ( __first == __last ) this_partitioning_is_done = true;
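
For illustration, an equality-driven partition could take the shape of the classic bidirectional-iterator std::partition (an untested sketch, names mine):

#include <algorithm> // std::iter_swap

// Sketch: Hoare-style partition using only ==/!= on the iterators,
// in the shape of the classic bidirectional std::partition.
template <typename BidirIt, typename Pred>
BidirIt partition_bidir(BidirIt first, BidirIt last, Pred pred)
{
    while (true) {
        // scan forward over elements already on the correct (left) side
        while (true) {
            if (first == last)
                return first;
            if (!pred(*first))
                break;
            ++first;
        }
        // scan backward over elements already on the correct (right) side
        do {
            --last;
            if (first == last)
                return first;
        } while (!pred(*last));
        std::iter_swap(first, last); // both misplaced: swap them
        ++first;
    }
}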

Is it possible to implement quicksort fairly efficiently using only bidirectional iterators? The recursive depth can still be guarded.

NB. I have not yet attempted an actual implementation.


You need random access iterators because you typically want to pick the pivot element from the middle of the list. If you choose the first or last element as the pivot instead, bidirectional iterators are sufficient, but then quicksort degenerates to O(n²) for pre-sorted lists.


tl;dr: Yes

As you say, the problem is to find the pivot element, which is the element in the middle; finding it with random access takes O(1), while finding it with bidirectional iterators takes O(n) (n/2 operations, to be precise). However, in each step you have to create two sub-containers, the left and the right, containing the smaller and the bigger elements respectively. This is where the main work of quicksort takes place, right?

Now, when building the sub-containers (for the recursion step), my approach would be to keep an iterator h pointing at each one's front element. Then, whenever you move an element into a sub-container, advance its h every second time. That way h will point at the sub-container's middle element, i.e. its pivot, once you are ready to descend into the next recursion step.
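
A sketch of that bookkeeping, assuming std::list and names of my own choosing (untested):

#include <cstddef>
#include <list>

// Sketch: while splicing elements into a sub-container, advance 'mid'
// on every second insertion so it ends up at the middle element,
// which serves as the pivot of the next recursion step.
template <typename T>
struct sub_container {
    std::list<T> elems;
    typename std::list<T>::iterator mid; // the future pivot
    std::size_t count = 0;

    void take_front(std::list<T>& src)
    {
        elems.splice(elems.end(), src, src.begin()); // relink, no copying
        if (count == 0)
            mid = elems.begin(); // first element: mid starts at the front
        else if (count % 2 == 0)
            ++mid;               // advance every second time
        ++count;
    }
};

During one partition pass you would then call left.take_front(src) or right.take_front(src) depending on the comparison with the current pivot, and left.mid / right.mid are ready for the recursive calls.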

This way you only have to search for the very first pivot, which does not really matter, because O(n log n + n/2) = O(n log n).

Actually this is just a runtime optimisation with no impact on the complexity: whether you iterate over the list once (to put each value into its sub-container) or twice (once to find the pivot, once to distribute the values) is all the same, since O(2n) = O(n).
It's simply a question of execution time, not complexity.


There's absolutely no problem with implementing the quicksort strategy on a doubly-linked list. (I think it can easily be adapted to a singly-linked list as well.) The only place in the traditional quicksort algorithm that actually depends on the random-access requirement is the setup phase, which uses something "tricky" to select the pivot element. In reality, all these "tricks" are nothing more than heuristics that can be replaced with pretty much equally effective sequential methods.

I have implemented quicksort for linked lists before. There's nothing special about it; you just need to pay close attention to the proper element relinking. As you probably understand, a large part of the value of list-sorting algorithms comes from the fact that you can reorder elements by relinking instead of explicit value-swapping. Not only can that be faster, it also (and often more importantly) preserves the validity of any external references attached to the elements of the list.
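
To make the relinking idea concrete, here is a minimal sketch on std::list (my own code, not the implementation mentioned above; it naively uses the front element as the pivot):

#include <iterator> // std::next
#include <list>

// Sketch: quicksort on std::list, reordering purely by relinking nodes
// with splice(), so element addresses (and external references) survive.
template <typename T>
void quicksort_list(std::list<T>& lst)
{
    if (lst.size() < 2)
        return;

    std::list<T> pivot;
    pivot.splice(pivot.begin(), lst, lst.begin()); // unlink the pivot node
    const T& p = pivot.front();

    std::list<T> less;
    for (auto it = lst.begin(); it != lst.end(); ) {
        auto next = std::next(it); // splice would move 'it' into 'less'
        if (*it < p)
            less.splice(less.end(), lst, it);
        it = next;
    }
    // 'lst' now holds the elements >= p
    quicksort_list(less);
    quicksort_list(lst);

    lst.splice(lst.begin(), pivot); // pivot in front of the >= part
    lst.splice(lst.begin(), less);  // smaller elements first
}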

P.S. However, I'd say that for linked lists the merge-sort algorithm results in a significantly more elegant implementation with equally good performance (unless you are dealing with some cases that specifically favour quicksort).
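
For comparison, the merge-sort version practically writes itself out of splice() and merge() (again just a sketch; std::list::sort already does this properly):

#include <iterator> // std::next
#include <list>

// Sketch: merge sort on std::list, splitting with splice() and
// recombining with the relink-based merge().
template <typename T>
void mergesort_list(std::list<T>& lst)
{
    if (lst.size() < 2)
        return;

    std::list<T> back;
    auto mid = std::next(lst.begin(), lst.size() / 2);
    back.splice(back.begin(), lst, mid, lst.end()); // split off second half

    mergesort_list(lst);  // sort first half
    mergesort_list(back); // sort second half
    lst.merge(back);      // stable merge by relinking
}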

