
What is an appropriate sort algorithm for an embedded system?

I'm developing the software for an embedded system, and I need to implement a sorting routine, and I'm having trouble choosing an optimal solution. My requirements are as follows:

  1. Because this is a very memory-limited system, space complexity is a primary factor.
  2. Because the number of elements to sort will generally be small, and the sorting will happen only occasionally, time complexity is not necessarily a primary factor.
  3. A stable algorithm is a requirement for my application.
  4. Because this is an embedded system, code size is a factor.
  5. There is no guarantee that the data will initially be in a nearly-sorted order.

I've considered the following algorithms:

  • bubble sort (yes, even though I'm ashamed to say it)
  • gnome sort
  • insertion sort
  • in-place merge sort (though it seems to me that this is more ideal with linked lists than arrays?)

While the answer (for my exact circumstances) may very well be, "uh, duh, it doesn't really matter, use bubble sort for all we care", that answer is not very useful. In general, what sort algorithms are useful on embedded systems?


Insertion sort is fine too: it runs fast in practice, is stable, and is in-place. It's very much related to gnome sort; insertion sort is faster in practice, but gnome sort's code is a bit smaller and it takes less auxiliary space (not in terms of big O — the difference is in the constant).

edit: yeah, I messed up a bit and got them reversed; I probably shouldn't answer questions before I've had my coffee. The answer previously said that insertion sort has less code and space than gnome sort.


Don't be ashamed of bubble sort, it has its place. If your data set is of a small size, it's easy to code up and is stable if you do it right (don't ever swap equal elements).

It can also be blindingly fast if your data is mostly sorted, especially if you alternate directions on each pass. I know you said it won't initially be near-sorted; I'm talking about the possibility that it becomes so if you sort and then persist the data. In either case, if the data set is small, it doesn't really matter that it's totally unsorted.

This is especially the case if, as you mention in a comment on another answer, your data set size is around 11. Short of an algorithm deliberately designed to be horrendous, any algorithm will easily handle such a size fast enough.

If your environment doesn't offer a stable sort, I'd just go with the bubble sort, given your constraints and properties.


In fact, using the following program along with the time utility, I found that the CPU times used by qsort and a bubble sort only start to diverge once the element count gets up to 10,000.

And, even then, the bubble sort took less than half a second. Unless you're going to be doing many sorts per second, that will be irrelevant.

#include <stdio.h>
#include <stdlib.h>

static int data[10000];
#define SZDATA (sizeof (*data))
#define NMDATA (sizeof (data) / sizeof (*data))

int compfn (const void *a, const void *b) {
    if (*((int*)a) > *((int*)b))
        return 1;
    if (*((int*)a) < *((int*)b))
        return -1;
    return 0;
}

int main (void) {
    int i, tmp, swapped, count;

    for (i = 0; i < NMDATA; i++)
        data[i] = (i * 3) % 11;

    if (0) {
        qsort (data, NMDATA, SZDATA, compfn);
    } else {
        /* bubble sort: stable, since only strictly out-of-order
           pairs are swapped */
        swapped = 1;
        count = NMDATA;
        while (swapped) {
            swapped = 0;
            for (i = 1; i < count; i++) {
                if (data[i] < data[i-1]) {
                    tmp = data[i];
                    data[i] = data[i-1];
                    data[i-1] = tmp;
                    swapped = 1;
                }
            }
            count--;    /* the largest element has bubbled to the end */
        }
    }

    //for (i = 0; i < NMDATA; i++)
        //printf ("%10d\n", data[i]);

    return 0;
}

By varying the size of the data array and the if (0) statement, I got the following results (in milliseconds), five runs for each case:

Data size | qsort | bubble
----------+-------+-------
      100 |    61 |     76
          |    76 |     76
          |    77 |     61
          |    61 |     60
          |    61 |     61  avg qsort = 67, bubble = 67

     1000 |    77 |     93
          |    61 |     45
          |    76 |     77
          |    77 |     76
          |    76 |     77  avg qsort = 73, bubble = 74
          |       |
    10000 |    92 |    414
          |    77 |    413
          |    61 |    413
          |    76 |    405
          |    61 |    421  avg qsort = 73, bubble = 413

I suspect the bubble sort is faster for small data sets because it avoids the function-call overhead of qsort's comparator.

The thing to take away from this is that, for smaller data sets, the efficiency of the algorithm is often unimportant; big-O behaviour only becomes relevant as the data sets grow larger.

However, this test was done in my environment and yours may vary considerably. I would suggest performing the same measurements in your environment - implement a bubble sort and only consider moving to a more complex algorithm if and when it becomes necessary.


At the suggestion of a commenter, I re-ran the tests using srand(42) followed by rand() to populate the array elements. In that case, the results were only slightly worse for the bubble sort, 642 vs 413 milliseconds for 10,000 elements and 82 vs 74 milliseconds for 1,000.

Given the constraints in the question (small element count, infrequent sorting, stability requirement, low space complexity), I still prefer the simplicity of bubble sort to any of the more complex algorithms.

Still, remember my earlier advice: you need to time this in your own environment. The results may be vastly different. You can use the code I've provided as a baseline for doing that. Seriously, if whatever method you choose takes less than a second, it's probably more than adequate for your purposes.


Whether a system is embedded or not does not really matter. What matters are the very factors you list: code size constraints, available RAM, required speed, and number of elements.

With what you have outlined, a bubble sort is a perfectly acceptable solution: it is small, predictable, easy on memory, and very easy to implement and debug. I saw a proof in the early 1980s concluding that bubble sort is time-optimal for n ≤ 11. It's possible that modern quicksorts alter that slightly, but I doubt the break-even point could be reduced by much.

To make an unstable sort algorithm stable, add a secondary sort key containing the element's original position, and refer to the secondary key only if the primary is a tie.


Remove the "on embedded systems" and change the question to "In general, what sort algorithms are useful?" Then just try them! What have you tried so far? What was the memory consumption, the execution time, and the code size?

Embedded or desktop, you should be performing those experiments and asking those questions. There is no general answer because there are no general requirements; each solution has its place depending on the requirements, and one size fits all fits no one well. The real question is what your requirements are; then you worry about implementation and meeting them. Is a sort even required, or could this be solved a different way? Where does this sort fit into your overall performance numbers? What are the other long poles in the tent?

Just like on a desktop, code them up, try them, and see what you come up with. There is never a reason to be embarrassed about mentioning a bubble sort. If your requirements carry no performance or size numbers, a bubble sort is a good choice: everyone can read and understand the code, it is simple and maintainable, and it does not rely on libraries or library versions. Not one bad thing can be said about it (if there is no performance requirement).


Sorting in Embedded Systems, an article by Nigel Jones on Embedded Gurus, is well worth a look.

He concludes that shell sort is his sorting algorithm of choice based on code size and performance.


Constraints on embedded systems vary with the target platform and application requirements; choosing an algorithm is often a trade-off between competing factors. The factors you need to consider may include:

  • Deterministic execution time
  • Deterministic memory usage
  • Minimisation of execution time
  • Minimisation of memory use
  • Code size

For example, you may have an algorithm that on average uses little memory, but whose memory usage does not grow linearly with the number of items to sort. You may then be better off with an algorithm that on average uses a little more memory, but whose usage is bounded or linear. Similarly for execution speed: some algorithms' run time depends heavily on the initial order of the data, while others are less dependent on, or invariant to, the initial order.

You may, as seems likely in this case, prioritise memory usage over execution speed.

So the answer is that there is no single answer, it depends on your specific requirements and constraints. This useful comparison table should be used to help select the most appropriate algorithm in your case.

All that said, for the purposes of expediting development, it is always worth first trying the standard qsort() function supplied with your compiler's library. If it works for you and meets your constraints, no further work is required.


Check sort.c in
https://github.com/LouayAlsakka/embeddedlib.git

It is O(N), has no extra memory requirements besides the list to be sorted, and has a very small footprint (a few hundred bytes of code with 16-bit instructions).

It reads the list N × (number of bits) × 2 times, so for uint32_t it reads it 64 × N times. It can also easily be extended to int and string sorting.
