How to spread processes over time to get the minimum number of "collisions"
I'm developing a scheduler for an embedded system. This scheduler will call each process every X milliseconds; this time can be configured separately for each process, of course.
Everything is coded and calls every process as it should; the problem I'm facing is this: Imagine I set 4 processes to be called every 10, 15, 5 and 30 milliseconds respectively:
A: 10ms
B: 15ms
C: 5ms
D: 30ms
The resulting calls over time (which processes run at each instant) look like this:

time: 0    5    10   15   20   25   30  ms
A:              A         A         A
B:                   B              B
C:         C    C    C    C    C    C
D:                                  D
The problem is that at 30 ms all four processes come due at the same moment (and run one after another), which can delay correct execution from that point on.
This could be mitigated by adding an initial delay (offset) to each process while preserving its calling period, so that the call times stop coinciding. My problem is that I don't know how to calculate the offset to apply to each process so that the number of collisions is minimized.
Is there any known algorithm for this, or some mathematical guidance?
Thank you.
Given a set of intervals, you can find the time at which the start times coincide (assuming no offsets) by taking the least common multiple, as Jason mentioned in a comment on your post. You can compute the LCM from the prime factorization of the intervals.
It seems, though, that the greatest common divisor (also called the greatest common factor, GCF) may be the more useful number to compute. It gives you the interval at which repeats happen. In your example, the GCF is 5. With a GCF of 5, you can add an initial offset of 1, 2, 3, etc. to each task to avoid overlapping start times; up to 5 tasks can be given start times that never overlap. With a GCF of 20, you could schedule up to 20 tasks with no overlapping start times. If two (or more) tasks have relatively prime intervals (GCF = 1), then an overlap will definitely occur no matter what offsets you use, as long as the intervals never change.
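To make that concrete, here is a minimal C sketch; the task list mirrors the question's periods, and giving task i the offset i % g is just one illustrative policy:

#include <stdio.h>

/* Greatest common divisor via Euclid's algorithm. */
static unsigned gcd(unsigned a, unsigned b)
{
    while (b != 0) {
        unsigned t = a % b;
        a = b;
        b = t;
    }
    return a;
}

int main(void)
{
    /* Periods from the question: A=10, B=15, C=5, D=30 ms. */
    unsigned periods[] = { 10, 15, 5, 30 };
    size_t n = sizeof periods / sizeof periods[0];

    /* GCF of all periods: the grid on which start times can repeat. */
    unsigned g = periods[0];
    for (size_t i = 1; i < n; i++)
        g = gcd(g, periods[i]);

    /* Distinct offsets 0..g-1 keep up to g tasks from ever sharing a
       start time, since every period is a multiple of g. */
    for (size_t i = 0; i < n; i++)
        printf("task %zu: period %u ms, offset %zu ms\n",
               i, periods[i], i % g);
    return 0;
}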
There is no perfect solution for this; they will collide from time to time. I would suggest adding a tiny (0.01-0.1 ms) random value to each cycle length, so that in the long run the tasks are only rarely called at the same time.
Alternatively, if you have 5 ms scheduler granularity, the first thread is always called at X+1 ms, the second at X+2, etc., so each is always guaranteed 1 ms of uninterrupted run (with 10 threads it would be X+0.5, X+1, X+1.5, and so on). But this might get quite tricky to implement.
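A minimal sketch of the random-jitter idea, assuming the scheduler tracks next-call times in microseconds (the helper name next_call_us is made up for this example):

#include <stdlib.h>

/* Schedule the next call one period from now, plus a tiny random jitter
   (10-100 us, i.e. 0.01-0.1 ms) so the tasks' phases drift apart instead
   of re-aligning at every common multiple of the periods. */
static unsigned long next_call_us(unsigned long now_us, unsigned long period_us)
{
    unsigned long jitter_us = 10 + (unsigned long)(rand() % 91);
    return now_us + period_us + jitter_us;
}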
This kind of problem relates directly to the domain of real-time programming and scheduling algorithms. I took a class on this subject in college, and if I remember well, Rate-monotonic scheduling is the kind of algorithm you are looking for.
The idea is that you assign each job a priority inversely proportional to its period, i.e. the smaller the period, the higher the priority. This works best if your jobs are preemptible, so you can interrupt them and resume them later.
There are other alternatives, such as EDF (earliest deadline first), but those are dynamic scheduling algorithms (i.e. priorities are assigned during execution).
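As a rough sketch of rate-monotonic priority assignment in C (the struct task layout and field names are assumptions made for this example):

#include <stdlib.h>

struct task {
    unsigned period_ms;
    unsigned priority;   /* 0 = highest */
};

/* qsort comparator: shorter period sorts first. */
static int by_period(const void *a, const void *b)
{
    const struct task *x = a;
    const struct task *y = b;
    return (x->period_ms > y->period_ms) - (x->period_ms < y->period_ms);
}

/* Rate-monotonic assignment: sort by period, then number the tasks so
   the shortest period ends up with the highest priority. */
static void assign_rm_priorities(struct task *tasks, size_t n)
{
    qsort(tasks, n, sizeof *tasks, by_period);
    for (size_t i = 0; i < n; i++)
        tasks[i].priority = (unsigned)i;
}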
The easy solution is to change the schedule on which you call the subroutines. That is, instead of 5, 10, 15, and 30 ms, can you live with, e.g., 5, 15, 15, and 30 ms? Then you can use the following pattern (A = 5 ms proc, B, C = 15 ms proc, D = 30 ms proc):
A: A  A  A  A  A  A  A  A  A  ...   (every 5 ms)
B: B        B        B        ...   (every 15 ms, phase 0)
C:    C        C        C     ...   (every 15 ms, phase 5)
D:       D                 D  ...   (every 30 ms, phase 10)
I'm sure you can generalize this idea, but it works only if you can actually change the static intervals.
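For illustration, here is a minimal tick-based dispatch over (period, phase) pairs in C; the phase values are one choice that reproduces the staggered pattern above, and the names are made up for the sketch:

#include <stdio.h>

struct slot { const char *name; unsigned period_ms, phase_ms; };

int main(void)
{
    /* Harmonic periods (5, 15, 15, 30 ms) with staggered phases, so no
       two of B, C, D ever share the same 5 ms tick. */
    struct slot tab[] = {
        { "A",  5,  0 },   /* every tick      */
        { "B", 15,  0 },   /* 0, 15, 30, ...  */
        { "C", 15,  5 },   /* 5, 20, 35, ...  */
        { "D", 30, 10 },   /* 10, 40, 70, ... */
    };
    size_t n = sizeof tab / sizeof tab[0];

    for (unsigned t = 0; t <= 45; t += 5) {   /* simulate a few ticks */
        printf("%2u ms:", t);
        for (size_t i = 0; i < n; i++)
            if (t >= tab[i].phase_ms &&
                (t - tab[i].phase_ms) % tab[i].period_ms == 0)
                printf(" %s", tab[i].name);
        printf("\n");
    }
    return 0;
}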
If you can't change the intervals, and you also need to obey them strictly, then you are kind of out of luck, as there are no parameters left to change :)