
When should recurring software timers fire in relation to their previous timeout?

I think this is one of those "vi vs. emacs" type of questions, but I will ask anyway as I would like to hear people's opinions.

Oftentimes in an embedded system, the microcontroller has a hardware timer peripheral that provides a timing base for a software timer subsystem. This subsystem allows the developer to create an arbitrary number of timers (constrained by system resources) that can be used to generate and manage events in the system. The way the software timers are typically managed is that the hardware timer is set up to generate an interrupt at a fixed interval (or sometimes only when the next active timer will expire). In the interrupt handler, a callback function is called to do things specific to that timer. As always, these callback routines should be very short, since they run in interrupt context.
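To make that arrangement concrete, here is a minimal sketch of such a subsystem, assuming a hypothetical sw_timer_t table and a hardware tick ISR that calls sw_timer_tick() once per tick; the names and the fixed-rate reload policy are illustrative, not taken from any particular library:

#include <stdint.h>
#include <stdbool.h>

#define MAX_SW_TIMERS 8

typedef void (*timer_cb_t)(void);

typedef struct {
    bool       active;
    uint32_t   period_ticks;   /* reload value, must be >= 1 */
    uint32_t   remaining;      /* ticks until expiry */
    timer_cb_t callback;       /* kept short: it runs in interrupt context */
} sw_timer_t;

static sw_timer_t timers[MAX_SW_TIMERS];

/* Create a recurring software timer; returns its index, or -1 if the table is full.
   (A real implementation would protect this against the tick ISR.) */
int sw_timer_start(uint32_t period_ticks, timer_cb_t cb)
{
    for (int i = 0; i < MAX_SW_TIMERS; i++) {
        if (!timers[i].active) {
            timers[i].period_ticks = period_ticks;
            timers[i].remaining    = period_ticks;
            timers[i].callback     = cb;
            timers[i].active       = true;
            return i;
        }
    }
    return -1;
}

/* Called from the hardware timer interrupt handler, once per tick. */
void sw_timer_tick(void)
{
    for (int i = 0; i < MAX_SW_TIMERS; i++) {
        if (timers[i].active && --timers[i].remaining == 0) {
            timers[i].remaining = timers[i].period_ticks;  /* fixed-rate reload */
            timers[i].callback();
        }
    }
}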

Let's say I create a timer that fires every 1ms, and its callback routine takes 100us to execute, and this is the only thing of interest happening in the system. When should the timer subsystem schedule the next handling of this software timer? Should it be 1ms from when the interrupt occurred, or 1ms from when the callback is completed?

To make things more interesting, say the hardware developer comes along and says that in certain modes of operation, the CPU speed needs to be reduced to 20% of maximum to save power. Now the callback routine takes 500us instead of 100us, but the timer's interval is still 1ms. Assume that this increased latency in the callback has no negative effect on the system in this standby mode. Again, when should the timer subsystem schedule the next handling of this software timer? T+1ms or T+500us+1ms?

Or perhaps in both cases it should split the difference and be scheduled at T+(execution_time/2)+1ms?
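For a sense of how much the choice matters with the numbers above, here is a quick back-of-the-envelope comparison (plain C, just arithmetic on the question's figures):

#include <stdio.h>

int main(void)
{
    /* The question's standby-mode numbers, in microseconds */
    unsigned interval_us = 1000;   /* 1ms timer interval */
    unsigned exec_us     = 500;    /* callback time at 20% CPU speed */

    unsigned fixed_rate = interval_us;               /* T + 1ms          */
    unsigned fixed_gap  = exec_us + interval_us;     /* T + 500us + 1ms  */
    unsigned split      = exec_us/2 + interval_us;   /* T + 250us + 1ms  */

    printf("fixed rate: every %uus -> %u callbacks/sec\n", fixed_rate, 1000000u/fixed_rate);   /* 1000 */
    printf("fixed gap : every %uus -> ~%u callbacks/sec\n", fixed_gap, 1000000u/fixed_gap);    /* ~666 */
    printf("split     : every %uus -> %u callbacks/sec\n", split, 1000000u/split);             /* 800  */
    return 0;
}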


In a real-time OS, both timers and delays are synchronised to the system tick, so if the event processing takes less than one timer tick and starts on a timer tick boundary, there would be no scheduling difference between using a timer and using a delay.

If, on the other hand, the processing took more than one tick, you would require a timer event to ensure deterministic, jitter-free timing.

In most cases determinism is important or essential, and it makes system behaviour more predictable. If timing were measured from the end of processing, variability in the processing (either static, through code changes, or at run time, through differing execution paths) might lead to variable behaviour and untested corner cases that are hard to debug or may cause system failure.
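The difference shows up directly in how the next deadline is computed. A minimal sketch of the two update rules, assuming a free-running tick counter maintained elsewhere (the names are illustrative):

#include <stdint.h>

extern volatile uint32_t tick_count;   /* incremented by the hardware timer interrupt */
#define PERIOD_TICKS 10u               /* e.g. 1ms at a 10kHz tick */

static uint32_t next_deadline;

/* Fixed rate: the deadline advances by exactly one period each time, so the
   long-run rate is exact and execution-time variability does not accumulate,
   as long as processing finishes before the next deadline. */
void reschedule_fixed_rate(void)
{
    next_deadline += PERIOD_TICKS;
}

/* Fixed gap: the deadline is measured from the end of processing, so every
   variation in execution time (code changes, different paths, a slower clock)
   shifts all later deadlines and accumulates as drift. */
void reschedule_fixed_gap(void)
{
    next_deadline = tick_count + PERIOD_TICKS;
}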


I would have the hardware timer fire every 1ms. I've never heard of a hardware timer taking such a quick routine into account. Especially since you would have to recalculate every time there was a software change, or figure out what to do when the CPU changes clock speeds, or figure out what to do if you decide to upgrade/downgrade the CPU you're using.


Adding another couple of reasons to what is at this point the consensus answer (the timer should fire every 1ms):

  • If the timer fires every 1ms, and what you really want is a 1ms gap between executions, you can reset the timer at the exit of your callback function to fire 1ms from that point (a sketch of this appears after this answer).

  • However, if the timer fires 1ms after the callback function exits, and you want the other behavior, you are kind of stuck.

Further, it's far less complicated in the hardware to fire every 1ms. To do that, it just generates events and resets, and there's no feedback from the software back to the timer except at the point of setup. If the timer is leaving 1ms gaps, there needs to be some way for the software to signal to the timer that it's exiting the callback.

And you should certainly not "split the difference". That's doing the wrong thing for everyone, and it's even more obnoxious to work around if someone wants to make it do something else.
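As a sketch of the first bullet above: keep the timer firing on its own fixed schedule (or create it as a one-shot), and get the gap behaviour by re-arming it at the end of the callback. The sw_timer_restart() call is hypothetical, standing in for whatever re-arm primitive the subsystem provides:

#include <stdint.h>

#define PERIOD_TICKS 10u   /* 1ms at a 10kHz tick, for example */

/* Hypothetical API: re-arm a one-shot software timer to fire 'ticks' from now. */
void sw_timer_restart(int timer_id, uint32_t ticks);

void do_work(void);        /* the actual 100us/500us of processing */

static int my_timer;       /* id returned when the timer was created */

/* To get a 1ms *gap* between executions, re-arm at the very end of the callback.
   To get a 1ms *rate* instead, make the timer periodic and drop the last line,
   which is why the fixed-rate default is the more flexible starting point. */
void my_callback(void)
{
    do_work();                                 /* keep it short: interrupt context */
    sw_timer_restart(my_timer, PERIOD_TICKS);  /* next fire = now + 1ms */
}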


My inclination is for the default behavior to be that a routine starts at intervals that are as nearly uniform as practical, and that a routine which is running late tries to "catch up", within limits. Sometimes a good pattern can be something like:

/* Assume 32,768Hz interrupt, and that we want foo() to execute 1024x/second */

#define EVENT_INTERVAL    32   /* 32768 ticks/sec / 1024 executions/sec */
#define EVENT_MAX_BACKLOG  4   /* example: never try to catch up more than this many intervals */
#define EVENT_MIN_SPACING  8   /* example: minimum ticks between one run and the next */

typedef unsigned short ui; /* Use whatever size int works out best */
ui current_ticks;          /* 32768Hz ticks */

ui next_scheduled_event;   /* Where the event falls on the ideal uniform schedule */
ui next_event;             /* When the event will actually be allowed to run */

void foo(void);            /* The routine we want to run 1024x/second */

void interrupt_handler(void)
{
  ui delta;
  current_ticks++;
  ...
  if ((ui)(current_ticks - next_event) < 32768u)   /* next_event is due (wrap-safe compare) */
  {
    delta = (ui)(current_ticks - next_scheduled_event);
    if (delta > EVENT_INTERVAL*EVENT_MAX_BACKLOG)  /* We're too far behind -- don't even try to catch up */
    {
      delta = EVENT_INTERVAL*EVENT_MAX_BACKLOG;
      next_scheduled_event = current_ticks - delta;
    }
    next_scheduled_event += EVENT_INTERVAL;
    next_event = next_scheduled_event;

    foo();

    /* See how much time there is before the next event; if it's less than
       EVENT_MIN_SPACING, push the event out to enforce the minimum gap */
    delta = (ui)(next_event - current_ticks - EVENT_MIN_SPACING);
    if (delta >= 32768u)
      next_event = current_ticks + EVENT_MIN_SPACING;
  }
}

This code (untested) will run foo() at a uniform rate if it can, but will always allow at least EVENT_MIN_SPACING ticks between executions. If it is sometimes unable to run at the desired speed, it will run a few times with only EVENT_MIN_SPACING between executions until it has "caught up". If it gets too far behind, its attempts to play "catch up" will be limited.
