
Are Exceptions still undesirable in Realtime environment?

A couple of years ago I was taught that in real-time applications such as embedded systems or (non-Linux) kernel development, C++ exceptions are undesirable. (Maybe that lesson was from before gcc-2.95.) But I also know that exception handling has become better since then.

So, are C++-Exceptions in the context of real-time applications in practice

  • totally unwanted?
  • even to be switched off via compiler switch?
  • or very carefully usable?
  • or handled so well now, that one can use them almost freely, with a couple of things in mind?
  • Does C++11 change anything w.r.t. this?

Update: Does exception handling really require RTTI to be enabled (as one answerer suggested)? Are there dynamic casts involved, or similar?


Exceptions are now well handled, and the strategies used to implement them actually make them faster than testing return codes, because their cost (in terms of speed) is virtually zero as long as you do not throw any.

However, they do cost something: code size. Exceptions usually work hand in hand with RTTI, and unfortunately RTTI is unlike any other C++ feature in that you either activate or deactivate it for the whole project; once activated, it generates supplementary code for any class that has a virtual method, thus defying the "you don't pay for what you don't use" mindset.

Also, exception handling itself requires supplementary code.

Therefore the cost of exceptions should be measured not in terms of speed, but in terms of code growth.
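Whether you pay that code-size cost at all is usually a whole-project decision made with compiler switches (e.g. GCC and Clang's -fno-exceptions and -fno-rtti). Below is a minimal sketch, assuming those flags, of keeping one translation unit buildable either way; the function name and messages are mine, not from the question:

#include <cstdio>
#include <cstdlib>
#include <stdexcept>

// GCC and Clang leave __cpp_exceptions undefined when building with
// -fno-exceptions, so the throw below is compiled out in that mode.
int parse_level(const char* s) {
    if (s == nullptr || *s == '\0') {
#if defined(__cpp_exceptions)
        throw std::invalid_argument("empty level string");
#else
        std::fputs("empty level string\n", stderr);
        std::abort();   // no unwinding available, so fail hard
#endif
    }
    return std::atoi(s);
}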

EDIT:

From @Space_C0wb0y: This blog article gives a small overview and introduces the two widespread methods for implementing exceptions: Jumps and Zero-Cost. As the name implies, good compilers now use the Zero-Cost mechanism.

The Wikipedia article on exception handling talks about the two mechanisms; the Zero-Cost mechanism is the table-driven one.

EDIT:

From @Vlad Lazarenko, whose blog I referenced above: the presence of a potential exception throw might prevent a compiler from inlining and from optimizing code to keep values in registers.


Answer just to the update:

Does exception handling really require RTTI to be enabled

Exception-handling actually requires something more powerful than RTTI and dynamic cast in one respect. Consider the following code:

try {
    some_function_in_another_TU();   // defined in another translation unit; may throw any type
} catch (const int &i) {             // handler for a non-class type
} catch (const std::logic_error &e) {}

So, when the function in the other TU throws, it's going to look up the stack (either check all levels immediately, or check one level at a time during stack unwinding, that's up to the implementation) for a catch clause that matches the object being thrown.

To perform this match, it might not need the aspect of RTTI that stores the type in each object, since the type of a thrown exception is the static type of the throw expression. But it does need to compare types in an instanceof way, and it needs to do this at runtime, because some_function_in_another_TU could be called from anywhere, with any type of catch on the stack. Unlike dynamic_cast, it needs to perform this runtime instanceof check on types which have no virtual member functions, and for that matter types which are not class types. That last part doesn't add difficulty, because non-class types have no hierarchy, and so all that's needed is type equality, but you still need type identifiers that can be compared at runtime.

So, if you enable exceptions then you need the part of RTTI that does type comparisons, like dynamic_cast's type comparisons but covering more types. You don't necessarily need the part of RTTI that stores the data used to perform this comparison in each class's vtable, where it's reachable from the object -- the data could instead only be encoded at the point of each throw expression and each catch clause. But I doubt that's a significant saving, since typeid objects aren't exactly massive, they contain a name that's often needed anyway in a symbol table, plus some implementation-defined data to describe the type hierarchy. So probably you might as well have all of RTTI by that point.
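To make the "type equality for non-class types" point concrete, here is a tiny sketch (not the real EH ABI, just the idea) of the kind of runtime type comparison a handler match boils down to:

#include <iostream>
#include <typeinfo>

// Conceptual only: a handler "matches" when the thrown type and the handler
// type compare equal; non-class types have no hierarchy, so equality is all
// that's needed.
void conceptual_match(const std::type_info& thrown, const std::type_info& handler) {
    std::cout << (thrown == handler ? "handler matches\n" : "no match\n");
}

int main() {
    conceptual_match(typeid(int), typeid(int));    // matches, like catch (const int&)
    conceptual_match(typeid(int), typeid(long));   // no match
}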


The problem with exceptions is not necessarily their speed (which may differ greatly depending on the implementation), but what they actually do.

In the real-time world, when you have a time constraint on an operation, you need to know exactly what your code does. Exceptions provide shortcuts that may influence the overall run time of your code (an exception handler may not fit into the real-time constraint, or because of an exception you might not return the query response at all, for example).

If by "real-time" you in fact mean "embedded", then code size, as mentioned, becomes an issue. Embedded code is not necessarily real-time, but it can have a size constraint (and often does).

Also, embedded systems are often designed to run forever, in an infinite event loop. An exception may take you somewhere out of that loop, and may also corrupt your memory and data (because of the stack unwinding) - again, it depends on what you do with them, and how the compiler actually implements them.

So better safe than sorry: don't use exceptions. If you can sustain occasional system failures, if you're running in a separate task that can be easily restarted, or if you're not really real-time but just pretend to be - then you can probably give them a try. If you're writing software for a heart pacemaker - I would prefer to check return codes.
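For the "separate task that can be easily restarted" case, the pattern usually looks something like the sketch below (function names are hypothetical): exceptions are confined to one worker loop whose failure only resets that task, never the whole system.

#include <cstdio>
#include <exception>

void run_one_iteration();   // hypothetical: defined elsewhere, may throw on bad input

void worker_task() {
    for (;;) {                               // the usual infinite event loop
        try {
            run_one_iteration();
        } catch (const std::exception& e) {
            std::fprintf(stderr, "task error: %s - resetting task state\n", e.what());
            // reset task-local state here, then carry on with the next event
        }
    }
}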


C++ exceptions still aren't supported by every realtime environment in a way that makes them acceptable everywhere.

In the particular example of video games (which have a soft 16.6ms deadline for every frame), the leading compilers implement C++ exceptions in such a way that simply turning on exception handling in your program will significantly slow it down and increase code size, regardless of whether you actually throw exceptions or not. Given that both performance and memory are critical on a game console, that's a dealbreaker: the PS3's SPU units, for example, have 256kb of memory for both code and data!

On top of this, throwing exceptions is still quite slow (measure it if you don't believe me) and can cause heap deallocations which are also undesirable in cases where you haven't got microseconds to spare.
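If you want to measure it yourself, a rough micro-benchmark along these lines works (this sketch is entirely mine, and the numbers will vary wildly with compiler, ABI and optimization level): compare a failing call reported by throw/catch against the same failure reported by a return code.

#include <chrono>
#include <cstdio>
#include <stdexcept>

volatile int sink = 0;   // keeps the optimizer from discarding the loops

void fail_with_exception() { throw std::runtime_error("fail"); }
int  fail_with_code()      { return -1; }

int main() {
    using clock = std::chrono::steady_clock;
    constexpr int N = 100000;

    auto t0 = clock::now();
    for (int i = 0; i < N; ++i) {
        try { fail_with_exception(); } catch (const std::exception&) { sink = sink + 1; }
    }
    auto t1 = clock::now();

    for (int i = 0; i < N; ++i) {
        if (fail_with_code() < 0) sink = sink + 1;
    }
    auto t2 = clock::now();

    std::printf("throw/catch: %lld us, return code: %lld us\n",
        (long long)std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count(),
        (long long)std::chrono::duration_cast<std::chrono::microseconds>(t2 - t1).count());
}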

The one... er... exception I have seen to this rule is cases where the exception might get thrown once per app run -- not once per frame, but literally once. In that case, structured exception handling is an acceptable way to catch stability data from the OS when a game crashes and relay it back to the developer.


The implementation of the exception mechanism is usually very slow when an exception is thrown; otherwise the cost of using exceptions is almost zero. In my opinion, exceptions are very useful if you use them correctly.

In RT applications, exceptions should be thrown only when something goes so wrong that the program has to stop and fix the issue (possibly waiting for user interaction). Under such circumstances, it takes longer to fix the issue than to handle the exception.

Exceptions provide a hidden path for reporting an error. They make the code shorter and more readable, and therefore easier to maintain.
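A small sketch of that readability claim, with hypothetical function names: error propagation through return codes at every level, versus letting an exception skip the intermediate plumbing.

#include <stdexcept>

// ----- return-code style: every caller checks and forwards the error -----
int read_sensor(int& out)           { out = 42; return 0; }      // 0 = success
int filter_sample(int in, int& out) { out = in / 2; return 0; }

int process_with_codes(int& result) {
    int raw = 0;
    if (int rc = read_sensor(raw))           return rc;   // forward failure
    if (int rc = filter_sample(raw, result)) return rc;
    return 0;
}

// ----- exception style: intermediate code only spells out the success path -----
int read_sensor_or_throw()         { return 42; }
int filter_sample_or_throw(int in) {
    if (in < 0) throw std::runtime_error("bad sample");
    return in / 2;
}

int process_with_exceptions() {
    return filter_sample_or_throw(read_sensor_or_throw());
}

int main() {
    int r = 0;
    (void)process_with_codes(r);
    (void)process_with_exceptions();
}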


Typical implementations of C++ exception handling are still not ideal, and might render the entire language implementation almost unusable for some embedded targets with extremely limited resources, even if the user code does not explicitly use these features. This is referred to as a violation of the "zero-overhead principle" by recent WG21 papers; see N4049 and N4234 for details. In such environments, exception handling does not work as expected (i.e. consuming a reasonable amount of system resources) whether the application is real-time or not.

However, there should still be real-time applications in embedded environments that can afford this overhead, e.g. a video player on a handheld device.

Exception handling should always be used carefully. Throwing and catching exceptions per frame in a real-time application on any platform (not only in embedded environments) is a bad design/implementation and not acceptable in general.


There are generally three or four constraints in embedded / real-time development, especially when that implies kernel-mode development:

  • At various points - usually while handling hardware exceptions - operations MUST NOT cause further hardware exceptions. C++'s implicit data structures (vtables) and code (default constructors & operators & other implicitly generated code to support the C++ exception mechanism) cannot have their placement controlled, and as a result cannot be guaranteed to reside in non-paged memory when executed in this context.

  • Code quality - C++ code in general can hide a lot of complexity in statements that look trivial, making code difficult to visually audit for errors. Exceptions decouple handling from the error location, making it difficult to prove test coverage of the code.

  • C++ exposes a very simple memory model: new allocates from an infinite free store until you run out, and then it throws an exception. In memory-constrained devices, more efficient code can be written that makes explicit use of fixed-size blocks of memory (see the sketch after this list). C++'s implicit allocations on almost any operation make it impossible to audit memory use. Also, most C++ heaps exhibit the disturbing property that there is no computable upper limit on how long an allocation can take, which again makes it difficult to prove the response time of algorithms on real-time devices where fixed upper limits are desirable.
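As an illustration of the fixed-size-block point above, here is a minimal pool-allocator sketch (mine, not the answerer's): allocation and release are constant-time and bounded, unlike a general-purpose heap.

#include <cstddef>

// A fixed pool of BlockCount blocks of BlockSize bytes each, threaded onto an
// intrusive free list. Allocation and release never touch the global heap and
// take a constant, bounded number of steps.
template <std::size_t BlockSize, std::size_t BlockCount>
class FixedBlockPool {
public:
    FixedBlockPool() {
        for (std::size_t i = 0; i + 1 < BlockCount; ++i)
            next_of(block_at(i)) = block_at(i + 1);
        next_of(block_at(BlockCount - 1)) = nullptr;
        free_list_ = block_at(0);
    }

    void* allocate() {                               // O(1)
        if (free_list_ == nullptr) return nullptr;   // pool exhausted: caller decides
        void* block = free_list_;
        free_list_ = next_of(block);
        return block;
    }

    void deallocate(void* block) {                   // O(1)
        next_of(block) = free_list_;
        free_list_ = block;
    }

private:
    static_assert(BlockCount > 0, "pool must contain at least one block");
    static_assert(BlockSize >= sizeof(void*), "block must hold a free-list link");

    void*  block_at(std::size_t i) { return storage_ + i * BlockSize; }
    void*& next_of(void* block)    { return *static_cast<void**>(block); }

    alignas(std::max_align_t) unsigned char storage_[BlockSize * BlockCount];
    void* free_list_ = nullptr;
};

int main() {
    FixedBlockPool<64, 16> pool;
    void* p = pool.allocate();    // deterministic, no global heap involved
    pool.deallocate(p);
}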
