
Does "DO-178B level A" prohibits optimizing compilers?

There is an "DO-178B" level A and level B certification for airborne systems. Does it prohibit using of optimizating compilers?

E.g. some compilers will reorder instructions to get more performance. Does DO-178B level A or level B prohibit this reordering?
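
For example (an illustrative snippet of my own, not taken from any real project):

```c
/* The two assignments below have no data dependency, so under the C
 * "as-if" rule an optimizing compiler may emit their stores in either
 * order for non-volatile data. */
int a, b;

void update(int x, int y)
{
    a = x * 2;  /* independent of the next line...           */
    b = y + 1;  /* ...so the generated stores may be swapped */
}
```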

Most modern CPUs have such reordering built into the hardware. Are they allowed to be used within DO-178B level A software/hardware systems?


First, and critically: For this type of question, if the answer matters, you need to get a formal professional opinion from someone who is competent to provide it, or discuss this with your certification authority. Any reply you will get here should not be relied on.

With that said, I will assume you are asking out of curiosity and will not be relying on the answer in any meaningful way, and I will attempt to answer in that vein. I am not a professional, and this is not professional advice.

The most on-point documentation I could find online with a quick search was this FAA guideline paper about a related topic: http://www.faa.gov/aircraft/air_cert/design_approvals/air_software/cast/cast_papers/media/cast-12.pdf. This paper describes the conditions under which one must do verification of the generated object code rather than the source code. In particular, it gives a number of examples that will occur even in non-optimized code -- automatic variable initialization and exception handling are a couple of examples. On compiler optimization, it notes:

Compiler optimization is another area addressed under section 4.4.2a of DO-178B/ED-12B. This involves the analytical determination that the optimization features do not compromise the ability of the test cases to demonstrate requirements-based testing and structural coverage consistent with the software level. This is a separate issue from the traceability and additional verifications issues addressed by Section 4.4.2b. This is outside the scope of this paper.

I do not have a copy of DO-178B handy to read section 4.4.2a, but I would note that (a) there are procedures for handling other cases where the object code does not correspond to the source code in a one-to-one manner, and (b) this pretty strongly implies that compiler optimization is discussed rather than outright prohibited.
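
To make the paper's variable-initialization example concrete, here is a sketch of my own (not taken from the paper) of one source statement producing object code with no line-by-line counterpart:

```c
void frame_init(void)
{
    /* One source statement; for a large zero-initialized local, many
     * compilers emit a call to memset or an inline clearing loop, i.e.
     * object code (a library call) that no source line explicitly
     * requested and that must still be verified. */
    int scratch[256] = {0};
    (void)scratch;  /* silence unused-variable warnings */
}
```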

It's also pretty clear from a number of the discussions in that paper that the answer to "we can't trace things between the source code and the object code" is to validate the object code in some manner -- in other words, there is a solution other than prohibiting such things.

Thus, I would conclude that at least some compiler optimizations must be permitted.

In particular, the sort of reordering that you describe is quite traceable, and it seems almost certain to me that it would be permitted.


DO-178B is not absolute and is open to interpretation. If you switch off optimisation, there are no questions and nothing to explain. By sticking to the most obvious interpretation you avoid having to sell your interpretation to the certification authorities later on, and you avoid opening yourself up to questions about how you did things.

When you optimise your code it is hard to achieve the source-to-instruction traceability that is required for level A. In addition, if you are working under DO-178B, getting that extra 5% out of your software is not your greatest concern. The ease of completing all the required certification steps should be your primary concern, since that is what will be consuming all your time.
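
As an illustration of the traceability problem (my own sketch): even a simple dead-store elimination means a source line ends up with no corresponding instruction at all.

```c
int result;

void set_result(int x)
{
    result = x;      /* dead store: overwritten below, so a typical
                      * compiler at -O2 removes it entirely and no
                      * instruction traces back to this line        */
    result = x + 1;  /* only this store survives in the object code */
}
```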

The hardware part of your question is interesting. In software optimisation, the code is not just reordered, it is changed as well. In hardware, the code is not changed to gain speed, only the execution order. I will have to ask around to get more information on what the thinking is on this.


I have only superficial knowledge of DO-178B (I do not work day-to-day with it, but I build tools for people who do).

The standard takes traceability very seriously. High-level requirements are refined into low-level requirements, which are implemented by the source code, which is compiled by the compiler. At each of these steps, one must be able to justify what was done in terms of the specification produced by the previous step.

For the compiler, this means that one must be able to read the assembly and trace one particular instruction to the source code statement that caused this instruction to be generated.
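
A sketch of my own showing why optimization blurs that mapping: after common subexpression elimination, a single multiply instruction can "belong" to two source statements at once.

```c
int f(int a, int b)
{
    int x = a * b + 1;  /* unoptimized: each line gets its own multiply */
    int y = a * b + 2;  /* optimized: the compiler computes a * b once
                         * and reuses it, so the surviving multiply can
                         * no longer be traced to one particular line   */
    return x + y;
}
```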

So, in short, yes, I think this prohibits most optimizations.

Concerning the hardware this software runs on: it is verified differently (but, I would guess, just as stringently). The relevant standard there is DO-254, and I do not know anything about it.


With optimization, you need to verify the generated code at the object (assembly) level. There are compiler suites and libraries for embedded real-time multitasking that have been previously verified in other projects, giving you a level of comfort that they can be verified again -- but you still need to verify the code as used in your application.


To avoid delays and having to explain things, just turn off optimizations and the cache. This makes the code deterministic. Also, try not to use GCC if possible; go for a qualified compiler such as IAR, DDC-I, or Irvine Compilers. Instead of trying to bang in the screw with a fancy hammer, get a screwdriver that fits the screw. Because when that plane crashes with 200 people on board, with mothers, fathers and children, and they find out that the compiler reordered code and that caused the failure, you will wish you had used the right screwdriver.
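
If some unit must nevertheless be built with optimization on, one common defensive idiom (a sketch under my own assumptions, not a certification recipe, and with made-up register addresses) is to mark memory-mapped registers volatile, which forbids the compiler from reordering or eliding those accesses:

```c
#include <stdint.h>

/* Hypothetical MMIO addresses, for illustration only. */
#define DATA_REG (*(volatile uint32_t *)0x40001004u)
#define CTRL_REG (*(volatile uint32_t *)0x40001000u)

void send(uint32_t word)
{
    /* volatile accesses keep their program order relative to each
     * other at any optimization level; CPU write buffering remains a
     * separate, hardware-level question. */
    DATA_REG = word;
    CTRL_REG = 1u;
}
```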
