Code Analysis Tools and Inter-Type Declarations
I have a Maven project generated by Spring Roo and use several tools (Checkstyle, PMD, etc.) to collect information about my project (namely, I am using Codehaus' Sonar for this).
Roo makes heavy use of AspectJ inter-type declarations (ITDs) to separate concerns like persistence, JavaBean getters/setters, etc.
These ITDs are woven in at compile time, so tools like Checkstyle and PMD (which work at the source level) report a lot of false positives.
The only solution I currently see is to deactivate checks for classes that use ITDs.
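For reference, this kind of per-class deactivation can be expressed with Checkstyle's SuppressionFilter. A minimal sketch, assuming the Roo-managed classes live under a `domain` package (the file pattern is illustrative and would need to match your layout):

```xml
<?xml version="1.0"?>
<!DOCTYPE suppressions PUBLIC
    "-//Puppy Crawl//DTD Suppressions 1.1//EN"
    "http://www.puppycrawl.com/dtds/suppressions_1_1.dtd">
<suppressions>
  <!-- Silence all checks for ITD-using classes; the path pattern is hypothetical -->
  <suppress files=".*[/\\]domain[/\\].*\.java" checks=".*"/>
</suppressions>
```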
Any better ideas?
This answer will not help you right now, but hopefully it is of interest, as it promises a solution to your problem in the near future. I don't know whether you know IntelliJ IDEA, the Java IDE from JetBrains, but work is already being done in this direction; here is the link to the dedicated issue that you might want to follow: http://youtrack.jetbrains.net/issue/IDEA-26959. Just set a watch on it and get notified when the feature is implemented. IntelliJ IDEA provides really powerful static code analysis (SCA), so its ITD support should be of high quality as well.
Doubt it will be a "niche problem" for much longer :-) Hopefully the tool vendors will look at the necessary enhancements.
Both FindBugs and Cobertura do NOT work on the source level, but on the bytecode level. So you should weave in your aspects statically at compile time (which would also improve application start-up time), e.g. using Maven's AspectJ plugin, rather than at load time, and then run the static analysis tools on the resulting bytecode.
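A compile-time weaving setup along these lines can be sketched as follows (the plugin version is omitted and should be chosen to match your AspectJ version):

```xml
<!-- pom.xml build/plugins entry: weave aspects during compilation -->
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>aspectj-maven-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>compile</goal>       <!-- weave main classes -->
        <goal>test-compile</goal>  <!-- weave test classes -->
      </goals>
    </execution>
  </executions>
</plugin>
```

With this in place, bytecode-level tools like FindBugs and Cobertura see the fully woven `.class` files rather than the unwoven originals.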
If you want to reason about the source code after aspects have been woven into the code, you should weave the aspects into the source code rather than the binary code.
Many aspect weavers do binary-code weaving because they don't have access to the information (symbol table, names, types, expression types, ...) produced by a compiler front end. So the hack is to use the virtual-machine code produced by the compiler, which is often easy to decode (a nice, regular instruction set) and comes decorated with symbol-table information; this stunt basically only works for VM instruction sets like the .NET IL and Java class files.
But if you can't reason about the binary results of such a weaving process, then you can't be sure that the woven program isn't buggy, which is the point of the OP's original question: "How do I run SCA tools on the (effective) woven source?"
You can fix this two ways:
- Get the community to write SCA tools that process bytecode rather than source. This might be hard because the source code may contain information lost in the compilation process.
- A better idea: Get the aspect community to write aspect weavers that operate on source code, and produce source code. This might be hard because getting full language front ends is difficult.
I can't help you make the community make a choice.
I can offer strong encouragement to help the community choose the second way: our DMS Software Reengineering Toolkit. This is a program transformation system that carries out directives of the form "if you see this, replace it with that", while honoring the syntax and semantics of the language, by applying such changes to compiler data structures produced by full language front ends. (This is the software-engineering version of equational substitution in mathematics.) The changed data structures can be re-exported as compilable source text, complete with comments.
If you understand what transformations can do in general, you can see that aspect weavers are a special case of program transformation systems. So, it is easy to implement aspect weavers using DMS, and the results are source code, which means you can apply the source-code analysis tools.
I doubt this actually solves the OP's problem of analyzing Roo-generated code in the short term :-{
Could you add tool-specific annotations/comments to the Java code to suppress the false positives? For example, FindBugs has its own @SuppressWarnings annotation.
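As an illustration, PMD honors the standard `java.lang.SuppressWarnings` annotation, so ITD-heavy classes can silence individual rules with no extra dependencies (FindBugs' own annotation, by contrast, lives in a separate annotations jar). A sketch, where the class and the suppressed rule name are hypothetical:

```java
// Sketch: suppressing a PMD rule on a Roo-style entity class.
// "PMD.UnusedPrivateField" is an illustrative rule name; the field looks
// unused to source-level tools because its accessors live in an ITD.
@SuppressWarnings("PMD.UnusedPrivateField")
class Person {
    private String name; // accessors would be introduced by a Roo ITD

    // Plain helper so the class does something observable on its own.
    String displayName() {
        return name == null ? "(unnamed)" : name;
    }
}
```

This keeps the suppression next to the code it concerns, instead of maintaining a separate exclusion file per tool.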