Can C# and C++ interop for high-performance code?

We have legacy C++ code performing high-performance data processing (e.g., large volumes of data fed from hardware devices that is processed in a time-sensitive manner for display, transforms, and transfer to secondary storage).

We are interested in C#/.NET for new GUIs and new utilities (existing GUIs are C++ MFC and Qt). Of course, the existing system has no "language translation" issue with the .NET runtime, because the existing code is all C++.

After much study, and many books, I'm not sure this can be done effectively. Possible approaches (am I missing any?):

  1. Rewrite everything in .NET (can't happen -- too much code, bare-metal device access, time-sensitive heavy algorithm processing)
  2. Thin adapter layer for Managed C++/CLI
  3. Thick adapter layer for Managed C++/CLI
  4. Don't use .NET (managers feel great sadness)

Our concern about (2), the "thin adapter layer", is that it would be nice if the GUIs could (re-)use the logic in the "business" layer (many properties are algorithmically derived); if we don't expose/wrap the C++ classes, much GUI logic will merely replicate the existing C++ logic in the business layer.

Our concern about (3), the "thick adapter layer", is that it seems very tedious (expensive) to wrap each C++ class with a C# class, and several books suggest that the boxing/unboxing cost of every access across that boundary makes this approach unworkable beyond trivial designs (it is performance-prohibitive).

How would you interface new C#/.NET (GUI) on top of a deep-rich-class-structure implemented in C++?


C++/CLI is perfect for this. There are no performance issues with the managed/unmanaged translation, since C++/CLI uses the same optimized call technique used by the .NET runtime engine itself to implement high-performance methods such as string concatenation.
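As a rough sketch of what such a thin wrapper looks like (the class and method names here are hypothetical stand-ins, not taken from your system), a C++/CLI ref class just holds a pointer to the native object and forwards coarse-grained calls:

    #include <cstddef>

    class NativeProcessor {                 // hypothetical existing native C++ class
    public:
        // Stand-in for the real time-sensitive processing code.
        void ProcessBlock(const double* data, std::size_t count)
        {
            sum_ = 0.0;
            for (std::size_t i = 0; i < count; ++i) sum_ += data[i];
        }
        double DerivedProperty() const { return sum_; }
    private:
        double sum_ = 0.0;
    };

    public ref class ProcessorWrapper       // managed facade the C# GUI sees
    {
    public:
        ProcessorWrapper() : native_(new NativeProcessor()) {}
        ~ProcessorWrapper() { this->!ProcessorWrapper(); }             // Dispose
        !ProcessorWrapper() { delete native_; native_ = nullptr; }     // finalizer

        // One boundary crossing per block of samples, not per sample.
        void ProcessBlock(array<double>^ data)
        {
            if (data->Length == 0) return;
            pin_ptr<double> p = &data[0];   // pin the managed array, no copy
            native_->ProcessBlock(p, data->Length);
        }

        property double DerivedProperty
        {
            double get() { return native_->DerivedProperty(); }
        }

    private:
        NativeProcessor* native_;           // owned native object
    };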

The performance problems arise when you copy data back and forth between .NET and native versions of the same data structure, but you would have the same problem if you used, say, a library that uses BSTR alongside one that uses std::string. With C++/CLI the slow operations are at least explicit and obvious, unlike with P/Invoke, which tries to make these translations transparent and ends up hiding the performance problems in the process.

There are also some tricks you can use to overcome this. For example, instead of copying a std::vector into a System::Collections::Generic::List, implement an IEnumerator that directly reads from the std::vector.
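A minimal sketch of that idea follows; the names are illustrative, and the non-generic IEnumerator is used to keep it short, at the cost of boxing each element (the generic IEnumerator<double> avoids the boxing but needs a little more boilerplate):

    #include <vector>

    using namespace System;
    using namespace System::Collections;

    // Walks a native std::vector in place; nothing is copied into a List.
    // The vector stays owned by the C++ layer and must outlive the enumerator.
    public ref class NativeVectorEnumerator sealed : IEnumerator
    {
    public:
        NativeVectorEnumerator(const std::vector<double>* v)
            : vec_(v), index_(-1) {}

        virtual bool MoveNext() { return ++index_ < static_cast<int>(vec_->size()); }
        virtual void Reset()    { index_ = -1; }

        property Object^ Current
        {
            // Boxes one double per step; the generic interface avoids this.
            virtual Object^ get() { return (*vec_)[index_]; }
        }

    private:
        const std::vector<double>* vec_;   // borrowed pointer, owned natively
        int index_;
    };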

And of course, if the data is simply going to be passed directly back to another C++ function, there's no reason to convert it to a managed type at all. Again, C++/CLI makes preserving the format easy, where P/Invoke tries to convert everything behind your back.
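A sketch of that pass-through pattern, again with hypothetical names: the managed side only ever holds an opaque handle, so the data itself is never marshaled at all:

    // Hypothetical native producer/consumer pair from the existing C++ code.
    struct NativeFrame { /* raw device data */ };
    inline NativeFrame* AcquireFrame()         { return new NativeFrame(); }
    inline void StoreFrame(NativeFrame* frame) { delete frame; }

    // Managed handle: the C# GUI can pass it around but never sees the contents.
    public ref class FrameHandle sealed
    {
    internal:
        FrameHandle(NativeFrame* f) : frame_(f) {}
        NativeFrame* Native() { return frame_; }
    private:
        NativeFrame* frame_;
    };

    public ref class Pipeline
    {
    public:
        FrameHandle^ Acquire()         { return gcnew FrameHandle(AcquireFrame()); }
        void Store(FrameHandle^ frame) { StoreFrame(frame->Native()); }
    };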

In summary, the "thin" C++/CLI wrapper layer is the best of your options.


You have the right idea about the constraints. Crossing the boundary is expensive, so you don't want to do it for fine-grained operations. How fine-grained? That depends, of course.

In the ideal case your C++ code is layered into a rational object model, over which you can put a COM layer (or similar) for larger-grained operations. As one example, rather than exposing an object with 5 properties, and a setter/getter pair on each one, you'd want to expose a SetProperties() method that accepts a map of all the properties to be set. This is by no means the only case you need to look out for; it's just an example of how to bias yourself toward larger-grained operations.
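For example, a batched setter that crosses the boundary once might look something like this (the names are hypothetical; marshal_as from msclr is used only for the string conversion):

    #include <map>
    #include <string>
    #include <msclr/marshal_cppstd.h>

    using namespace System;
    using namespace System::Collections::Generic;

    class NativeChannel {                       // hypothetical existing native class
    public:
        void SetProperties(const std::map<std::string, double>& props)
        {
            settings_ = props;                  // stand-in for the real work
        }
    private:
        std::map<std::string, double> settings_;
    };

    public ref class ChannelWrapper
    {
    public:
        ChannelWrapper() : native_(new NativeChannel()) {}
        ~ChannelWrapper() { this->!ChannelWrapper(); }
        !ChannelWrapper() { delete native_; native_ = nullptr; }

        // One boundary crossing for the whole batch instead of N setter calls.
        void SetProperties(Dictionary<String^, double>^ props)
        {
            std::map<std::string, double> nativeProps;
            for each (KeyValuePair<String^, double> kv in props)
                nativeProps[msclr::interop::marshal_as<std::string>(kv.Key)] = kv.Value;
            native_->SetProperties(nativeProps);
        }

    private:
        NativeChannel* native_;
    };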

About COM: it's nice but of course not required. Using COM enforces a discipline on you, in that you need to formally define the operations in the COM interface. Without that formal enforcement, your team could "cheat" and expose numerous tactical integration points between the layers, which can result in sneaky performance problems.

If you have solid project management and good team members, then you could do the enforcement of these project standards without relying on the formality of COM. Define your own wrapper classes in C#, through which all boundary-crossing occurs.

The bottom line, I suspect, is that you won't know exactly the right decision until you play around with it and test it.

Get a couple of team members, good devs, and have them build prototypes of two different options, e.g. thin vs. thick, where the exact meanings of those terms are defined by you. Give them three weeks or so to put something together. Then measure their productivity and the resulting performance, and make a decision based on that.
