Marshalling of a 32-bit int to a 16-bit int machine
I want to implement and understand the concept of marshalling over my own RPC mechanism (a toy, really). While I get the idea behind endianness, I am not sure how to handle 32-bit and 16-bit ints. The problem is this: machine A represents int as 32 bits and wants to call a function int foo(int x) over an RPC call, but the server represents int as 16 bits. Sending just the lower 16 bits would lose information and is not desirable.
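To make the loss concrete, here is a minimal C sketch (the variable names are mine) of what naive low-16-bit truncation does to a value that does not fit:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        int32_t sender_value = 70000;                         /* needs more than 16 bits */
        int16_t received = (int16_t)(sender_value & 0xFFFF);  /* keep only the low 16 bits */
        printf("%ld -> %d\n", (long)sender_value, (int)received); /* prints: 70000 -> 4464 */
        return 0;
    }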
I know IDLs exist to solve this problem. But suppose I use an IDL that "defines" int to be 32 bits. While that works for my scenario, whenever machine A has a 16-bit int, 2 bytes are wasted on every transmission over the network.
If we flip the IDL to define a 16-bit int, then the user has to manually split their local int and do something fancy, completely breaking the transparency of the RPC.
So what is the right way used in actual implementations?
Thanks.
Usually, IDLs define several platform-independent types (UInt8, Int8, UInt16, Int16, UInt32, Int32, UInt64, Int64) and a few platform-dependent ones, such as int and uint. The platform-dependent types have only limited uses, such as the size or index of an array; it is recommended to use the platform-independent types for everything else.
If a parameter is declared in the IDL as Int32, then on any platform it MUST be Int32. If it is declared as int, then its size depends on the platform.
For example, see COM's VARENUM and VARIANT: there are platform-independent types (such as SHORT (VT_I2), LONG (VT_I4), and LONGLONG (VT_I8)) as well as machine-dependent types (such as INT (VT_INT)).
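As a concrete illustration, here is a minimal sketch of what a stub generated from an Int32 parameter might do. The function names, the big-endian wire order, and the return-code convention are my own assumptions, not taken from any particular RPC framework:

    #include <stdint.h>
    #include <limits.h>

    /* Write a wire Int32 as 4 big-endian bytes, independent of host endianness. */
    void put_int32(uint8_t buf[4], int32_t v) {
        uint32_t u = (uint32_t)v;
        buf[0] = (uint8_t)(u >> 24);
        buf[1] = (uint8_t)(u >> 16);
        buf[2] = (uint8_t)(u >> 8);
        buf[3] = (uint8_t)u;
    }

    /* Read a wire Int32 and narrow it to the native int.
       On a machine with a 16-bit int, the range check rejects values
       outside [INT_MIN, INT_MAX] instead of silently truncating.
       Returns 0 on success, -1 on overflow. */
    int get_int32_as_int(const uint8_t buf[4], int *out) {
        int32_t v = (int32_t)(((uint32_t)buf[0] << 24) |
                              ((uint32_t)buf[1] << 16) |
                              ((uint32_t)buf[2] << 8)  |
                               (uint32_t)buf[3]);
        if (v < INT_MIN || v > INT_MAX)
            return -1;  /* surface a marshalling fault to the caller */
        *out = (int)v;
        return 0;
    }

This keeps the wire format fixed at 4 bytes on every platform; the receiver only pays for the range check when its native int is narrower than the wire type.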