
The new IntPtr.Add method - am I missing the point of the int?

Starting with .NET Framework 4.0, the IntPtr structure has an Add method:

public static IntPtr Add(
    IntPtr pointer,
    int offset
)

Which is great, as it's supposed to address all those questions on IntPtr math we have had (1, 2, probably more).
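
For illustration, here is a minimal sketch (the buffer and the values are purely illustrative) of the kind of pointer math the method is meant to make straightforward, without branching on IntPtr.Size:

// Sketch: walk an unmanaged buffer element by element using IntPtr.Add.
using System;
using System.Runtime.InteropServices;

class AddExample
{
    static void Main()
    {
        const int count = 4;
        IntPtr buffer = Marshal.AllocHGlobal(count * sizeof(int));
        try
        {
            for (int i = 0; i < count; i++)
            {
                // The offset is an int, so this works as long as the
                // byte offset fits in 32 bits.
                IntPtr element = IntPtr.Add(buffer, i * sizeof(int));
                Marshal.WriteInt32(element, i * 10);
            }
            Console.WriteLine(Marshal.ReadInt32(IntPtr.Add(buffer, 2 * sizeof(int)))); // 20
        }
        finally
        {
            Marshal.FreeHGlobal(buffer);
        }
    }
}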

But why is the offset an int?

Shouldn't it be an IntPtr? I can easily imagine offsetting a 64-bit pointer by a value that is beyond the int range.


For instance, consider Marshal.OffsetOf:

public static IntPtr OffsetOf(
    Type t,
    string fieldName
)

It returns an IntPtr as the offset to the structure member, which makes perfect sense. But you cannot easily use this offset with the new Add method: you'd have to cast it to Int64 and, if it exceeded the int range, call Add several times in a loop.
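
To make that concrete, here is a rough sketch (the Sample struct and its field names are made up) showing the type mismatch and the usual workaround of dropping to Int64 arithmetic instead of using Add:

// Sketch: combining a Marshal.OffsetOf result (an IntPtr) with a base pointer.
using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential)]
struct Sample
{
    public int Header;
    public double Value;   // the field we want a pointer to
}

class OffsetExample
{
    static void Main()
    {
        IntPtr basePtr = Marshal.AllocHGlobal(Marshal.SizeOf(typeof(Sample)));
        try
        {
            IntPtr offset = Marshal.OffsetOf(typeof(Sample), "Value");

            // Option 1: the offset is small here, so truncating to int is safe.
            IntPtr viaAdd = IntPtr.Add(basePtr, (int)offset.ToInt64());

            // Option 2: stay in 64-bit arithmetic and bypass Add entirely.
            IntPtr viaInt64 = new IntPtr(basePtr.ToInt64() + offset.ToInt64());

            Console.WriteLine(viaAdd == viaInt64); // True
        }
        finally
        {
            Marshal.FreeHGlobal(basePtr);
        }
    }
}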

Also, it seems to kill the very idea of IntPtr.Size being irrelevant to a properly written application. You have to cast the offset to a particular type, such as Int64, at which point you must start managing the size difference yourself. And imagine what will happen when a 128-bit IntPtr appears.


My question here is, why?

Am I correct in my conclusions, or am I missing the point?


It corresponds to a restriction in the x64 architecture: relative addressing is limited to a signed 32-bit offset. Matt Pietrek mentions this in his article (near "Luckily, the answer is no"). This restriction also explains why .NET objects are still limited to 2 GB in 64-bit mode; similarly, individual memory allocations in native x64 C/C++ code are limited as well. It is not that larger displacements are impossible (the displacement could be stored in a 64-bit register), it is just that this would make array indexing a lot more expensive.
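
As a quick illustration of that 2 GB cap (this only demonstrates the per-object limit as it stands on .NET 4.0 with default settings, not IntPtr.Add itself, and the exact behavior depends on the runtime configuration):

// Sketch: a single object larger than the per-object limit is expected
// to fail with OutOfMemoryException, even in a 64-bit process.
using System;

class TwoGbLimit
{
    static void Main()
    {
        try
        {
            byte[] huge = new byte[int.MaxValue]; // ~2 GB in one object
            Console.WriteLine(huge.Length);
        }
        catch (OutOfMemoryException)
        {
            Console.WriteLine("Single objects are capped at 2 GB.");
        }
    }
}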

The mysterious return type of Marshal.OffsetOf() is probably there for a corner case: after applying [StructLayout] and [MarshalAs], a managed struct could result in an unmanaged version that's larger than 2 GB.

Yes, this wouldn't map well to some future 128-bit architecture. But it is extraordinarily difficult to prepare today's software for an architecture when nobody knows what it will look like. Perhaps the old adage fits: 16 terabytes ought to be enough for anybody. And there's lots of room left to grow beyond that; 2^64 is rather a large number, and current 64-bit processors only implement a 2^48 address space. Some seriously non-trivial problems need to be solved before machines can get close to that.


If you define only:

public static IntPtr Add(IntPtr pointer, IntPtr offset)

then adding a 32-bit offset to a 64-bit pointer is less readable, IMHO.

Again, if you define

public static IntPtr Add(IntPtr pointer, long offset)

then adding a 64-bit offset to a 32-bit pointer is also awkward.

By the way, Subtract returns an IntPtr, so the IntPtr logic is not broken in any way.
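
A tiny sketch of that symmetry (the addresses are arbitrary): Add and Subtract both take an int offset and both hand back an IntPtr, so chained pointer math stays in IntPtr terms.

using System;

class SubtractExample
{
    static void Main()
    {
        IntPtr p = new IntPtr(0x1000);
        IntPtr q = IntPtr.Add(p, 0x10);      // move forward 16 bytes
        IntPtr r = IntPtr.Subtract(q, 0x10); // and back again
        Console.WriteLine(p == r);           // True
    }
}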

