ReadProcessMemory - Buffer size affects function correctness
Here's an interesting problem:
I'm using ReadProcessMemory (from within C#) to write a simple debugger program. I need to go through a target process's entire memory space to find certain strings of bytes (FWIW, I'm using Boyer-Moore to save time, it's pretty cool).
To do this, I'm using ReadProcessMemory to copy large blocks of memory, iterate through it in my program, and then move on to the next block (yes, I also take into account the case where a value might straddle the border between two blocks).
However, ReadProcessMemory succeeds or fails depending on the size of the buffer it is told to copy into. To investigate, I ran ReadProcessMemory against calc.exe (Windows 7 x64) and got consistent results:
Here is my NativeMethods P/Invoke signature:
[DllImport("Kernel32.dll", CallingConvention=CallingConvention.Winapi, SetLastError=true)]
[return: MarshalAs(UnmanagedType.Bool)]
public static extern Boolean ReadProcessMemory(IntPtr process, void* baseAddress, void* destBuffer, IntPtr size, out IntPtr bytesRead);
And here is the code where I use it:
public IntPtr[] Search(Byte[] needle) {
    OpenProcess();
    List<IntPtr> ret = new List<IntPtr>();
    Int32 iterations = (int)( (MaxAddr32bit + 1) / BlockSize );
    IntPtr sizeOfBlock = new IntPtr( BlockSize );
    IntPtr bytesRead;
    byte* buffer = (byte*)Marshal.AllocHGlobal( sizeOfBlock );
    for( int i = 0; i < iterations; i++ ) {
        void* blockAddr = (void*)(i * BlockSize);
        bool ok = NativeMethods.ReadProcessMemory( _process, blockAddr, buffer, sizeOfBlock, out bytesRead );
        if( bytesRead.ToInt64() > 0 ) {
            switch( needle.Length ) {
                case 1: Search8 ( buffer, sizeOfBlock, ret, needle[0] ); break;
                case 2: Search16( buffer, sizeOfBlock, ret, needle[0], needle[1] ); break;
                case 4: Search32( buffer, sizeOfBlock, ret, needle[0], needle[1], needle[2], needle[3] ); break;
                case 8: Search64( buffer, sizeOfBlock, ret, needle[0], needle[1], needle[2], needle[3], needle[4], needle[5], needle[6], needle[7] ); break;
            }
        }
    }
    Marshal.FreeHGlobal( new IntPtr( buffer ) );
    CloseProcess();
    return ret.ToArray();
}
BlockSize is a constant that I've been varying, and I get different results depending on its value.
When BlockSize is a power of 2 less than or equal to 65536 (I tested 64, 512, 1024, 2048, 4096, 8192, 16384, 32768, and 65536), calls to ReadProcessMemory fail until blockAddr reaches 0x10000 (65536), at which point ReadProcessMemory returns TRUE and reports non-zero bytesRead values.
However, when BlockSize is 20480 (20 * 1024, aka 20KB, which is not a power of two), the function only returns TRUE once blockAddr is 0x14000 (81920). This is strange, because the 32768 and 65536 block sizes are larger than 20480 yet first succeed at a blockAddr of 0x10000.
When I use even larger block sizes (including 128KB and 1024KB), the first successful blockAddr is higher still: 0x60000 for 128KB and 0x600000 for 1MB.
Clearly I have to limit my program to 64KB-sized blocks of memory or risk being unable to read all of a process's memory, which would make my program no longer correct. But why should a simple buffer size affect program correctness? All Windows is doing is a simple memory copy.
FWIW, I'm running Windows 7 x64. My program is C# compiled with AnyCPU so it runs as x64. The program I'm targeting is C:\Windows\calc.exe which is also x64.
Don't guess at this. Use VirtualQueryEx() to find out where the memory is mapped in the target process and how large the regions are.
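A minimal C sketch of what that loop looks like (Windows-only, so treat it as an illustration; the same two calls P/Invoke from C# just as cleanly as ReadProcessMemory does):

```c
#include <windows.h>
#include <stdio.h>

/* Walk the target's address space with VirtualQueryEx and report only
   committed, readable regions; read those with ReadProcessMemory instead
   of guessing block addresses. */
void DumpReadableRegions(HANDLE process)
{
    MEMORY_BASIC_INFORMATION mbi;
    unsigned char *addr = NULL;

    while (VirtualQueryEx(process, addr, &mbi, sizeof mbi) == sizeof mbi) {
        int readable = mbi.State == MEM_COMMIT &&
                       !(mbi.Protect & PAGE_NOACCESS) &&
                       !(mbi.Protect & PAGE_GUARD);
        if (readable)
            printf("%p  %10zu bytes\n", mbi.BaseAddress, mbi.RegionSize);

        /* Step to the first byte past this region. */
        addr = (unsigned char *)mbi.BaseAddress + mbi.RegionSize;
    }
}
```

Each returned MEMORY_BASIC_INFORMATION covers one region, so the loop visits every gap and mapping exactly once, and the block size "problem" disappears: you read exactly RegionSize bytes from BaseAddress.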