Why does Visual Studio use xchg ax,ax

I was looking through the disassembly of my program (because it crashed) and noticed lots of

xchg    ax, ax

I googled it and found out it's essentially a nop, but why does Visual Studio emit an xchg instead of a nop?

The application is a C# .NET 3.5 64-bit application, compiled by Visual Studio.


On x86, the NOP instruction is XCHG AX, AX.

The two mnemonics assemble to the same binary opcode. (Actually, I suppose an assembler could use any xchg of a register with itself, but AX or EAX is what's typically used for the NOP, as far as I know.)

xchg ax, ax changes no register values and no flags (hey - it's a no-op!).
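For example, assembling both mnemonics shows they produce the same byte (a sketch in NASM syntax; the file name is arbitrary):

    ; assemble with: nasm -f bin nop_demo.asm -o nop_demo.bin
    ; a hex dump of the output shows the same byte, 0x90, twice
    bits 32

    nop                 ; assembles to 90
    xchg eax, eax       ; NASM picks the single-byte short form: also 90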


Edit (in response to a comment by Anon.):

Oh right - now I remember there are several encodings for the xchg instruction. Some take a ModR/M set of bits (like many Intel x86 instructions) that specifies a source and destination; those encodings take more than one byte. There's also a special single-byte encoding that exchanges a general-purpose register with (E)AX. If the specified register is also (E)AX, you get a single-byte NOP instruction. You can also specify that (E)AX be exchanged with itself using the larger variant of the xchg instruction.

I'm guessing that MSVC uses the multi-byte version of xchg, with (E)AX as both source and destination, when it wants to chew up more than one byte with no operation - it takes the same number of cycles as the single-byte xchg but uses more space. In the disassembly you won't see the multi-byte xchg decoded as a NOP, even though the result is the same.

Specifically, xchg eax, eax (or nop) can be encoded as opcode 0x90 or as 0x87 0xC0, depending on whether you want it to use up one byte or two. The Visual Studio disassembler (and probably others) will decode opcode 0x90 as the NOP instruction and will decode 0x87 0xC0 as xchg eax, eax.
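The two encodings can be emitted as raw bytes to pin down the difference (a sketch in NASM syntax, using db to force the exact encodings):

    bits 32

    ; short form: one byte, decoded by most disassemblers as "nop"
    db 0x90             ; nop / xchg eax, eax (short form)

    ; long form: the same operation through a ModR/M encoding, two bytes,
    ; decoded by the Visual Studio disassembler as "xchg eax, eax"
    db 0x87, 0xC0       ; xchg eax, eax (long form)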

It's been a while since I've done detailed assembly language work, so chances are I'm wrong on at least one count here...


xchg ax,ax and nop are actually the same instruction; they map to the same opcode (0x90, IIRC). That's fine - xchg ax,ax is a no-op. Why waste extra opcode encodings on instructions that don't do anything?

What's questionable is why you see both mnemonics printed. I'd guess it's just a quirk of your disassembler; there is no binary difference.


Actually, xchg ax,ax is just how MS disassembles "66 90". 66 is the operand-size override prefix, so it supposedly operates on ax instead of eax. However, the CPU still executes it as a nop. The 66 prefix is used here to make the instruction two bytes in size, usually for alignment purposes.
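A quick sketch of that encoding (NASM syntax):

    bits 32

    db 0x66, 0x90       ; operand-size prefix + nop; MS disassembles this as "xchg ax, ax"
    xchg ax, ax         ; equivalently, NASM emits 66 90 for this in 32-bit code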


MSVC generally puts NOPs in the compiled code for debug builds. This allows Edit & Continue to work.


I don't know if it's related to the question, but many Windows functions begin with MOV EDI, EDI. That's also a two-byte NOP. Two-byte NOPs are useful for hotpatching code, because you can safely replace one with a short JMP.

Reference: http://blogs.msdn.com/b/oldnewthing/archive/2011/09/21/10214405.aspx
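A sketch of the hotpatch layout described in the linked article (NASM syntax; the labels are made up for illustration):

    bits 32

    ; unpatched: 5 bytes of padding before the function, MOV EDI, EDI at entry
    padding:
        times 5 db 0x90     ; room for a 5-byte "jmp rel32" (E9 xx xx xx xx)
    hot_function:
        mov edi, edi        ; 8B FF: a 2-byte nop that can be overwritten atomically
        push ebp
        mov ebp, esp
        ; ... function body ...
        pop ebp
        ret

    ; to hotpatch: write a long jmp to the detour (E9 rel32) into the padding,
    ; then replace MOV EDI, EDI with EB F9 - a short JMP back 5 bytes, onto the long jmp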


The real question here is: why did the disassembler choose to display it as xchg ax,ax and not nop?

I suspect this is from 32-bit or 64-bit code, and (given that the disassembler displayed xchg ax,ax rather than xchg eax,eax) that there's an operand-size override prefix intended to make the nop slightly larger, to achieve a certain amount of padding with fewer instructions. The presence of the prefix has confused the disassembler into showing xchg ax,ax rather than nop or o16 nop.
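To illustrate the padding point with raw bytes (a sketch in NASM syntax):

    bits 32

    ; 4 bytes of padding as four plain nops: four instructions to decode
    db 0x90, 0x90, 0x90, 0x90

    ; the same 4 bytes as two prefixed nops (66 90, i.e. o16 nop):
    ; only two instructions to decode
    db 0x66, 0x90, 0x66, 0x90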
