
What is the ultimate difference between a 16-bit and a 32-bit application?

32-bit x86 is a superset of 16-bit x86. Suppose I write code in 16-bit x86. It should ideally work on a system with a 32-bit x86 CPU without any hitch. But that is not the case; compatibility is an issue here. But why exactly? Is it because a 32-bit OS installed on a 32-bit x86 machine loads programs into memory differently and manages memory differently? Are different memory-management requirements the real difference between 16-bit and 32-bit applications?


In Windows:

The major problem with running a 16-bit program on a 32-bit OS is that most 16-bit programs were written to run in real mode, which the OS no longer supports. Real mode and protected mode are fundamentally different, so real-mode code requires software emulation. Also, since the 16-bit API stubs, DOS functions, and BIOS calls are not available, such programs cannot really interact with the operating system, making them unusable without some kind of emulation. On Windows, NTVDM has provided that emulation since Windows NT 3.1.
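
To make that concrete, here is a minimal sketch of the kind of OS interaction a typical real-mode DOS program performs. It assumes a 16-bit DOS compiler such as Turbo C, whose <dos.h> provides the int86() interface; the point is only that the program calls straight into DOS through a software interrupt, which nothing on a modern OS will service without an emulator like NTVDM.

    /* Minimal real-mode DOS program: prints 'A' via DOS function 02h.
       Assumes a 16-bit DOS compiler (e.g. Turbo C) providing <dos.h>. */
    #include <dos.h>

    int main(void)
    {
        union REGS r;
        r.h.ah = 0x02;        /* DOS function 02h: write character to stdout */
        r.h.dl = 'A';         /* character to write */
        int86(0x21, &r, &r);  /* INT 21h: call into DOS itself -- only real
                                 mode (or an emulator such as NTVDM) has a
                                 handler installed for this interrupt */
        return 0;
    }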

Of course, if your program does not require any interaction with the OS, you should be able to run it. In terms of opcodes and the instruction set, it is true that 32-bit x86 is a superset of 16-bit x86; it's just that the environment in which the code runs is completely different.
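
One simple way to see that difference from C: compile the same source as a 16-bit real-mode program and as a 32-bit program, and the reported integer and pointer widths differ. The 16-bit values in the comments below are typical for a small-memory-model DOS compiler; the exact pointer size of a 16-bit build depends on the memory model.

    #include <stdio.h>

    int main(void)
    {
        /* Typical results:
           16-bit DOS compiler (small model): sizeof(int) = 2, sizeof(void*) = 2
           32-bit compiler:                   sizeof(int) = 4, sizeof(void*) = 4
           The source is identical; the "bitness" lives in the compilation
           target and the environment that loads the resulting binary. */
        printf("sizeof(int)   = %u\n", (unsigned)sizeof(int));
        printf("sizeof(void*) = %u\n", (unsigned)sizeof(void *));
        return 0;
    }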


The only difference between 32-bit and 16-bit mode is the meaning and usage of the operand-size and address-size prefixes (see the sketch after the links below).

See also:

- what is meant by 32-bit application?
- Operand size prefix in 16-bit mode
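
As a rough illustration of that claim, the table in the comments below shows how the same bytes decode under each mode's default operand size (the encodings follow the x86 instruction set; the C program itself only holds the bytes and prints their lengths):

    #include <stdio.h>

    int main(void)
    {
        /* 0x66 is the operand-size override prefix. The same opcode byte
           (0xB8) means different things depending on the mode's default:

             bytes               16-bit mode           32-bit mode
             B8 34 12            mov ax, 0x1234        (start of an imm32 form)
             66 B8 78 56 34 12   mov eax, 0x12345678   --
             B8 78 56 34 12      --                    mov eax, 0x12345678
             66 B8 34 12         --                    mov ax, 0x1234
        */
        unsigned char mov_eax_16[] = { 0x66, 0xB8, 0x78, 0x56, 0x34, 0x12 };
        unsigned char mov_eax_32[] = { 0xB8, 0x78, 0x56, 0x34, 0x12 };

        printf("mov eax, imm32: %u bytes in 16-bit mode, %u in 32-bit mode\n",
               (unsigned)sizeof mov_eax_16, (unsigned)sizeof mov_eax_32);
        return 0;
    }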


There's a related discussion (running 16-bit programs on a 64-bit OS) over at Super User here.
