Is Virtual memory really useful all the time?
Virtual memory is a core concept used by modern operating systems, but I got stuck answering a question and was not sure about it. Here is the question:
Suppose there are only a few applications running on a machine, such that the physical memory of the system is larger than the memory required by all of them. To support virtual memory, the OS has to do a lot of work. So if the running applications all fit in physical memory, is virtual memory really needed?
(Furthermore, the applications running together will always fit in RAM.)
Even when the memory usage of all applications fits in physical memory, virtual memory is still useful. VM can provide these features:
- Privileged memory isolation (no app can touch the kernel or memory-mapped hardware devices)
- Interprocess memory isolation (one app can't see another app's memory)
- Static memory addresses (e.g. every app has main() at address 0x08000000)
- Lazy memory (e.g. pages in the stack are allocated and zero-filled when first accessed)
- Redirected memory (e.g. memory-mapped files)
- Shared program code (if more than one instance of a program or library is running, its code only needs to be stored in memory once)
While not strictly needed in this scenario, virtual memory is about more than just providing "more" memory than is physically available (swapping). For example, it helps avoid memory fragmentation (from an application's point of view), and depending on how dynamic/shared libraries are implemented, it can help avoid relocation (relocation is when the dynamic linker needs to adapt pointers in a library or executable that was just loaded).
A few more points to consider:
- Buggy apps that don't handle failures in the memory allocation code
- Buggy apps that leak allocated memory
Virtual memory reduces the severity of these bugs: large allocations rarely fail up front, a leaking process's untouched pages can be paged out, and isolation keeps its misbehavior from corrupting other processes.
The other replies list valid reasons why virtual memory is useful, but I would like to answer the question more directly: no, virtual memory is not needed in the situation you describe, and not using virtual memory can be the right trade-off in such situations.
Seymour Cray took the position that "virtual memory leads to virtual performance," and most (all?) Cray vector machines lacked virtual memory. This usually leads to higher performance at the process level (no address translation needed, processes are contiguous in RAM) but can lead to poorer resource usage at the system level (the OS cannot fully utilize RAM, since it gets fragmented at the process level).
So if a system is targeting maximum performance (as opposed to maximum resource utilization) skipping virtual memory can make sense.
If you have experienced the severe performance (and stability) problems often seen on modern Unix-based HPC cluster nodes when users oversubscribe RAM and the system starts paging to disk, you can feel a certain sympathy for the Cray model, where a process either starts and runs at full performance or does not start at all.