Is stack memory contiguous?
How does the compiler ensure that stack memory is contiguous? Does it cause the memory to be moved every time while the program is running, or does it reserve the stack memory the program needs before running it?
The stack for a given thread is often contiguous in virtual memory (on Linux and similar systems, and in user mode in Windows). The Windows kernel (in Windows Vista and above) and z/OS allow discontiguous stacks in virtual memory, and GCC 4.6 will also allow that. The compiler does not need to move the stack around at all, even on the systems that allow discontiguous virtual addresses for the stack; they just change where new parts are allocated. The operating system maps virtual pages to whatever physical pages it likes, though, so even a stack that is contiguous in virtual memory may not be contiguous in physical memory.
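If you want to see this on Linux, a minimal sketch (Linux-specific, assuming a typical glibc environment) is to look for the "[stack]" entry in /proc/self/maps, which shows the main thread's stack as a single contiguous range of virtual addresses:

```c
/* Minimal Linux-specific sketch: print the lines of /proc/self/maps
 * that describe the main thread's stack.  On a typical Linux system
 * the stack appears as one contiguous "[stack]" virtual range. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *maps = fopen("/proc/self/maps", "r");
    if (!maps) {
        perror("fopen");
        return 1;
    }

    char line[512];
    while (fgets(line, sizeof line, maps)) {
        /* Each line looks like: start-end perms offset dev inode path */
        if (strstr(line, "[stack]"))
            printf("%s", line);   /* one contiguous virtual range */
    }
    fclose(maps);
    return 0;
}
```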
There is no requirement for the stack to be contiguous in the language, the OS, or the hardware.
I challenge anybody to cite a reference that explicitly says this is a requirement.
Now, a lot of implementations do use contiguous memory because it is simple. This is also how the stack concept is taught to CS students (the stack grows down, the heap grows up). But there is no requirement to do this. I believe that MS even experimented with placing stack frames in random locations in the heap to prevent attacks that used deliberate stack-smashing techniques.
The only real requirement is that the frames are linked, allowing the stack to push/pop frames as scopes are entered/left.
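To make that concrete, here is a hedged sketch of a call stack built purely out of linked, heap-allocated frames. This is not how C compilers normally implement calls, and the struct and helper names are made up for illustration; it just shows that linkage, not contiguity, is what the abstraction needs:

```c
/* Sketch only: each "frame" is a heap-allocated record linked to its
 * caller's record, roughly what an interpreter or a split-stack runtime
 * might do.  Frames need not be adjacent in memory at all. */
#include <stdio.h>
#include <stdlib.h>

struct frame {
    const char   *function;  /* who this activation belongs to */
    int           local;     /* a pretend local variable        */
    struct frame *caller;    /* link to the previous frame      */
};

static struct frame *push_frame(struct frame *caller, const char *fn)
{
    struct frame *f = malloc(sizeof *f);  /* frames need not be adjacent */
    f->function = fn;
    f->local = 0;
    f->caller = caller;
    return f;
}

static struct frame *pop_frame(struct frame *f)
{
    struct frame *caller = f->caller;
    free(f);
    return caller;
}

int main(void)
{
    struct frame *top = push_frame(NULL, "main");
    top = push_frame(top, "g");           /* main calls g */
    top = push_frame(top, "h");           /* g calls h    */

    /* Walk the links, exactly like unwinding a conventional stack. */
    for (struct frame *f = top; f; f = f->caller)
        printf("%s at %p\n", f->function, (void *)f);

    while (top)                           /* return from h, g, main */
        top = pop_frame(top);
    return 0;
}
```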
But this is all orthogonal to the original question.
The compiler does not try to force the stack into contiguous memory; nothing at the language level requires the stack to be contiguous.
How is the stack usually implemented?
If this had been the question, you would get a more detailed and accurate answer from the community.
You have your memory address space; let's say it runs from 1 to 100. You allocate your stack from 1 upwards and your heap from 100 downwards. OK so far?
Due to the very nature of the stack, it's always compact (it has no holes). That happens because everything on the stack is the context of some function that was called. Whenever a function exits, its context is removed from the top of the stack and we fall back to the previous function. I think you can understand it well if you get a debugger and just follow the function calls while keeping in mind what the stack must look like.
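If you would rather not fire up a debugger, a small sketch of the same idea (implementation-dependent; compile without optimisation so frames are not elided) is to print the address of a local at each call depth. On a typical system the addresses form one compact run that grows with each nested call and is reused after each return:

```c
/* Sketch: print the address of a local variable at successive call
 * depths.  On a typical implementation the addresses form one compact
 * run with no holes, and the same addresses are reused once the calls
 * have returned. */
#include <stdio.h>

static void nest(int depth)
{
    int local;                                  /* lives in this call's frame */
    printf("depth %d: &local = %p\n", depth, (void *)&local);
    if (depth < 3)
        nest(depth + 1);
}   /* returning pops the frame; the memory is reused by the next call */

int main(void)
{
    nest(0);
    nest(0);   /* the second run typically prints the same addresses */
    return 0;
}
```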
The heap, on the other hand, is not so well behaved. Let's say we have reserved memory from 70 to 100 for the heap. We may allocate a block of 4 bytes there and it might go from 70 to 74; then we allocate 4 more bytes, and now we have memory allocated from 70 to 78. But that memory may be deallocated at any point in the program, so you might deallocate the 4 bytes you allocated at the beginning, thus creating a hole.
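Here is a minimal sketch of that hole (real allocators round sizes up and add bookkeeping, so the actual addresses will differ from the toy numbers above):

```c
/* Sketch: two small allocations followed by freeing the first.  The
 * freed bytes stay inside the heap region but are no longer in use,
 * so the heap becomes fragmented in a way the stack never does.
 * A real allocator may hand the hole back on the next request. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char *a = malloc(4);       /* e.g. the "70 to 74" block above */
    char *b = malloc(4);       /* e.g. the "74 to 78" block       */
    printf("a = %p\nb = %p\n", (void *)a, (void *)b);

    free(a);                   /* leaves a hole below b           */

    char *c = malloc(4);       /* the allocator may reuse the hole */
    printf("c = %p\n", (void *)c);

    free(b);
    free(c);
    return 0;
}
```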
That's how things happen in your address space. The kernel keeps a table that maps pages of the address space to pages in real memory. As you have probably noticed, you can't hope to have everything set up that nicely when you have more than one program running. So what the kernel does is make each process think the whole address space is contiguous memory (let's not think about memory-mapped devices for now), even though it might be mapped non-contiguously in physical memory.
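If you are curious, a Linux-specific sketch of that table in action is to read /proc/self/pagemap, which exposes one 64-bit entry per virtual page (bit 63 says whether the page is present, the low 55 bits hold the physical frame number). Note that unprivileged processes may see the frame number as 0 on modern kernels, so run it as root to see real values:

```c
/* Sketch: look up where one virtual page of this process is mapped by
 * reading its entry in /proc/self/pagemap. */
#include <stdio.h>
#include <stdint.h>
#include <unistd.h>
#include <fcntl.h>

int main(void)
{
    int local = 42;                       /* something on the stack */
    long page = sysconf(_SC_PAGESIZE);
    uintptr_t vaddr = (uintptr_t)&local;

    int fd = open("/proc/self/pagemap", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    uint64_t entry;
    off_t offset = (off_t)(vaddr / page) * sizeof entry;
    if (pread(fd, &entry, sizeof entry, offset) != (ssize_t)sizeof entry) {
        perror("pread");
        return 1;
    }
    close(fd);

    int present = (int)((entry >> 63) & 1);
    uint64_t pfn = entry & ((1ULL << 55) - 1);   /* bits 0-54: frame number */
    printf("virtual page 0x%lx -> %s, physical frame %llu\n",
           (unsigned long)(vaddr / page),
           present ? "present" : "not present",
           (unsigned long long)pfn);
    return 0;
}
```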
I hope I have given a reasonable overview of the subject, but there are probably better authors than me whom you'll enjoy reading much more. Look for texts on virtual memory; it is a nice starting point for understanding this. There are several books that describe it in greater or lesser detail. A few that I know of: Structured Computer Organization, by Tanenbaum; Operating System Concepts, by Silberschatz. I'm pretty sure Knuth discusses it in his algorithm books as well. If you feel adventurous, you might try reading about the x86 implementation in the Intel manuals.