Why doesn't my program crash when I write past the end of an array?
Why does the code below run without any crash at runtime?
Also, how far past the end I can write seems completely dependent on the machine/platform/compiler: on a 64-bit machine I can even use an index of up to 200. How would a segmentation fault in the main function get detected by the OS?
int main(int argc, char* argv[])
{
    int arr[3];
    arr[4] = 99;
}
Where does this buffer space come from? Is this the stack allocated to a process?
Something I wrote some time ago for educational purposes...
Consider the following C program:
int q[200];

int main(void) {
    int i;
    for (i = 0; i < 2000; i++) {
        q[i] = i;    /* deliberately writes far past the end of q[] */
    }
}
After compiling and executing it, a core dump is produced:
$ gcc -ggdb3 segfault.c
$ ulimit -c unlimited
$ ./a.out
Segmentation fault (core dumped)
Now use gdb to perform a post-mortem analysis:
$ gdb -q ./a.out core
Program terminated with signal 11, Segmentation fault.
[New process 7221]
#0 0x080483b4 in main () at s.c:8
8 q[i]=i;
(gdb) p i
$1 = 1008
(gdb)
Huh, the program didn't segfault when we wrote outside the 200 items allocated; instead it crashed at i=1008. Why?
Enter pages.
One can determine the page size in several ways on UNIX/Linux. One way is to use the system function sysconf() like this:
#include <stdio.h>
#include <unistd.h> // sysconf(3)

int main(void) {
    printf("The page size for this system is %ld bytes.\n",
           sysconf(_SC_PAGESIZE));
    return 0;
}
which gives the output:
The page size for this system is 4096 bytes.
or one can use the command-line utility getconf like this:
$ getconf PAGESIZE
4096
post mortem
It turns out that the segfault occurs not at i=200 but at i=1008; let's figure out why. Start gdb to do some post-mortem analysis:
$ gdb -q ./a.out core
Core was generated by `./a.out'.
Program terminated with signal 11, Segmentation fault.
[New process 4605]
#0 0x080483b4 in main () at seg.c:6
6 q[i]=i;
(gdb) p i
$1 = 1008
(gdb) p &q
$2 = (int (*)[200]) 0x804a040
(gdb) p &q[199]
$3 = (int *) 0x804a35c
q ended at address 0x804a35c; that is, the last element, q[199], starts at that address. The page size is, as we saw earlier, 4096 bytes, and the 32-bit word size of the machine means that a virtual address breaks down into a 20-bit page number and a 12-bit offset.
q[] ended in virtual page number 0x804a = 32842, at offset 0x35c = 860. Since q[199] occupies bytes 860 to 863 of that page, there were still 4096 - 864 = 3232 bytes left on the page of memory on which q[] was allocated. That space can hold 3232 / 4 = 808 integers, and the code treated it as if it contained elements of q at positions 200 to 1007.
We all know that those elements don't exist, and the compiler didn't complain; neither did the hardware, since we have write permission to that page. Only when i=1008 did q[i] refer to an address on a different page, one for which we don't have write permission; the virtual memory hardware detected this and triggered the segfault.
An integer is stored in 4 bytes, meaning that this page has room for 808 (3232/4) additional fake elements, so it is still perfectly "legal" to access elements q[200], q[201], all the way up to element 199+808=1007 (q[1007]) without triggering a segfault. When accessing q[1008] you enter a new page, for which the permissions are different.
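The same arithmetic can be reproduced programmatically. Here is a minimal sketch of my own (not part of the original write-up) that uses sysconf(_SC_PAGESIZE) to compute how many bytes remain on the page after q[] and which index first touches the next page; whether that index actually faults depends on what the linker places after q and on whether the following page happens to be mapped:

#include <stdio.h>
#include <stdint.h>
#include <unistd.h>   /* sysconf(3) */

int q[200];

int main(void) {
    long page = sysconf(_SC_PAGESIZE);
    uintptr_t end  = (uintptr_t)&q[200];                          /* first byte past q[]         */
    uintptr_t next = (end + page - 1) & ~(uintptr_t)(page - 1);   /* next page boundary          */
    size_t spare = (size_t)(next - end);                          /* bytes left on q's last page */

    printf("page size: %ld, spare bytes after q[]: %zu\n", page, spare);
    printf("first index on the next page: %zu\n", (size_t)200 + spare / sizeof(int));
    return 0;
}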
Since you're writing outside the boundaries of your array, the behaviour of your code is undefined.
It is the nature of undefined behaviour that anything can happen, including lack of segfaults (the compiler is under no obligation to perform bounds checking).
You're writing to memory you haven't allocated but that happens to be there and that -- probably -- is not being used for anything else. Your code might behave differently if you make changes to seemingly unrelated parts of the code, to your OS, compiler, optimization flags etc.
In other words, once you're in that territory, all bets are off.
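If you want the out-of-bounds write reported instead of silently "working", one concrete option (my suggestion, not part of the answer above) is to build the snippet with AddressSanitizer, which both gcc and clang support via -fsanitize=address; built without optimization, the stray store is kept and flagged as a stack-buffer-overflow:

/* A sketch: build with
 *   gcc -g -O0 -fsanitize=address overflow.c && ./a.out
 * AddressSanitizer should then report a stack-buffer-overflow at the marked line. */
int main(void)
{
    int arr[3];
    arr[4] = 99;   /* out-of-bounds write: undefined behaviour */
    return 0;
}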
Exactly when and where a local-variable buffer overflow crashes depends on a few factors:
- The amount of data on the stack already at the time the function is called which contains the overflowing variable access
- The amount of data written into the overflowing variable/array in total
Remember that stacks grow downwards, i.e. process execution starts with a stack pointer close to the end of the memory to be used as stack. It doesn't start at the last mapped word, though, because the system's initialization code may decide to pass some sort of "startup info" to the process at creation time, and often does so on the stack.
If the total amount of data written into a buffer on the stack is larger than the total amount of stack space used previously (by callers / initialization code / other variables), then you'll get a crash at whatever memory access first runs beyond the top (beginning) of the stack. The crashing address will be just past a page boundary: SIGSEGV due to accessing memory beyond the top of the stack, where nothing is mapped.
If that total is less than the size of the used part of the stack at that time, then it'll work just fine and crash later; in fact, on platforms that store return addresses on the stack (which is true for x86/x64), it crashes when returning from your function. That's because the CPU instruction ret actually takes a word from the stack (the return address) and redirects execution there. If, instead of the expected code location, this address contains whatever garbage was written over it, an exception occurs and your program dies.
That is the usual failure mode: a crash when returning from the function that contained the overflowing code.
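As an illustration of that second case, here is a small sketch of my own (not code from the answer): overflowing a small local buffer by enough bytes usually clobbers the saved return address, so the visible crash happens at the return rather than at the offending store. Note that modern compilers often add a stack protector and abort with "stack smashing detected" instead; building with -fno-stack-protector shows the raw crash on return:

#include <string.h>

static void victim(void)
{
    char buf[16];
    memset(buf, 'A', 64);   /* writes 48 bytes past the end of buf, over saved data / return address */
}                           /* the crash typically happens here, when victim() returns */

int main(void)
{
    victim();
    return 0;
}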
To illustrate this: when main() is called, the stack looks like this (on a 32-bit x86 UNIX program):
[ esp ] <return addr to caller> (which exits/terminates process)
[ esp + 4 ] argc
[ esp + 8 ] argv
[ esp + 12 ] envp <third arg to main() on UNIX - environment variables>
[ ... ]
[ ... ] <other things - like actual strings in argv[], envp[]>
[ END ] PAGE_SIZE-aligned stack top - unmapped beyond
When main() starts, it will allocate space on the stack for various purposes, amongst others to host your to-be-overflowed array. This will make it look like:
[ esp ] <current bottom end of stack>
[ ... ] <possibly local vars of main()>
[ esp + X ] arr[0]
[ esp + X + 4 ] arr[1]
[ esp + X + 8 ] arr[2]
[ esp + X + 12 ] <possibly other local vars of main()>
[ ... ] <possibly other things (saved regs)>
[ old esp ] <return addr to caller> (which exits/terminates process)
[ old esp + 4 ] argc
[ old esp + 8 ] argv
[ old esp + 12 ] envp <third arg to main() on UNIX - environment variables>
[ ... ]
[ ... ] <other things - like actual strings in argv[], envp[]>
[ END ] PAGE_SIZE-aligned stack top - unmapped beyond
This means you can happily access way beyond arr[2].
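To get a feel for where these things sit on your own machine, here is a small sketch of my own (output is illustrative only; the exact layout depends on compiler, optimization level and ABI, and __builtin_frame_address is a GCC/Clang builtin):

#include <stdio.h>

int main(int argc, char **argv)
{
    int arr[3];

    /* All of these live on or near the stack; comparing the addresses shows
       how the local array, the arguments and the frame relate to each other. */
    printf("&arr[0]                    = %p\n", (void *)&arr[0]);
    printf("&argc                      = %p\n", (void *)&argc);
    printf("argv                       = %p\n", (void *)argv);
    printf("__builtin_frame_address(0) = %p\n", __builtin_frame_address(0));
    return 0;
}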
For a taster of different crashes resulting from buffer overflows, attempt this one:
#include <stdlib.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int i, arr[3];

    /* deliberately overflow arr[] by as many elements as given on the command line */
    for (i = 0; i < atoi(argv[1]); i++)
        arr[i] = i;

    do {
        printf("argv[%d] = %s\n", argc, argv[argc]);
    } while (--argc);

    return 0;
}
and see how different the crash will be when you overflow the buffer by a little (say, 10 elements) compared to when you overflow it beyond the end of the stack. Try it with different optimization levels and different compilers. Quite illustrative, as it shows both misbehaviour (it won't always print all of argv[] correctly) as well as crashes in various places, maybe even endless loops (if, e.g., the compiler places i or argc on the stack and the code overwrites it during the loop).
By using an array type, which C++ has inherited from C, you are implicitly asking not to have a range check.
If you try this instead:

#include <vector>

int main(int argc, char* argv[])
{
    std::vector<int> arr(3);
    arr.at(4) = 99;
}
you will get an exception thrown.
So C++ offers both a checked and an unchecked interface. It is up to you to select the one you want to use.
That's undefined behavior; you simply don't observe any problems. The most likely reason is that you are overwriting an area of memory that the program's behavior doesn't otherwise depend on; that memory is technically writable (the stack is about 1 megabyte in most cases), so you see no error indication. You shouldn't rely on this.
To answer your question why it goes "undetected": most C compilers do not analyse at compile time what you are doing with pointers and memory, so nobody notices at compile time that you've written something dangerous. At runtime, there is also no controlled, managed environment that babysits your memory references, so nobody stops you from accessing memory that you aren't entitled to. The memory happens to be allocated to you at that point (because it's just part of the stack, not far from your function), so the OS doesn't have a problem with it either.
If you want hand-holding while you access your memory, you need a managed environment like Java or CLI, where your entire program is run by another, managing program that looks out for those transgressions.
Your code has Undefined Behavior. That means it can do anything or nothing. Depending on your compiler and OS etc., it could crash.
That said, with many if not most compilers your code will not even compile. That's because you have void main, while both the C standard and the C++ standard require int main.
About the only compiler that's happy with void main is Microsoft's Visual C++.
That's a compiler defect, but since Microsoft has lots of example documentation and even code generation tools that generate void main, they will likely never fix it. However, consider that writing Microsoft-specific void main is one character more to type than standard int main. So why not go with the standards?
Cheers & hth.,
A segmentation fault occurs when a process tries to write to a page of memory that it doesn't own; unless you run a long way over the end of your buffer, you aren't going to trigger a segfault.
The stack is located somewhere in one of the blocks of memory owned by your application. In this instance you have just been lucky and haven't overwritten anything important; perhaps you have only overwritten some unused memory. If you were a bit more unlucky, you might have overwritten the stack frame of another function on the stack.
So, apparently, when you ask the computer to allocate a certain number of bytes, say char array[10], the memory around it often happens to be writable as well, so writing slightly past the end doesn't immediately trigger a segfault. It's still not safe to use that memory, though, and reaching far enough beyond it will eventually cause the program to crash.