
Shouldn't this code crash?

int *p;
while (true)
{
    p = new int;
}

Shouldn't this code crash once it runs out of memory? I have tried printing out the value of p, that is, the address of the memory allocated for p, and it seems to keep increasing, yet there is no crash.

Why is this so?


This solution is like trying to crash a car into a telephone pole down the street while driving 1 MPH. It will happen eventually, but if you want fast results you need to up the speed a bit.

int *p;
while (true) { 
  p = new int[1024*1024*1024];
}

My answer, though, is predicated on your code base using the standard allocator, which throws std::bad_alloc on a failed allocation. There are other allocators that simply return NULL on a failed allocation; that kind of allocator will never crash this code, because a failed allocation is not considered fatal. You'd have to check the allocation's return value for NULL in order to detect the error.
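The nothrow form of new behaves exactly this way. A minimal sketch (my illustration, not from the original code) showing the manual check such an allocator requires:

#include <cstdio>
#include <new>  // for std::nothrow

int main() {
    while (true) {
        // The nothrow form returns nullptr on failure instead of
        // throwing std::bad_alloc, so you must check it yourself.
        int *p = new (std::nothrow) int[1024*1024*1024];
        if (p == nullptr) {
            std::puts("allocation failed: no exception, no crash");
            return 1;
        }
    }
}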

One other caveat: if you're running on a 64-bit system, this could take considerably longer to crash due to the increased size of the address space. Instead of the telephone pole being down the street, it's across the country.


Looks like you won't close this question until you see a crash :)

On Unix-like operating systems you can restrict the amount of virtual memory available to a process using the ulimit command (note that bash's ulimit -v takes kilobytes, so the value below is 1 GB, not 1 MB). With that cap in place I saw the desired result in about 5 seconds:

$ ulimit -v $((1024*1024)) # cap VM available to the process (units are KB)

$ ulimit -v                # check it.
1048576

$ time ./a.out             # time your executable.
terminate called after throwing an instance of 'St9bad_alloc'
  what():  std::bad_alloc
Aborted

real    0m5.502s
user    0m4.240s
sys  0m1.108s
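If you'd rather not depend on the shell, the same cap can be set from inside the program with POSIX setrlimit. A rough sketch (assuming a Unix-like system; the 16 MB figure is arbitrary):

#include <sys/resource.h>

int main() {
    // Cap this process's virtual address space; this is the
    // programmatic equivalent of `ulimit -v`. RLIMIT_AS is in bytes.
    struct rlimit rl;
    rl.rlim_cur = 16 * 1024 * 1024;  // soft limit
    rl.rlim_max = 16 * 1024 * 1024;  // hard limit
    setrlimit(RLIMIT_AS, &rl);

    int *p;
    while (true) {
        p = new int;  // std::bad_alloc arrives much sooner now
    }
}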


Several small allocations are slower than one big allocation. For instance, it takes more time to allocate 4 bytes a million times than to allocate a million bytes 4 times.

Try to allocate bigger chunks of memory at once:

int *p;
while (true)
{
    p = new int[1024*1024];
}


You did not wait long enough.

[If you want to see how it's progressing, print something every 1,000,000 loops or so. It will fail eventually.]
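Something along these lines (a sketch; the interval is arbitrary):

#include <cstdio>

int main() {
    int *p;
    for (unsigned long long i = 0; ; ++i) {
        p = new int;
        // Print a progress line every million allocations so you can
        // watch the addresses climb until the allocator finally fails.
        if (i % 1000000 == 0) {
            std::printf("%llu allocations, latest at %p\n", i, (void *)p);
        }
    }
}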


I'm thinking that the compiler is doing some optimization and does not really allocate the memory. I ran the above examples (under valgrind) a few times and saw that even though there were allocations, they were all allocations of 0 bytes.

Try this and your program will get killed (at least on Unix):

#include <cstdio>

int main() {
    int *p;

    while (true) {
        p = new int[1024*1024];     // allocate 4 MB worth of ints
        p[0] = 0;
        printf("%p\n", (void *)p);  // report the address of each block
        // Touch every element so the pages actually get committed.
        for (unsigned j = 1; j < 1024*1024; j++) {
            p[j] = p[j-1] + 1;
        }
    }

    return 0;
}

You see, the only difference is that I actually use the allocated memory.

So it doesn't really crash; I guess the OS prevents the process from crashing by simply killing it instead (not 100% sure about this).


Another possibility: most OSes perform optimistic allocation. When you malloc, say, 500 MB, that memory may not actually be reserved until your program attempts to read from or write to it.

Linux in particular is notorious for over-promising memory and then relying on the OOM (out-of-memory) killer to whack processes that try to collect on the kernel's promises.
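If you want to watch this happen, a sketch like the following (assuming Linux's default overcommit behavior) allocates big blocks and then writes to every byte, forcing the kernel to commit real pages:

#include <cstddef>
#include <cstring>

int main() {
    const std::size_t chunk = 500u * 1024 * 1024;  // ~500 MB per pass
    while (true) {
        char *p = new char[chunk];
        // new may only reserve address space under overcommit; writing
        // to the block forces the kernel to back it with real memory,
        // eventually provoking the OOM killer.
        std::memset(p, 1, chunk);
    }
}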
