Can't allocate 2-4 GB of RAM with new[] / C++ / Linux / x86_64
With this simple test, on a Linux box with 4 GB of RAM, 0 bytes of swap, and the CPU in x86_64 mode, I can't allocate an array larger than 1 GB.
Source:
#include <cstdio>

int main()
{
    for (int i = 0; i < 33; i++) {
        char* a = new char[1 << i];
        *a = 1;
        delete[] a;
        printf("%d\n", i);
        fflush(stdout);
    }
}
Run:
$ file test
test: ELF 64-bit LSB executable, AMD x86-64, version 1 (SYSV)
$ ./test
...
24
25
26
27
28
29
30
terminate called after throwing an instance of 'std::bad_alloc'
what(): St9bad_alloc
Aborted
There is no ulimit for memory:
virtual memory (kbytes, -v) unlimited
data seg size (kbytes, -d) unlimited
Why the error?
Glibc is 2.3.4, kernel is 2.6.9
UPDATE: The compiler is gcc 4.1.
Thanks! The test definitely has an error: with 1ull << i it gets up to 31 (2 GB). That error was unintended. But the code that actually fails is

int* some_array[2][25];
for (int j = 0; j < 2; j++)
    for (int i = 0; i < 25; i++)
        some_array[j][i] = new int[1 << 24];

so there is no sign overflow in the real code.
The size of int is 4 bytes:
$ echo 'main(){return sizeof(int);}'| gcc -x c - && ./a.out; echo $?
4
so every request is for (1<<24) * 4 = 1<<26 bytes; the total memory required is 2 * 25 * (1<<26) = 3355443200 bytes, plus 50 * sizeof(pointer) for some_array and 50 * (the per-allocation overhead of new[]).
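As a sanity check on that arithmetic, here is a small illustrative snippet (not part of the original program) that recomputes the total:

#include <cstdio>

int main()
{
    // Each request: (1 << 24) ints of 4 bytes each = 1 << 26 bytes.
    unsigned long long per_request = (1ull << 24) * sizeof(int);
    // The nested loops make 2 * 25 = 50 such requests.
    unsigned long long total = 2ull * 25 * per_request;
    // Prints 67108864 and 3355443200 (~3.1 GiB), matching the figure above.
    printf("per request: %llu bytes, total: %llu bytes\n", per_request, total);
}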
A naked constant in C is an int. A signed int. So 1 << 31 is -2147483648, because

1 << 31 = 0x80000000 = -2147483648

when that bit pattern is interpreted as a 32-bit signed int. Try (size_t)1 << i instead.
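For example, here is the test loop with the shift widened so the size never goes negative (a sketch; note that on a 4 GB box with no swap, the largest sizes may still legitimately fail):

#include <cstdio>
#include <cstddef>

int main()
{
    for (int i = 0; i < 33; i++) {
        // (size_t)1 is 64 bits wide on x86_64, so shifting by 31 or 32
        // no longer overflows into a negative int.
        char* a = new char[(size_t)1 << i];
        *a = 1;
        delete[] a;
        printf("%d\n", i);
        fflush(stdout);
    }
}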
EDIT: I see in other answers that the issue is most probably the number passed to new[] becoming negative. I agree that that is most probably the case; I am leaving this answer only because I think it contains information that might be relevant in similar cases, where the issue is not a call to new[] with a negative number.
The first question that comes to mind is whether you have enough available memory. With 4 GB of RAM and no swap, the total amount of memory that can be allocated to all processes and the kernel is 4 GB.
Note that even if you had more than 1 GB of memory available to the process, malloc and free (which new[] and delete[] call underneath) might not give the memory back to the system; they might in fact keep each of the acquired/released blocks, so that the memory footprint of your program could go as high as 2 GB (you would have to check this against the malloc implementation in your libc, as many implementations do give big blocks back).
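For what it's worth, glibc's allocator can be told to serve big requests via mmap, which does return memory to the system on free. A minimal sketch, assuming glibc (mallopt and M_MMAP_THRESHOLD are glibc extensions declared in <malloc.h>):

#include <malloc.h>
#include <cstdlib>

int main()
{
    // Ask glibc to satisfy any allocation of 1 MiB or more with mmap,
    // so that free() hands the block straight back to the kernel.
    mallopt(M_MMAP_THRESHOLD, 1024 * 1024);
    void* p = malloc(100u * 1024 * 1024);  // 100 MiB, served via mmap
    free(p);                               // returned to the OS immediately
    return 0;
}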
Finally, when you request an array of 1 GB you are requesting 1 GB of contiguous memory, and it might just be the case that you have much more memory available but no single free block large enough for that particular request.
What are the values of /proc/sys/vm/overcommit_memory and /proc/sys/vm/overcommit_ratio on your system? If you have memory overcommitting turned off, you may not be able to allocate all the memory on your system. With overcommit turned on (set /proc/sys/vm/overcommit_memory to 0), you should be able to allocate essentially unlimited-size arrays (certainly tens of GB) on a 64-bit system.
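To check these settings programmatically (a minimal sketch; reading the files with cat on the command line does the same thing):

#include <cstdio>

int main()
{
    const char* files[] = { "/proc/sys/vm/overcommit_memory",
                            "/proc/sys/vm/overcommit_ratio" };
    for (int i = 0; i < 2; i++) {
        FILE* f = fopen(files[i], "r");
        if (!f)
            continue;
        int value = -1;
        // Each file holds a single integer setting.
        if (fscanf(f, "%d", &value) == 1)
            printf("%s = %d\n", files[i], value);
        fclose(f);
    }
}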
Although it is generally true that on a 64-bit machine you have plenty of address space in which to allocate several GB of contiguous virtual memory, you are trying to allocate it using new/malloc. new/malloc are traditionally not requests for just any memory, but for a specific part of memory that is grown with the {s,}brk system call, which basically moves the end of the process's data segment. I think you should allocate such a large amount of memory using mmap, which leaves the OS free to choose any address block.
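A minimal sketch of that approach (anonymous mapping, 2 GB; error handling abbreviated):

#include <sys/mman.h>
#include <cstdio>

int main()
{
    size_t size = (size_t)2 * 1024 * 1024 * 1024;  // 2 GB
    // MAP_ANONYMOUS asks for zero-filled memory not backed by any file;
    // passing NULL as the address lets the kernel pick any free range
    // in the 64-bit address space.
    void* p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    ((char*)p)[0] = 1;   // touch the first page
    munmap(p, size);
    return 0;
}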