Large dynamic array in C++
Short problem:
#include <iostream>
#include <cstdio>   // for getchar()
using namespace std;

int main()
{
    double **T;
    long int L_size;
    long int R_size = 100000;
    long int i, j;

    cout << "enter L_size: ";
    cin >> L_size;
    cin.clear();
    cin.ignore(100, '\n');

    cout << L_size * R_size << endl;                  // number of elements
    cout << sizeof(double) * L_size * R_size << endl; // bytes required

    // allocate L_size rows of R_size doubles each
    T = new double *[L_size];
    for (i = 0; i < L_size; i++)
    {
        T[i] = new double[R_size];
    }

    cout << "press enter to fill array" << endl;
    getchar();

    // touch every element so the pages are actually committed
    for (i = 0; i < L_size; i++)
    {
        for (j = 0; j < R_size; j++)
        {
            T[i][j] = 10.0;
        }
    }
    cout << "allocated" << endl;

    // release row by row, then the array of row pointers
    for (i = 0; i < L_size; i++)
    {
        delete[] T[i];
    }
    delete[] T;

    cout << "press enter to close" << endl;
    getchar();
    return 0;
}
With 2 GB of RAM (on a 32-bit OS) I can't make it work with L_size = 3000, which is pretty obvious since it would need approx. 2.4 GB (3000 * 100000 * 8 bytes).
But when I start two copies of the above program, each with L_size = 1500, it works: really slowly, but finally both print "allocated" in the console.
So the question is: how is that possible? Is it related to virtual memory? Is it possible to have one big array stored in virtual memory while operating on another, within one program?
Thanks.
Yes. On a 32-bit OS, each process is limited to roughly 2 GB of user-mode address space, so a single process cannot hold the whole 2.4 GB array. When you start two copies, each stays under its own per-process limit, and the operating system backs their combined demand with virtual memory. That is very, very slow (it is swapping to the hard drive), but it still functions.
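If you want to see that limit for yourself, here is a minimal sketch (not part of the original answer, and the 100 MB chunk size is just a convenient choice) that keeps allocating until new throws std::bad_alloc. On a 32-bit OS it typically gives up somewhere below 2 GB, no matter how much RAM or page file is available:

#include <iostream>
#include <vector>
#include <new>      // std::bad_alloc
using namespace std;

int main()
{
    vector<char*> chunks;
    const size_t chunk_bytes = 100 * 1024 * 1024; // 100 MB per allocation

    try
    {
        for (;;)
        {
            // new throws bad_alloc when the address space runs out
            chunks.push_back(new char[chunk_bytes]);
            cout << "reserved " << chunks.size() * 100 << " MB" << endl;
        }
    }
    catch (const bad_alloc&)
    {
        cout << "allocation failed after " << chunks.size() * 100 << " MB" << endl;
    }

    for (size_t k = 0; k < chunks.size(); k++)
        delete[] chunks[k];
    return 0;
}

One caveat: on some systems (e.g. Linux with overcommit enabled) new can succeed even when nothing physically backs the pages; memory is only committed when you write to it. That is why the fill loop in the question matters: writing 10.0 into every element forces the pages to actually be committed.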
Yes, it's virtual memory. With L_size = 1500, you can start the first instance and it will be able to allocate the required memory. When you start the second instance, the memory allocated by the first instance is paged out to disk, making room in RAM for the second instance.
The amount of memory you can allocate at any one time in any one process depends not only on the amount of physical RAM and virtual memory (page file size) you have available, but also on the width of your memory addresses. On a 64-bit machine you can allocate far more memory in a single process than on a 32-bit machine, whose address space tops out at 4 GB (often only 2 GB of that is available to user code).
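As a hedged illustration of that last point, the two-level array from the question could also be requested as one contiguous block using the nothrow form of new, which returns a null pointer instead of throwing when the request cannot be satisfied (the sizes below just mirror the question):

#include <iostream>
#include <new>      // std::nothrow
using namespace std;

int main()
{
    const long L_size = 3000;
    const long R_size = 100000;

    // A single contiguous block needs ~2.4 GB of *contiguous* address
    // space, which a 32-bit process essentially never has, but a
    // 64-bit process usually does.
    size_t n = static_cast<size_t>(L_size) * static_cast<size_t>(R_size);
    double* block = new (std::nothrow) double[n];

    if (block == 0)
        cout << "allocation failed: not enough contiguous address space" << endl;
    else
        cout << "allocation succeeded" << endl;

    delete[] block; // deleting a null pointer is safe
    return 0;
}

The row-by-row scheme in the question is friendlier to a 32-bit process because each ~800 KB row only needs a small free region, but on a 64-bit machine the address space is so large that even a single multi-gigabyte block is usually no problem.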