Is there a better/more efficient way to store these large arrays?
I'm opening a text file which can hold anywhere between 100 and 50,000 dataFrames, and assigning each line of the txt file to a dataFrame, where a dataFrame is defined as follows:
typedef struct {
    double x;
    double y;
    double z;
    double azimuth;
    double elevation;
    double roll;
} dataFrame;
I need the data to be accessible so that I can plot it in a graph in qwt (which means I'll also need to derive various other arrays from the data, e.g. for instantaneous velocity), but I'm a bit worried since I don't know how much this is going to slow down the system. Currently, I read the number of lines and then have
dataFrame* left;
dataFrame* right;
left = new dataFrame[lineCount/2];
right = new dataFrame[lineCount/2];
and then proceed to fill them as the data is read from the txt file.
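For reference, a minimal sketch of the kind of fill loop described here, assuming whitespace-separated values on each line and that lines alternate between left and right (the alternation is an assumption; the actual split isn't shown in the question):

#include <fstream>
#include <sstream>
#include <string>

// fills the two arrays allocated above from a whitespace-separated text file
void fillArrays(const char* path, dataFrame* left, dataFrame* right)
{
    std::ifstream in(path);
    std::string line;
    int i = 0;
    while (std::getline(in, line)) {
        std::istringstream ss(line);
        dataFrame f;
        ss >> f.x >> f.y >> f.z >> f.azimuth >> f.elevation >> f.roll;
        if (i % 2 == 0)          // assumption: even lines -> left, odd lines -> right
            left[i / 2] = f;
        else
            right[i / 2] = f;
        ++i;
    }
}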
If you only need fast indexing and you know the number of elements (lineCount) up front, nothing beats a good old array.*

If you also want fast appending, use a dynamic array such as std::vector or QVector.
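For example (a sketch, not the poster's code; lineCount is the variable from the question and frame is a placeholder for a parsed line): std::vector gives the same contiguous layout and indexing as the raw arrays, plus automatic growth:

#include <vector>

// same data as before, stored in dynamic arrays that manage their own memory
std::vector<dataFrame> left;
std::vector<dataFrame> right;
left.reserve(lineCount / 2);    // optional: avoids reallocations when the count is known
right.reserve(lineCount / 2);

// when filling, append instead of indexing:
left.push_back(frame);          // grows automatically if lineCount was only an estimate
// indexing still works for plotting: left[i].x, left[i].azimuth, ...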
If you want fast searching for an item by key, check out std::set, std::map, QSet, or QHash.
[*] Almost nothing. Post your alternatives in the comments.
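To illustrate the key-lookup case (hypothetical here, since the frames in the question have no obvious key; a line index is used as one, and someFrame is a placeholder):

#include <map>

std::map<int, dataFrame> framesByLine;                 // ordered key -> value container
framesByLine[42] = someFrame;                          // insert or overwrite by key
std::map<int, dataFrame>::const_iterator it = framesByLine.find(42);
if (it != framesByLine.end()) {
    // found: it->second holds the dataFrame stored under key 42
}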
Use a std::list<dataFrame>. It doesn't attempt to allocate contiguous memory. If you are looking for more background info, try to find a brief summary of basic data structures (array, linked list, various trees) and their trade-offs in memory/access/deletion/insertion performance.
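A minimal sketch of that suggestion (someFrame is a placeholder); note that a list trades random access for cheap appends and many small allocations instead of one big block:

#include <list>

std::list<dataFrame> frames;
frames.push_back(someFrame);   // O(1) append; each node is allocated separately

// sequential traversal for plotting; frames[i] is not available
for (std::list<dataFrame>::const_iterator it = frames.begin(); it != frames.end(); ++it) {
    // use it->x, it->y, it->azimuth, ...
}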
My guess is that your concern is memory usage, since you're talking about a large dataset. I don't think there is a much more memory-efficient way to store it than an array: the elements are stored contiguously, and the only overhead is the pointer to the array itself. Unless you're using the stack, that overhead is unavoidable, and you shouldn't use the stack for large datasets anyway because of its limited size.
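As a rough back-of-the-envelope check (assuming 8-byte doubles and no padding, which is typical for this struct): one dataFrame is 6 × 8 = 48 bytes, so even 50,000 frames come to only about 2.4 MB. For example:

#include <iostream>

// rough memory estimate; sizeof(dataFrame) is typically 6 * 8 = 48 bytes
std::cout << sizeof(dataFrame) << " bytes per frame\n";
std::cout << (50000 * sizeof(dataFrame)) / (1024.0 * 1024.0)
          << " MiB for 50,000 frames\n";   // roughly 2.3 MiB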