STL vector vs. array
As stated in the answer to my question, I will use vectors just to resize to N, then read and write the nth element, where n < N. If n gets too close to N, I will create another vector of size N+M, copy all the elements from the first vector to the second, and delete the first one. So if I am doing the memory management myself and no insertions or deletions take place, are there any advantages to using a vector instead of an array, particularly in this case?
P.S. Resizing and block copying will be needed rarely.
EDIT: As David Rodríguez - dribeas requested: it is a technical-analysis program. Historical stock prices are kept as OHLC bars in vectors, so I really do need to store the elements in vectors. There are also some other calculation classes, called indicators, that do calculations based on the stocks' prices. When a new price arrives via TCP, the stock first updates its bars and then immediately calls all of its related indicators' calculate methods, saying, "OK guys, my nth bar has been updated at this specific time. Go calculate yourselves."

All operations are task-based: a stock never updates itself before finishing the last update, and likewise an indicator never starts a calculation while the last one is still running. One task at a time. If new updates keep arriving too quickly, they can be cached as tasks. So a stock can wait for its last update to finish, and an indicator can similarly queue its tasks while it is being calculated, BUT a stock must not wait for its indicators to finish their work.

This is where the problem begins. When an update arrives, a stock first looks at its bar vector's size and checks whether it must be resized. If needed, it resizes its vectors, while some indicators may still be working on the previous update. An indicator may then reach into its stock's data while that data is being resized. So far I have not had any problems, because the indicator calculations complete very quickly, but I am concerned. As a solution, a stock could generate a second, larger bar vector and tell its indicators to use the second vector for upcoming calculations. Eventually, after a couple of seconds, all access to the first vector ceases and it can be deleted gracefully.
It sounds like the structure you really want is std::deque, which can be appended to in amortized constant time.

Actually, the resizing strategy you describe is exactly the one already used by std::vector. The precise strategy it uses (geometric capacity growth) means that appending is essentially O(1), but only when amortized over many appends. At any rate, there doesn't seem to be any reason to reinvent the wheel here: just use std::vector::push_back() or std::deque::push_back().
- Why don't you just resize the existing vector?
- If you can't resize the existing vector, then you can use an array; I don't see any pros to using std::vector in this case. For the copy you can use memcpy.
- You can use an array and resize it with realloc.
I read the original question, and I think that it might be worth handling the multithreading so that you have a guarantee that no thread will be accessing the container when you grow it.
On the particular question at hand, unless there is a strong reason not to, I would always go for an std::vector
over an array, if for no other reason than that many operations are already implemented and you do not need to manage the memory manually.
There is, in any case, a lot of information that you have not provided in either question: what the usage pattern is, what the consumer threads are doing with the data (do you really need to keep the elements in the vector? could the threads just pick a set of values and work with those?), and what synchronization mechanisms are in place (how do consumers get notified that there is extra data in the container? when does extra data arrive?). All these questions, together with their answers, might point toward other designs.
I'd say that in your case, at worst, the vector's going to be no better than a dynamically allocated raw array. Probably, it'll be more convenient.
As for your dilemma, this sounds like something that would be way easier with implicit memory management. You could store your incoming data in shared pointers. Then, indicators grab a shared pointer to the data they need whenever they start calculating. If new data comes in that would require invalidating the old data, just replace the shared pointer in the stock containing the old data with a shared pointer containing the new data. Any indicator still using the old data could continue using it because it still has a shared pointer to the old data (preventing the old data from being cleaned up), and the old data would get cleaned up when all the indicators using the data finished and reset their shared pointers.