Being Smart About Vector Memory Allocation
Let's say I have to iterate over a potentially very large vector of numbers and copy the even and odd elements into new, separate vectors. (The source vector may have any proportion of evens to odds; it could be all evens, all odds, or somewhere in-between.)
For simplicity, push_back is often used for this sort of thing:
for (std::size_t Index = 0; Index < Source.size(); Index++)
{
    if (Source[Index] % 2) Odds.push_back(Source[Index]);
    else Evens.push_back(Source[Index]);
}
However, I'm worried that this will be inefficient and harmful if it's used as part of the implementation for something like a sorting algorithm, where performance is paramount. QuickSort, for example, involves separating elements much like this.
You could use reserve() to allocate memory beforehand so only one allocation is needed, but then you have to iterate over the entire source vector twice - once to count how many elements will need to be sorted out, and once more for the actual copying.
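Something like this is what I have in mind - just a sketch, with SplitTwoPass and the int element type purely for illustration:

#include <cstddef>
#include <vector>

// Pass 1: count the evens so each output vector is allocated exactly once.
// Pass 2: do the actual copying with push_back into pre-reserved storage.
void SplitTwoPass(const std::vector<int>& Source,
                  std::vector<int>& Evens, std::vector<int>& Odds)
{
    std::size_t EvenCount = 0;
    for (std::size_t Index = 0; Index < Source.size(); Index++)
        if (Source[Index] % 2 == 0) EvenCount++;

    Evens.reserve(EvenCount);
    Odds.reserve(Source.size() - EvenCount);

    for (std::size_t Index = 0; Index < Source.size(); Index++)
    {
        if (Source[Index] % 2) Odds.push_back(Source[Index]);
        else Evens.push_back(Source[Index]);
    }
}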
You could, of course, allocate the same amount of space as the source vector's size, since neither new vector will need to hold more than that, but that seems somewhat wasteful.
Is there a better method that I'm missing? Is push_back() usually trusted to manage this sort of thing for the programmer, or can it become burdensome for sensitive algorithms?
I'm going to answer the question I think you really meant to ask, which is "should push_back() be avoided in the inner loops of heavy algorithms?" rather than what others seem to have read into your post, which is "does it matter if I call push_back before doing an unrelated sort on a large vector?" Also, I'm going to answer from my experience rather than spend time chasing down citations and peer-reviewed articles.
Your example is basically doing two things that add up to the total CPU cost: it's reading and operating on elements in the input vector, and then it has to insert the elements into the output vector. You're concerned about the cost of inserting elements because:
- push_back() is constant time (instantaneous, really) when a vector has enough space pre-reserved for an additional element, but slow when you've run out of reserved space.
- Allocating memory is costly (malloc() is just slow, even when pedants pretend that new is something different).
- Copying a vector's data from one region to another after reallocation is also slow: when push_back() finds it hasn't got enough space, it has to go and allocate a bigger vector, then copy all the elements. (In theory, for vectors that are many OS pages in size, a magic implementation of the STL could use the VMM to move them around in the virtual address space without copying - in practice I've never seen one that could.)
- Over-allocating the output vectors causes problems: it causes fragmentation, making future allocations slower; it burns data cache, making everything slower; if persistent, it ties up scarce free memory, leading to disk paging on a PC and a crash on embedded platforms.
- Under-allocating the output vectors causes problems because reallocating a vector is an O(n) operation, so reallocating it m times can cost O(m×n) in the worst case. If the STL's default allocator uses exponential reallocation (roughly doubling the vector's capacity each time it runs out), the total copying overhead stays amortized linear and your algorithm stays O(n), but you still pay for O(log n) separate allocations, each followed by a full copy of everything inserted so far - a very real cost even though it doesn't show up in the big-O. (A rough sketch of this growth behavior follows the list.)
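Here's an illustrative sketch of that growth - it just prints the capacity jumps as push_back outgrows its storage (the exact growth factor is implementation-defined, so the output will vary):

#include <cstddef>
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> Values;
    std::size_t LastCapacity = Values.capacity();

    for (int i = 0; i < 1000; ++i)
    {
        Values.push_back(i);
        if (Values.capacity() != LastCapacity)   // a reallocation just happened
        {
            LastCapacity = Values.capacity();
            std::cout << "size " << Values.size()
                      << " -> capacity " << LastCapacity << '\n';
        }
    }
}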
Your instinct, therefore, is correct: always pre-reserve space for your vectors where possible, not because push_back is slow, but because it can trigger a reallocation that is slow. Also, if you look at the implementation of shrink_to_fit, you'll see it also does a copy reallocation, temporarily doubling your memory cost and causing further fragmentation.
Your problem here is that you don't always know exactly how much space you'll need for the output vectors; the usual response is to use a heuristic, possibly combined with a custom allocator. Reserve n/2 + k of the input size for each of your output vectors by default, where k is some safety margin. That way you'll usually have enough space for the output, so long as your input is reasonably balanced, and push_back can reallocate in the rare cases where it's not. If you find that push_back's exponential growth is wasting too much memory (causing you to reserve 2n elements when really you just needed n+2), you can take over the growth yourself and expand the vectors in smaller, linear chunks - but of course that will be much slower in cases where the vectors are really unbalanced and you end up doing lots of resizes.
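A sketch of that heuristic - the function name and the default margin k are arbitrary, and the int element type is just for illustration:

#include <cstddef>
#include <vector>

// Assume a roughly even split and add a safety margin k; push_back still
// copes with the rare badly unbalanced input by reallocating.
void SplitWithHeuristic(const std::vector<int>& Source,
                        std::vector<int>& Evens, std::vector<int>& Odds,
                        std::size_t k = 16)
{
    const std::size_t Guess = Source.size() / 2 + k;
    Evens.reserve(Guess);
    Odds.reserve(Guess);

    for (std::size_t Index = 0; Index < Source.size(); Index++)
    {
        if (Source[Index] % 2) Odds.push_back(Source[Index]);
        else Evens.push_back(Source[Index]);
    }
}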
There's no way to always reserve the exact right amount of space without walking the input elements in advance; but if you know what the balance usually looks like, you can use a heuristic to make a good guess at it for a statistical performance gain over many iterations.
You could, of course, allocate the same amount of space as the source vector's size, since neither new vector will need to hold more than that, but that seems somewhat wasteful.
Then follow it up with a call to shrink_to_fit().
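Roughly like this, as a sketch - bearing in mind that shrink_to_fit() is only a non-binding request, and typical implementations satisfy it with yet another allocation and copy:

#include <vector>

void SplitThenShrink(const std::vector<int>& Source,
                     std::vector<int>& Evens, std::vector<int>& Odds)
{
    // Worst case: either output could end up holding every element.
    Evens.reserve(Source.size());
    Odds.reserve(Source.size());

    for (std::vector<int>::size_type Index = 0; Index < Source.size(); Index++)
    {
        if (Source[Index] % 2) Odds.push_back(Source[Index]);
        else Evens.push_back(Source[Index]);
    }

    // Hand the unused capacity back (a request, not a guarantee).
    Evens.shrink_to_fit();
    Odds.shrink_to_fit();
}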
However, I'm worried that this will be inefficient and harm things like sorting algorithms. ... Is push_back() usually trusted to manage this sort of thing for the programmer, or can it become burdensome for sensitive algorithms?
Yes, push_back is trusted. Although honestly I don't understand what your concern is. Presumably, if you're using algorithms on the vector, you've already put the elements into the vector. What kind of algorithm are you talking about where it would matter how the vector elements got there, be it push_back or something else?
How about sorting the original vector with a custom predicate that puts all the evens before all the odds?
bool EvenBeforeOdd(int a, int b)
{
    if ((a - b) % 2 == 0) return a < b;  // same parity: order numerically
    return a % 2 == 0;                   // different parity: evens first
}

std::sort(v.begin(), v.end(), EvenBeforeOdd);
Then you just have to find the boundary between the evens and the odds, which you can do e.g. with upper_bound and a very large even number, or something like that. Once you've found it, you can make very cheap copies of the ranges.
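If picking a sentinel value feels fragile, another option (just a sketch) is std::partition_point: after the sort above, the range is partitioned with respect to evenness, so the even/odd boundary can be found in O(log n):

#include <algorithm>
#include <vector>

bool IsEven(int a) { return a % 2 == 0; }

// v must already be sorted with EvenBeforeOdd (all evens first, then all odds).
std::vector<int>::iterator FirstOdd(std::vector<int>& v)
{
    return std::partition_point(v.begin(), v.end(), IsEven);
}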
Update: As @Blastfurnace commented, it's much more efficient to use std::partition rather than sort, since we don't actually need the elements ordered within each partition:
bool isEven(int a) { return 0 == a % 2; }

std::vector<int>::iterator it = std::partition(v.begin(), v.end(), isEven);

std::vector<int> evens, odds;
evens.reserve(std::distance(v.begin(), it));
odds.reserve(std::distance(it, v.end()));
std::copy(v.begin(), it, std::back_inserter(evens));
std::copy(it, v.end(), std::back_inserter(odds));
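Note that std::partition reorders v in place. If the source vector must stay untouched, std::partition_copy (also C++11) does the same split in a single pass straight into the two outputs - again only a sketch:

#include <algorithm>
#include <iterator>
#include <vector>

bool isEven(int a) { return 0 == a % 2; }

void SplitWithPartitionCopy(const std::vector<int>& v,
                            std::vector<int>& evens, std::vector<int>& odds)
{
    // Elements satisfying the predicate go to the first output, the rest to
    // the second; reserve() beforehand still applies if the sizes are known.
    std::partition_copy(v.begin(), v.end(),
                        std::back_inserter(evens),
                        std::back_inserter(odds),
                        isEven);
}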
If your objects are created dynamically then the vectors only store pointers, which makes them considerably cheaper to reallocate internally. It also saves memory if the same objects need to appear in more than one container.
std::vector<YourObject*> Evens;
Note: do not push pointers to local variables, as they will dangle once the function returns and lead to data corruption outside that frame. The objects need to be allocated dynamically instead.
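A small sketch of what that means (YourObject here is only a placeholder type):

#include <vector>

struct YourObject { int Value; };

void FillEvens(std::vector<YourObject*>& Evens)
{
    // WRONG: a pointer to a local variable dangles once this function returns.
    // YourObject Local = {2};
    // Evens.push_back(&Local);

    // OK: a heap-allocated object outlives the function. The caller owns it and
    // must delete it later (or store std::unique_ptr<YourObject> instead).
    Evens.push_back(new YourObject());
}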
This might not solve your problem, but perhaps it is of use.
If your sub-vectors are exactly half odds and half evens, then simply allocate 50% of the original vector's size for each. That avoids both wastage and the need for shrink_to_fit.