C++ Vector at/[] operator speed
In order to give functions the option to modify the vector I can't do
curr = myvec.at( i );
doThis( curr );
doThat( curr );
doStuffWith( curr );
But I have to do:
doThis( myvec.at( i ) );
doThat( myvec.at( i ) );
doStuffWith( myvec.at( i ) );
(as the answers of my other question pointed out)
I'm going to make a hell of a lot of calls to myvec.at() then. How fast is it, compared to the first example using a variable to store the result? Is there a different option for me? Can I somehow use pointers?
When things get serious there will be thousands of calls to myvec.at() per second, so every little performance-eater is important.
You can use a reference:
int &curr = myvec.at(i);
// do stuff with curr
The at member function does bounds checking to make sure the argument is within the size of the vector. Profiling is the only way to know exactly how much slower it is compared to operator[]. Using a reference here allows you to do the lookup once and then use the result in other places. And you can make it a reference-to-const if you want to protect yourself from accidentally changing the value.
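To illustrate (a minimal sketch; the variable names and values are made up):

#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> myvec(10, 0);
    std::size_t i = 3;

    // One bounds-checked lookup; the reference aliases the element itself.
    int &curr = myvec.at(i);
    curr += 5;                      // modifies myvec[3]

    // Reference-to-const: reads only, accidental writes will not compile.
    const int &view = myvec.at(i);
    std::cout << view << '\n';      // prints 5
}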
From my own tests with similar code (compiled under gcc and Linux), operator[] can be noticeably faster than at, not because of the bounds checking, but because of the overhead of exception handling. Replacing at (which throws an exception on out-of-bounds) with my own bounds checking that raised an assert on out-of-bounds gave a measurable improvement.
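The answer doesn't show the exact replacement; a minimal sketch of the idea, using a hypothetical checked_get helper, might look like:

#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical helper: bounds checking via assert instead of at()'s exception.
// With NDEBUG defined (typical release builds) the check compiles away.
template <typename T>
T &checked_get(std::vector<T> &v, std::size_t i) {
    assert(i < v.size() && "index out of bounds");
    return v[i];
}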
Using a reference, as Kristo said, lets you only incur the bounds checking overhead once.
Ignoring bounds checking and exception handling overhead, both operator[] and at should be optimized to be equivalent to direct array access or direct access via pointer.
As Chris Becke said, though, there's no substitute for profiling.
When performance is an issue, there is no substitute for profiling. The optimization capabilities of compilers change from version to version, and tiny, insignificant alterations to source code can dramatically change the resulting performance.
No one can answer this question but yourself: Create a test harness, and throw several algorithms at it and see what you get.
PS: if performance really is an issue, well, I got a factor-of-10 speed increase out of a PNG decoder by removing the vectors and replacing them with raw arrays. Again, this was for Visual Studio 6. I am not claiming that a raw array substitution will give you a factor-of-10 improvement, but it is something to try.
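As a starting point, a minimal harness sketch (the element count is arbitrary, and a real benchmark needs optimized builds and repeated runs):

#include <chrono>
#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v(1000000, 1);
    using Clock = std::chrono::steady_clock;

    auto t0 = Clock::now();
    long long sum_at = 0;
    for (std::size_t i = 0; i < v.size(); ++i)
        sum_at += v.at(i);              // bounds-checked access

    auto t1 = Clock::now();
    long long sum_idx = 0;
    for (std::size_t i = 0; i < v.size(); ++i)
        sum_idx += v[i];                // unchecked access

    auto t2 = Clock::now();
    std::cout << "at():       "
              << std::chrono::duration<double, std::milli>(t1 - t0).count() << " ms\n"
              << "operator[]: "
              << std::chrono::duration<double, std::milli>(t2 - t1).count() << " ms\n"
              << (sum_at == sum_idx ? "sums match\n" : "sums differ\n");
}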
The reason the first doesn't work is that you're not setting a pointer or iterator to the address of the ith element. Instead you're setting curr equal to the value of the ith element and then modifying curr. I'm assuming that doThis and doThat take their arguments by reference.
Do this:
MyObject& curr = myvec.at( i );
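A small sketch of the copy-versus-reference difference (the element type and the doThis signature are assumptions):

#include <vector>

void doThis(int &x) { x += 1; }     // assumed to take its argument by reference

int main() {
    std::vector<int> myvec(5, 0);

    int copy = myvec.at(2);          // copies the element's value
    doThis(copy);                    // only the copy changes; myvec[2] is still 0

    int &curr = myvec.at(2);         // refers to the element itself
    doThis(curr);                    // now myvec[2] == 1
}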
Operator[] might be faster than at, because it isn't required to do bounds checking.
You can make curr a reference to do what you want.
MyClass & curr = myvec.at(i);
You might also do some benchmarking before getting worried. Modern processors can handle thousands of operations per second quite easily.
Options that I see, in roughly inverse order of preference:
- Store pointers in your container instead of the actual objects. This may be advisable anyway, if the objects are complex enough that copying them around is problematic (a sketch of this appears at the end of this answer).
- Use the indexing operator [] instead of at().
- Just call at() once, and save it into a reference (see Kristo's answer above).
- Forget about it until you actually have a problem with excessive runtime. If that happens, profile your code first to make sure the bottleneck is here, and only then worry about doing one of the above to speed things up.
Honestly, what you should do is play with the four different approaches, and just use the one that produces the easiest to understand code. In most cases we are happy to sacrifice a few machine cycles for code that is easier for human beings to maintain.
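For the pointer-storage option mentioned first, a rough sketch (Widget is a made-up stand-in type):

#include <memory>
#include <vector>

struct Widget { int value = 0; };    // stand-in for a large or complex object

int main() {
    // The container holds (smart) pointers, so elements are never copied around
    // and handing one to another function is just a pointer copy.
    std::vector<std::unique_ptr<Widget>> widgets;
    widgets.push_back(std::unique_ptr<Widget>(new Widget));

    Widget *w = widgets.at(0).get();
    w->value = 42;
}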
The complexity of at() is constant; in practice, this means that it must be designed to not have any relevant performance penalty.
You can use [], which is also constant complexity, but does not check bounds. This would be equivalent to using pointer arithmetic and, thus, potentially a bit faster than the former.
In any case, vector is specifically designed for constant performance access to any of its elements. So this should be the least of your worries.
Vectors are the best suited for access speed. Access to a random element in a vector has complexity O(1), compared with O(n) for general linked lists and O(log n) for linked trees.
However, this question is misplaced, as the answer to your other question has led you astray by not explaining how to fix your original problem by using a reference.
If it is the case that you load up a vector and then process it without adding or deleting any more elements, consider getting a pointer to the underlying array and using array operations on that to 'avoid the vector overhead'.
If you are adding or deleting elements as part of your processing, then this is not safe to do, as the underlying array may be moved at any point by the vector itself.
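A sketch of that approach, assuming the vector is not resized while the pointer is in use:

#include <cstddef>
#include <vector>

int main() {
    std::vector<int> myvec(100, 0);

    // Valid only while the vector is not resized: push_back, insert, erase,
    // etc. may reallocate and leave 'p' dangling.
    int *p = myvec.data();           // &myvec[0] on pre-C++11 compilers
    for (std::size_t i = 0; i < myvec.size(); ++i)
        p[i] += 1;                   // raw array-style access, no bounds checks
}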