Cache Optimization of C++ Code
This might seem kind of open-ended, but I'm having trouble with optimizing a piece of C++ code for multiple processors and the cache.
More important than multiple processors is the cache. I'm iterating with two nested loops:
for(int i=0; i<n; i++){
    //do a little something here with a single array
    for(int j=0; j<whoaAnotherArray[n].size(); j++){
        // access array[i][j] and otherArray[i][j] and store in a variable, e.g.:
        int x = array[i][j] + otherArray[i][j];
        // compare that variable to some other array[index calculated from i and j], e.g.:
        if (x < yetAnotherArray[i*n+j]) { /* do something to yetAnotherArray */ }
    }
}
My arrays (array and otherArray) are very large; n is their size.
Is there a way to make this more cache friendly? I already switched away from linked lists, which are terrible for the cache, and I read somewhere that my [i][j] access order is also cache efficient.
FWIW, this is part of a negative-weight cycle detection algorithm.
I was thinking: since my arrays are so huge (they're arrays of integers, by the way), would it be better to break them up a bit so that they fit in the cache better? But I'm not sure whether that's right or, if it is, how to go about it.
I've also started to use OpenMP. The only thing I've been doing is adding
#pragma omp parallel for
before the relevant for loops, and I'm getting decent utilization. I'd like to learn how to use parallelism better, but beyond the for loops in my code I'm not sure what else I can do. And all the while, I'm trying to stay cache friendly.
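For reference, all I'm really doing right now is something like this (just a sketch: it's the loop from above with the pragma on the outer loop):

#pragma omp parallel for   // split the i iterations across threads
for(int i=0; i<n; i++){
    for(int j=0; j<whoaAnotherArray[n].size(); j++){
        int x = array[i][j] + otherArray[i][j];
        if (x < yetAnotherArray[i*n+j]) { /* do something to yetAnotherArray */ }
    }
}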
One possibility for cache-use improvement is to modify your pattern of access to array and otherArray. When you read array[i][j], your machine will, of course, move a 'line' of memory into cache. When you read otherArray[i][j], it will move another 'line' into cache. It is possible that, to read the second 'line', the first has to be evicted from the cache. And then you make the situation (potentially) even worse by reading a value from yetAnotherArray.

What actually happens depends a lot on what else is going on at the same time, what else is in cache, and any other operations being carried out. This can be very difficult to figure out.

If your (dominant) pattern of array access is to require element [i][j] from both (or all 3) arrays at the same time, then you want to arrange matters such that those values sit in the same 'line' of memory when it is read. One way to do this is to merge the 3 arrays into a single m*n*3 array, in which superArray[i][j][0] is next to superArray[i][j][1], which is next to superArray[i][j][2], and in which the 3 planes of the array each represent one of the original arrays. Of course, this only works if I've got the index ordering right, so give it more thought than I have.
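To make that concrete, here is one rough sketch of the idea. It uses a small struct per (i,j) cell instead of a literal m*n*3 array (the memory layout idea is the same); the struct name and fields are just placeholders, not anything from the question:

#include <vector>

// One cell holds the three values that are always used together, so a single
// cache-line fetch brings in all of them.
struct Cell {
    int a;   // was array[i][j]
    int b;   // was otherArray[i][j]
    int c;   // was yetAnotherArray[i*n + j]
};

void process(std::vector<Cell>& merged, int n) {
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            Cell& cell = merged[i*n + j];   // row-major, contiguous storage
            int x = cell.a + cell.b;
            if (x < cell.c) { /* "do something to yetAnotherArray", via cell.c */ }
        }
    }
}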
Finally:

- This may transform your elegant program into a spaghetti mess -- but that's a small price to pay for improved speed!
- By 'line' I mean whatever chunk your platform loads from RAM to cache in one go.
- Google around for loop tiling and strip mining. Compilers are not terrifically good at this yet, and any help you can provide should be rewarded with improved execution speed. (A rough sketch of tiling follows.)
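By way of illustration only, a tiled version of the question's loop might look like the following. It assumes the inner dimension is n, and the block size B is a placeholder you would tune for your cache; tiling pays off mainly when the data inside a block gets reused:

// Visit the index space in B x B blocks so the data a block touches can stay
// in cache while it is being worked on. (std::min is from <algorithm>.)
const int B = 64;   // tile size -- tune for your cache
for (int ii = 0; ii < n; ii += B) {
    for (int jj = 0; jj < n; jj += B) {
        const int iEnd = std::min(ii + B, n);
        const int jEnd = std::min(jj + B, n);
        for (int i = ii; i < iEnd; i++) {        // finish one block
            for (int j = jj; j < jEnd; j++) {    // before moving to the next
                int x = array[i][j] + otherArray[i][j];
                if (x < yetAnotherArray[i*n+j]) { /* do something */ }
            }
        }
    }
}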
Read these two articles by Herb Sutter, especially the first one:
http://www.ddj.com/go-parallel/article/showArticle.jhtml?articleID=217500206
http://ddj.com/architect/208200273
There's a program called Cachegrind (a Valgrind tool) that can help you profile how your code behaves against a simulated cache. I'd work with that to see how your code does against your CPU's cache. (It's been a while since I've used it, so I don't remember whether it can detect your CPU's cache attributes automatically; you might need to give it the exact cache parameters for your CPU.)
You could also try a few optimizations that, ideally, your compiler is or should be doing:
1) Replace this line:
for(int j=0; j<whoaAnotherArray[n].size(); j++){
with:
int len = whoaAnotherArray[n].size();
for(int j=0; j<len; j++){
2) Create pointers into the arrays in your outer loop:
int* pArray = array[i] - 1;
int* pOtherArray = otherArray[i] - 1;
and use preincrements on the first pointer access in the loop:
int x = *(++pArray) + *(++pOtherArray);
(Yes, I know that's ugly. I know the compiler should be doing this for you. But not too many months ago, I found that this did make a difference with gcc 4.3(?) on Linux. YMMV.) A rough sketch combining 1) and 2) appears after point 3 below.
3) If there's any way to restructure the code so that you loop through array in one pass, and then loop through otherArray in a second pass, then try that. It seems unlikely in your case, but I don't know. The point is, you want to keep memory accesses as focused as possible on one array at a time.
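For what it's worth, 1) and 2) combined might look roughly like this. I've started the pointers at the first element and used post-increment rather than the "-1 then pre-increment" trick, but the idea is the same; it assumes each row of array and otherArray is contiguous int data, as in the question:

for (int i = 0; i < n; i++) {
    const int len = (int)whoaAnotherArray[n].size();   // 1) hoist the size() call
    const int* pArray = &array[i][0];                  // 2) walk the rows by pointer
    const int* pOtherArray = &otherArray[i][0];
    int* pYet = &yetAnotherArray[i*n];
    for (int j = 0; j < len; j++) {
        int x = *pArray++ + *pOtherArray++;            // purely sequential reads
        if (x < *pYet) { /* do something to yetAnotherArray via *pYet */ }
        ++pYet;
    }
}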
Good luck.