The prefetch instruction
It appears that the general logic for prefetch usage is that a prefetch can be added, provided the code is busy with processing until the prefetch instruction completes its operation. But it also seems that if too many prefetch instructions are used, they hurt the performance of the system. My current approach is to first get the code working without any prefetch instructions, then try various combinations of prefetch instructions in various locations of the code, and analyse the results to determine which code locations actually improve because of prefetch. Is there any better way to determine the exact locations in which the prefetch instruction should be used?
In the majority of cases prefetch instructions are of little or no benefit, and can even be counter-productive in some cases. Most modern CPUs have an automatic prefetch mechanism which works well enough that adding software prefetch hints achieves little, or even interferes with automatic prefetch, and can actually reduce performance.
In some rare cases, such as when you are streaming large blocks of data on which you are doing very little actual processing, you may manage to hide some latency with software-initiated prefetching, but it's very hard to get it right - you need to start the prefetch several hundred cycles before you are going to be using the data - do it too late and you still get a cache miss, do it too early and your data may get evicted from cache before you are ready to use it. Often this will put the prefetch in some unrelated part of the code, which is bad for modularity and software maintenance. Worse still, if your architecture changes (new CPU, different clock speed, etc), such that DRAM access latency increases or decreases, you may need to move your prefetch instructions to another part of the code to keep them effective.
Anyway, if you feel you really must use prefetch, I recommend #ifdefs around any prefetch instructions so that you can compile your code with and without prefetch and see if it is actually helping (or hindering) performance, e.g.
#ifdef USE_PREFETCH
// prefetch instruction(s)
#endif
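As a rough, hedged sketch of what that might look like in a streaming loop - assuming GCC/Clang's __builtin_prefetch and an illustrative prefetch distance that would have to be tuned for your hardware, not a recommended value:

#include <stddef.h>

#define PREFETCH_DISTANCE 64   /* elements ahead - an assumption, tune per CPU/DRAM latency */

double sum_stream(const double *data, size_t n)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; i++) {
#ifdef USE_PREFETCH
        /* Hint the cache to start fetching data we will need many iterations from now. */
        if (i + PREFETCH_DISTANCE < n)
            __builtin_prefetch(&data[i + PREFETCH_DISTANCE], 0, 0);
#endif
        sum += data[i];
    }
    return sum;
}

Building this once with the macro defined (e.g. -DUSE_PREFETCH on GCC/Clang) and once without, then timing both, tells you whether the hint actually helps on your particular CPU.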
In general though, I would recommend leaving software prefetch on the back burner as a last resort micro-optimisation after you've done all the more productive and obvious stuff.
To even consider prefetching, code performance must already be an issue.
1: use a code profiler. Trying to use prefetch without a profiler is a waste of time.
2: whenever you find an instruction in a critical place that is anomalously slow, you have a candidate for a prefetch. Often the actual problem is the memory access on the line before the one the profiler flags as slow, rather than the flagged instruction itself. Work out which memory access is causing the problem (not always easy) and prefetch it.
3: run your profiler again and see if it made any difference; if it didn't, take it out. On occasion I have sped up loops by more than 300% this way. It's generally most effective when you have a loop accessing memory in a non-sequential way, as in the sketch below.
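As an illustration of that last point, here is a hedged sketch of prefetching in a loop with a non-sequential (indexed/gather) access pattern; __builtin_prefetch and the look-ahead distance are assumptions for illustration only:

#include <stddef.h>

#define LOOKAHEAD 16   /* how many indices ahead to prefetch - needs tuning */

/* Gather values through an index array. The accesses into values[] are
 * effectively random, so the hardware prefetcher cannot predict them,
 * but we can, because the index array itself is read sequentially. */
double gather_sum(const double *values, const size_t *idx, size_t n)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; i++) {
        if (i + LOOKAHEAD < n)
            __builtin_prefetch(&values[idx[i + LOOKAHEAD]], 0, 1);
        sum += values[idx[i]];
    }
    return sum;
}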
I disagree completely about it being less useful on modern CPUs; I have found completely the opposite. On older CPUs prefetching about 100 instructions ahead was optimal, but these days I'd put that number more like 500.
Sure, you have to experiment a bit, but note that you need to prefetch some hundred cycles (100-300) before the data is needed. The L2 cache is big enough that the prefetched data can stay there a while.
This prefetching is very effective in front of a loop (a few hundred cycles ahead, of course), especially if it is the inner loop and the loop is started thousands of times or more per second.
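A minimal sketch of that "in front of the loop" pattern, assuming 64-byte cache lines and a hypothetical process_block() standing in for the real inner-loop work:

#include <stddef.h>

void process_block(const char *blk, size_t bytes);   /* hypothetical heavy inner loop, defined elsewhere */

/* Before working on block b, issue prefetches for block b+1 so it has
 * (hopefully) reached cache by the time the next outer iteration starts. */
void process_all(const char **blocks, size_t nblocks, size_t block_bytes)
{
    for (size_t b = 0; b < nblocks; b++) {
        if (b + 1 < nblocks)
            for (size_t k = 0; k < block_bytes; k += 64)   /* 64 = assumed cache-line size */
                __builtin_prefetch(blocks[b + 1] + k, 0, 2);
        process_block(blocks[b], block_bytes);
    }
}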
Also, for your own fast linked-list implementation or a tree implementation, prefetching can gain a measurable advantage, because the CPU doesn't yet know that the data will be needed soon.
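For instance, a hedged sketch of a prefetching linked-list walk (the node layout and the trivial per-node work are illustrative assumptions):

struct node {
    struct node *next;
    int payload;
};

/* Walk the list, prefetching the next node while the current node is being
 * processed; this only helps if the per-node work hides part of the miss. */
long walk_sum(struct node *head)
{
    long sum = 0;
    for (struct node *p = head; p != NULL; p = p->next) {
        if (p->next != NULL)
            __builtin_prefetch(p->next, 0, 1);   /* GCC/Clang builtin, read hint */
        sum += p->payload;   /* stands in for heavier per-node work */
    }
    return sum;
}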
But remember that prefetch instructions eat some decoder/queue bandwidth, so overusing them hurts performance for that reason alone.