Is any optimization done if one runs the same kernel with the same input again and again?
If I run the same kernel with the same input several times, like this:

#define N 2000
for (int i = 0; i < N; i++) {
    mykernel<<<1,120>>>(...);
}

what happens? I timed it and played around with N: halving N (to 1000) halved the time it took. Yet I'm a bit cautious to believe that it really just runs the kernel 2000 times, because the speed-up over the non-CUDA code is so dramatic (~900 sec to ~0.9 sec). So what kind of optimization does CUDA do in this case? Does it cache the results?
Setting CUDA_LAUNCH_BLOCKING=1 didn't change anything.
mykernel replaces an inner loop in the non-CUDA code.
The hardware is a GeForce GTX 260.
CUDA doesn't do any optimization of any kind, or any caching of the results. If you launch 2000 kernels, it runs 2000 kernels.
However, kernel launches are asynchronous, so measuring the time taken to launch 2000 kernel instances in a loop isn't the same as the total execution time of those 2000 kernel instances. What you are seeing is probably an artifact of incorrect time measurement rather than a true speed-up.
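
To rule that out, time the whole loop on the GPU side and synchronize before reading the result. Here is a minimal sketch using CUDA events, with mykernel and its (...) arguments kept as the placeholders from your question:

// Time all N launches together; cudaEventSynchronize blocks until
// every kernel queued before 'stop' has actually finished executing.
cudaEvent_t start, stop;
cudaEventCreate(&start);
cudaEventCreate(&stop);

cudaEventRecord(start);
for (int i = 0; i < N; i++) {
    mykernel<<<1,120>>>(...);      // asynchronous: returns immediately
}
cudaEventRecord(stop);
cudaEventSynchronize(stop);        // wait for all recorded work to complete

float ms = 0.0f;
cudaEventElapsedTime(&ms, start, stop);   // elapsed GPU time in milliseconds

If the per-iteration time from this measurement still scales linearly with N, then the speed-up is real and not a timing artifact.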
It's believable. I've had a kernel that was a 1600x improvement over optimized CPU code. I don't think there's actual caching of results.
Note that the first time you spin up CUDA, the timings can vary a bit, so 1 kernel run may not take exactly 1/1000 of the time of 1000 kernel runs. For large N it is linear, as you observed.
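
One way to keep that start-up cost out of the comparison is a throwaway warm-up launch before you start timing; a quick sketch, again with the question's placeholder kernel:

// Warm-up: absorbs CUDA context creation and first-launch overhead
// so it doesn't get charged to the timed runs.
mykernel<<<1,120>>>(...);
cudaDeviceSynchronize();   // make sure the warm-up launch has completed

// ...then start the timer and run the measured loop...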