CUDA: question about the active warps (active blocks) and how to choose the block size
Suppose a CUDA GPU can have 48 simultaneously active warps on one multiprocessor, i.e. 48 blocks of one warp, 24 blocks of two warps, and so on. Since all the active warps from multiple blocks are scheduled for execution, it seems the block size doesn't matter for the occupancy of the GPU (as long as it is a multiple of 32): whether it is 32, 64, or 128 makes no difference, right? So is the block size determined only by the computation task and the resource limits (shared memory or registers)?
There are multiple factors worth considering that you omit.
- There is a limit on the number of active blocks per SM. The current limit is 8 (on all devices), so if you want to achieve full occupancy, your blocks shouldn't be smaller than 3 warps (compute capability 1.0, 1.1), 4 warps (1.2, 1.3), or 6 warps (2.x).
- Depending on the device, there are 8K, 16K or 32K registers available per multiprocessor. The bigger your blocks, the coarser the granularity at which registers are allocated to each block. For big blocks, if full occupancy cannot be achieved, you lose a lot; for smaller blocks, the loss may be smaller. That's why, personally, I prefer for example 2x256 rather than 1x512.
- If you do need synchronisation between warps in a block, bigger blocks let a single `__syncthreads()` cover more threads.
- A single block is guaranteed to be scheduled on a single multiprocessor. If all its warps share some common data (e.g. control variables), you can reduce the number of global memory fetches. On the other hand, when you create lots of small blocks, each of them might need to load the same data separately. On Fermi, which has some caches, this is not as important as on the GT200 series. Keep in mind, however, that since there are so many multiprocessors, 1MB of L2 cache is still very, very small!
No. The block size does matter.
If you have a block size of 32 threads you get very low occupancy: with at most 8 resident blocks per SM, that is only 8 warps active. With a block size of 256 you get high occupancy, meaning close to all 48 warps can be concurrently active. More than 256 threads per block rarely makes a difference.
As the architecture involved is complex, testing it with your software is always the best approach.