CUDA threading allocation
I have gone through the CUDA Programming Guide and I cannot understand the thread allocation method shown below:
dim3 dimGrid( 2, 2, 1 );
dim3 dimBlock( 4, 2, 2 );
KernelFunction<<< dimGrid, dimBlock >>>(. . .);
Can someone explain how threads are allocated for the above configuration?
An intuitive way to think about grid and block is to visualize them:
- Grid: Think of a grid as a lattice of horizontal and vertical lines. In your example it is effectively 2-dimensional, because the third dimension is 1.
- Block: Think of a block of wood. It has all 3 dimensions: length, width and height.
- A block is made up of threads.
- A grid is made up of blocks.
Your dimBlock( 4, 2, 2 ) means that each block has 4 x 2 x 2 = 16 threads.
Your dimGrid( 2, 2, 1 ) means that the grid has 2 x 2 x 1 = 4 blocks.
Thus, your kernel is launched on a grid of 4 blocks, where each block contains 16 threads. In total, your kernel is launched with 4 x 16 = 64 threads.
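To see the allocation in action, here is a minimal sketch (the kernel name ShowAllocation and the index arithmetic are my own illustration, not from the original post). It launches the same configuration and has every thread print its block and thread coordinates together with a linearized global ID from 0 to 63. Device-side printf requires compute capability 2.0 or later.

#include <cstdio>

__global__ void ShowAllocation()
{
    // Threads per block = blockDim.x * blockDim.y * blockDim.z  (4 * 2 * 2 = 16 here).
    int threadsPerBlock = blockDim.x * blockDim.y * blockDim.z;

    // Linear index of this thread within its block.
    int threadInBlock = threadIdx.x
                      + threadIdx.y * blockDim.x
                      + threadIdx.z * blockDim.x * blockDim.y;

    // Linear index of this block within the grid (gridDim.z is 1 here).
    int blockInGrid = blockIdx.x
                    + blockIdx.y * gridDim.x
                    + blockIdx.z * gridDim.x * gridDim.y;

    // Global thread ID: 0 .. 63 for this launch configuration.
    int globalId = blockInGrid * threadsPerBlock + threadInBlock;

    printf("block (%d,%d,%d) thread (%d,%d,%d) -> global ID %d\n",
           blockIdx.x, blockIdx.y, blockIdx.z,
           threadIdx.x, threadIdx.y, threadIdx.z, globalId);
}

int main()
{
    dim3 dimGrid(2, 2, 1);    // 2 * 2 * 1 = 4 blocks
    dim3 dimBlock(4, 2, 2);   // 4 * 2 * 2 = 16 threads per block
    ShowAllocation<<<dimGrid, dimBlock>>>();
    cudaDeviceSynchronize();  // wait for the kernel so the printf output appears
    return 0;
}

Running this prints 64 lines, one per thread, which matches the 4 blocks x 16 threads computed above.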