CUDA multiple memory access
Please give me some explanation how a memory access works in the following kernel:
__global__ void kernel(float4 *a)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    float4 reg1, reg2;
    reg1 = a[tid];      // each thread reads a unique memory location

    for (int i = 0; i < totalThreadsNumber; i++)
    {
        reg2 = a[i];    // all running threads start reading
                        // the same global memory location
        // some computations
    }

    for (int i = 0; i < totalThreadsNumber; i++)
    {
        a[i] = reg1;    // all running threads start writing
                        // to the same global memory location
                        // race condition
    }
}
How does it work in the first loop? Is there some serialization? I assume that the second loop causes thread serialization (only within a warp?) and that the result is undefined.
Keeping my explanation to Fermi (sm_2x); on older hardware memory accesses are per half-warp instead.
In the first loop (reading) the whole warp reads from the same address into a local variable. This results in a "broadcast": the hardware services all 32 identical loads with a single transaction. Since Fermi has an L1 cache, either one cache line will be loaded from global memory or the data will be fetched directly from the cache (for subsequent iterations). In other words, there is no serialisation.
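To make the broadcast case concrete, here is a minimal sketch (hypothetical kernel and parameter names, not from the question) where every thread deliberately loads the same element; on Fermi-class and later hardware the warp's identical loads collapse into one cache-line fetch whose value is broadcast to all lanes:

```cuda
// Sketch: uniform read from a[0] by every thread in the grid.
// All 32 lanes of a warp request the same address, so the hardware
// issues one fetch and broadcasts the result -- no serialisation.
__global__ void broadcastRead(const float4 *a, float4 *out, int n)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (tid < n)
    {
        float4 v = a[0];   // uniform address -> broadcast within the warp
        out[tid] = v;      // coalesced store, one element per thread
    }
}
```

Because every lane wants the same data, this is the cheap case; the expensive pattern is when lanes hit *different* cache lines, which splits the request into multiple transactions.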
In the second loop (writing) which thread wins is undefined. As in any multi-threaded programming model, if multiple threads write to the same location the programmer is responsible for understanding the race conditions. You have no control over which warp in the block executes last, nor over which thread within that warp completes the write, so you cannot predict what the final value will be.
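If the intent is for a single well-defined value to end up in each slot, the simplest fix is to designate one writer. This is a sketch under that assumption (kernel name and the `n == total threads` assumption are mine, not the questioner's code):

```cuda
// Sketch: eliminate the race by letting exactly one thread perform
// the conflicting writes. Every thread still does its private read,
// but only thread 0 of the grid updates the shared locations.
__global__ void singleWriter(float4 *a, int n)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (tid >= n) return;

    float4 reg1 = a[tid];  // unique read per thread, as before

    if (tid == 0)          // one designated writer -> deterministic result
    {
        for (int i = 0; i < n; i++)
            a[i] = reg1;   // writes thread 0's value everywhere
    }
}
```

For scalar types (int, float) CUDA's atomic functions (e.g. atomicExch, atomicAdd) are another option: they don't tell you which thread wins, but they do make each individual update well-defined. Atomics are not available for float4, so the single-writer pattern is the natural choice here.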