
Control flow for a CUDA program

Below is a small program that I wrote to see how race conditions can happen in CUDA, but I was surprised by the output.

#include <cuda_runtime.h>
#include <cstdio>

__global__ void testLocal(int *something, int val[]){
    // Every thread divides the shared value by 2 and records what it then sees.
    *something = *something / 2;
    val[threadIdx.x] = *something;
}

int main(){
    int *a, *c;
    int r = 16;

    cudaMalloc((void**)&a, 4 * sizeof(int));
    cudaMalloc((void**)&c, sizeof(int));
    cudaMemcpy(c, &r, sizeof(int), cudaMemcpyHostToDevice);

    // One block of 4 threads, all operating on the same *c.
    testLocal<<<1,4>>>(c, a);

    int *b = (int *)malloc(4 * sizeof(int));
    cudaMemcpy(b, a, 4 * sizeof(int), cudaMemcpyDeviceToHost);

    for (int j = 0; j < 4; j++){
        printf("%d\n", b[j]);
    }
    getchar();

    return 0;
}

As I am launching 4 threads, I expected each thread to divide *something by 2 once. I understand that the order in which they would divide *something is not fixed. Thus, when I tried to print the values, I expected one of the printed values to be 8, one to be 4, one to be 2, and one to be 1. However, all the printed values were 8. Why is this? Shouldn't each thread divide *something once?


What you are looking at is undefined behaviour. Because you are launching a single block with 4 threads, all the threads are executing in the same warp. This means that

 *something = *something/2;

is being executed simultaneously by all of the threads you have launched. The CUDA programming model only guarantees that when multiple threads from the same warp attempt to write to the same memory location, one of the writes will succeed. It says nothing about which thread will succeed, or about what happens to the writes from the other threads in the warp that don't "win". To get the behaviour you are expecting, you would need serialised memory access -- that is only possible through atomic memory access primitives, on those architectures which support them.
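For illustration, here is a minimal sketch (not from the original post) of what serialised access looks like with a built-in primitive such as atomicAdd: the read-modify-write happens as one indivisible step, so each of the 4 threads observes a different previous value. The kernel name testAtomic is made up for this example.

__global__ void testAtomic(int *something, int val[]){
    // atomicAdd performs the read-modify-write as a single indivisible
    // operation and returns the value *something held just before it.
    int old = atomicAdd(something, 1);
    // Each thread therefore records a distinct value (e.g. 16, 17, 18, 19
    // in some order when *something starts at 16).
    val[threadIdx.x] = old;
}

Note that integer atomics on global memory require compute capability 1.1 or higher.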


"Should" is a strong word. What you're doing is unspecified, so it should not be expected to do anything specific.

Now, what it likely does is run the 4 threads on the same compute unit, within the same warp (the "SIMT" model makes each thread run as part of a warp). Since your operation on *something is not atomic, all threads within the warp read and write memory in lock-step. So the 4 threads read *something together, all divide the result (16) by 2, and all try to write 8 to memory.

What you were expecting, that *something be read, modified, and written back atomically, is achieved through atomic operations, though no atomic divide or multiply is available in CUDA. So if you really want this, you need to write your own (with the help of atomicCAS). You'll then see your performance drop dramatically, because you're forcing threads that are built to run in parallel to run serially.
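Here is a minimal sketch of such a hand-rolled atomic divide, built on the usual atomicCAS retry loop; the helper name atomicDiv2 and the kernel testLocalAtomic are made up for this example.

__device__ int atomicDiv2(int *addr){
    int old = *addr, assumed;
    do {
        assumed = old;
        // Write assumed/2 only if *addr still holds assumed; atomicCAS
        // returns whatever value was actually at *addr.
        old = atomicCAS(addr, assumed, assumed / 2);
    } while (assumed != old);   // retry if another thread changed *addr first
    return assumed / 2;         // the value this thread's division produced
}

__global__ void testLocalAtomic(int *something, int val[]){
    // Each thread now performs exactly one serialised division, so with
    // *something starting at 16 the recorded values are 8, 4, 2 and 1
    // in some order.
    val[threadIdx.x] = atomicDiv2(something);
}

Every failed compare-and-swap is another round trip through memory, which is why this serialisation costs so much compared with the original (racy) code.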
