
CUDA combining thread independent(??) variables during execution

Guys, I apologize if the title is confusing. I thought long and hard and couldn't come up with a proper way to phrase the question in a single line, so here's more detail. I am doing a basic image subtraction where the second image has been modified, and I need to find the ratio of how much of the image was changed. For this I used the following code. Both images are 128x1024.

int num = 0, den = 0;   // den: pixels counted, num: pixels within error
for(int i = 0; i < 128; i++)
{
    for(int j = 0; j < 1024; j++)
    {
        den++;
        diff[i * 1024 + j] = orig[i * 1024 + j] - modified[i * 1024 + j];
        if(diff[i * 1024 + j] < error)
        {
            num++;
        }
    }
}
ratio = (float)num / (float)den;   // cast to avoid integer division

The above code works fine on the CPU, but I want to try to do this in CUDA. I can set up CUDA to do the basic subtraction of the images (code below), but I can't figure out how to do the conditional if statement to get my ratio out.

__global__ void calcRatio(float *orig, float *modified, int size, float *result)
{
    int index = threadIdx.x + blockIdx.x * blockDim.x;
    if(index < size)
        result[index] = orig[index] - modified[index];
}

So, up to this point it works, but I cannot figure out how to parallelize the num and den counters in each thread so I can calculate the ratio at the end of all the thread executions. It feels like the num and den counters are independent of the threads, as every time I have tried to use them they seem to get incremented only once.

Any help will be appreciated, as I am just starting out in CUDA and none of the examples I see online ever seems to apply to what I need to do.

EDIT: Fixed my naive code. I forgot to type one of the main conditions in the code. It was a long, long day.

int num = 0, den = 0;
for(int i = 0; i < 128; i++)
{
    for(int j = 0; j < 1024; j++)
    {
        if(modified[i * 1024 + j] < 400.0)  // 400.0 threshold value to ignore noise
        {
            den++;
            diff[i * 1024 + j] = orig[i * 1024 + j] - modified[i * 1024 + j];
            if(diff[i * 1024 + j] < error)
            {
                num++;
            }
        }
    }
}
ratio = (float)num / (float)den;   // cast to avoid integer division


The operation you need in order to perform a global summation across all of the threads is known as a "parallel reduction". While you could use atomic operations to do this, I would not recommend it. There is a reduction kernel, and a very good paper discussing the technique, in the CUDA SDK; it is worth reading.

If I were writing code to do what you want, it would probably look like this:

template <int blocksize>
__global__ void calcRatio(float *orig, float *modified, int size, float *result,
                          int *count, const float error)
{
    __shared__ volatile int buff[blocksize];

    int index = threadIdx.x + blockIdx.x * blockDim.x;
    int stride = blockDim.x * gridDim.x;

    // Each thread processes several entries and keeps its own local count
    int lcount = 0;
    for(int i = index; i < size; i += stride) {
        float val = orig[i] - modified[i];
        lcount += (val < error);
        result[i] = val;
    }

    buff[threadIdx.x] = lcount;
    __syncthreads();

    // Parallel reduction in shared memory using 1 warp
    if (threadIdx.x < warpSize) {

        // Fold all per-thread subtotals down to warpSize partial sums
        for(int i = threadIdx.x + warpSize; i < blocksize; i += warpSize)
            buff[threadIdx.x] += buff[i];

        // Reduce within the single active warp (volatile avoids stale reads)
        if (threadIdx.x < 16) buff[threadIdx.x] += buff[threadIdx.x + 16];
        if (threadIdx.x < 8)  buff[threadIdx.x] += buff[threadIdx.x + 8];
        if (threadIdx.x < 4)  buff[threadIdx.x] += buff[threadIdx.x + 4];
        if (threadIdx.x < 2)  buff[threadIdx.x] += buff[threadIdx.x + 2];
        if (threadIdx.x == 0) count[blockIdx.x] = buff[0] + buff[1];
    }
}

The first stanza does what your serial code does: it computes a difference and a thread-local total of the elements which are less than error. Note I have written this version so that each thread is designed to process more than one entry of the input data. This has been done to help offset the computational cost of the parallel reduction that follows; the idea is that you would use fewer blocks and threads than there are entries in the input data set.

The second stanza is the reduction itself, done in shared memory. It is effectively a "tree like" operation: the set of thread-local subtotals within a single block of threads is first summed down to 32 subtotals, then those subtotals are combined until there is a single final subtotal, which is then stored as the total for the block. You will wind up with a small list of subtotals in count, one for each block you launched, which can be copied back to the host so that the final result you need can be calculated there.
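
To make that last host-side step concrete, here is a minimal, hypothetical driver for the kernel above; the block and grid sizes, and the d_orig/d_modified/d_result device allocations, are my own assumptions rather than part of the answer's code:

// Hypothetical host-side driver: launch the kernel, copy the per-block
// subtotals back, and finish the summation on the CPU.
const int size = 128 * 1024;
const int blocksize = 128;   // must match the template parameter
const int nblocks = 64;      // deliberately fewer threads than data entries

int *d_count;
cudaMalloc(&d_count, nblocks * sizeof(int));

calcRatio<blocksize><<<nblocks, blocksize>>>(d_orig, d_modified, size,
                                             d_result, d_count, error);

int h_count[nblocks];
cudaMemcpy(h_count, d_count, nblocks * sizeof(int), cudaMemcpyDeviceToHost);

int num = 0;
for (int i = 0; i < nblocks; i++)   // final sum of the per-block subtotals
    num += h_count[i];
float ratio = (float)num / (float)size;   // den is just size in this version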

Please note I coded this in the browser and haven't compiled it, so there might be errors, but it should give an idea of how an "advanced" version of what you are trying to do would work.


The denominator is pretty simple, since it's just the size.

The numerator is more troublesome, since its value for a given thread depends on all previous values. You're going to have to do that operation serially.

The thing you're looking for is probably atomicAdd. It's very slow, though.
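
For what it's worth, a minimal sketch of that atomic approach (assuming num points at a device integer zeroed before launch) might look like:

// Hypothetical atomic variant: simple, but every matching thread
// serializes on the same counter, which is why it is slow.
__global__ void calcRatioAtomic(float *orig, float *modified, int size,
                                float *result, int *num, const float error)
{
    int index = threadIdx.x + blockIdx.x * blockDim.x;
    if (index < size) {
        float val = orig[index] - modified[index];
        result[index] = val;
        if (val < error)
            atomicAdd(num, 1);
    }
}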

I think you'd find this question relevant: CUDA array-to-array sum. Your num is basically global data.

Alternatively, you could dump the results of the error check into an array. Counting the results could then be parallelized. It would be a little tricky, but I think something like this would scale up: http://tekpool.wordpress.com/2006/09/25/bit-count-parallel-counting-mit-hakmem/
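
A hedged sketch of that idea (the markErrors kernel and flags array are hypothetical names): each thread writes a 0/1 flag, and the flags can then be summed with any parallel counting scheme, such as the bit-counting tricks linked above:

// Hypothetical first pass: record a 0/1 flag per element; counting the
// 1s afterwards is an embarrassingly parallel reduction problem.
__global__ void markErrors(float *orig, float *modified, int size,
                           int *flags, const float error)
{
    int index = threadIdx.x + blockIdx.x * blockDim.x;
    if (index < size)
        flags[index] = (orig[index] - modified[index] < error) ? 1 : 0;
}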
