
Parallel reduction and finding the index of the maximum in CUDA

I have an array of 20K values and I am reducing it over 50 blocks of 400 threads each (num_blocks = 50 and block_size = 400).

My code looks like this:

getmax <<< num_blocks,block_size >>> (d_in, d_out1, d_indices);

__global__ void getmax(float *in1, float *out1, int *index)
{
    // Declare arrays to be in shared memory.
    __shared__ float max[threads];
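    // Note: "threads" here and "num_blocks" (used further down) are presumably
    // #defines set elsewhere in the poster's code (block size 400, grid size 50);
    // they are not shown in the post.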

    int nTotalThreads = blockDim.x;    // Total number of active threads
    float temp;
    float max_val;
    int max_index;
    int arrayIndex;

    // Calculate which element this thread reads from memory
    arrayIndex = gridDim.x*blockDim.x*blockIdx.y + blockDim.x*blockIdx.x + threadIdx.x;
    max[threadIdx.x] = in1[arrayIndex];
    max_val = max[threadIdx.x];
    max_index = blockDim.x*blockIdx.x + threadIdx.x;
    __syncthreads();

    while(nTotalThreads > 1)
    {
        int halfPoint = (nTotalThreads >> 1);
        if (threadIdx.x < halfPoint) 
        {
            temp = max[threadIdx.x + halfPoint];
            if (temp > max[threadIdx.x]) 
            {
                max[threadIdx.x] = temp;
                max_val = max[threadIdx.x];            
            }
        }
        __syncthreads();

        nTotalThreads = (nTotalThreads >> 1);    // divide by two.
    }

    if (threadIdx.x == 0)
    {
        out1[num_blocks*blockIdx.y + blockIdx.x] = max[threadIdx.x];
    }

    if(max[blockIdx.x] == max_val )
    {
        index[blockIdx.x] = max_index;    
    }
}

The problem is that at some point nTotalThreads is no longer exactly a power of 2, which produces a garbage value for the index. The array out1 gives me the maximum value in each block, which is correct and validated, but the index values are wrong. For example: the max value in the first block occurs at index 40, but the kernel reports the index as 15. Similarly, the max in the second block is at index 440, but the kernel reports 416.

Any suggestions??


It should be easy to ensure that nTotalThreads is always a power of 2.

Make the first reduction a special case that gets nTotalThreads to a power of 2. E.g., since you start with 400 threads in a block, do the first reduction with 256 threads: threads 0-143 each reduce from two values (element i paired with element i + 256), and threads 144-255 just won't have to do a reduction in this initial step. From then on out you'd be fine.
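A minimal sketch of that special first step, using the question's variable names (overflow is a name introduced here just for illustration), placed right after the shared array is filled:

max[threadIdx.x] = in1[arrayIndex];
__syncthreads();

int nTotalThreads = 256;                       // largest power of 2 <= blockDim.x (400)
int overflow = blockDim.x - nTotalThreads;     // 400 - 256 = 144
if (threadIdx.x < overflow)
{
    // Fold the top 144 elements onto the bottom 144.
    float temp = max[threadIdx.x + nTotalThreads];
    if (temp > max[threadIdx.x])
        max[threadIdx.x] = temp;
}
__syncthreads();
// ...then run the original while(nTotalThreads > 1) loop, which now stays a power of 2.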


Are you sure you really need to handle the case where nTotalThreads is not exactly a power of 2? It makes the code less readable, and I think it can hurt performance too. Anyway, if you substitute

nTotalThreads = (nTotalThreads >> 1);

with

nTotalThreads = (nTotalThreads +1 ) >> 1;

it should fix one of the bugs related to this issue.
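Spelled out in full, the loop would look like this; note that halfPoint has to round up as well, with a bounds check on the upper-half read, otherwise the odd middle element gets dropped instead of carried:

while (nTotalThreads > 1)
{
    int halfPoint = (nTotalThreads + 1) >> 1;        // divide by two, rounding up
    if (threadIdx.x + halfPoint < nTotalThreads)     // stay inside the active range
    {
        float temp = max[threadIdx.x + halfPoint];
        if (temp > max[threadIdx.x])
            max[threadIdx.x] = temp;
    }
    __syncthreads();
    nTotalThreads = halfPoint;
}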

Francesco


I second Jeff's suggestion.

Take a look at the CUDA Thrust library's reduce function. It has been demonstrated to reach 95+% of the efficiency of heavily hand-tuned kernels, and it is pretty flexible and easy to use.
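For this particular problem (the max plus its index), Thrust's max_element is an even closer fit than reduce, since the iterator it returns gives you the position as well. A quick sketch, where h_data stands for a host pointer to the 20K input values:

#include <thrust/device_vector.h>
#include <thrust/extrema.h>

// Copy the input to the device.
thrust::device_vector<float> d_vec(h_data, h_data + 20000);

// max_element returns an iterator to the largest value.
thrust::device_vector<float>::iterator it =
    thrust::max_element(d_vec.begin(), d_vec.end());

int   max_index = it - d_vec.begin();   // position of the maximum
float max_val   = *it;                  // the maximum itself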


Check my kernel. You can write your per-block results to an array in global memory and then fetch the final result from global memory.

And see how I call it in host code:

sumSeries<<<dim3(blockCount), dim3(threadsPerBlock)>>>(deviceSum, threadsPerBlock * blockCount);
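The two-pass pattern being described, sketched with placeholder names (reduceKernel, d_partial, and d_result are stand-ins here, not the poster's actual sumSeries kernel):

// Pass 1: each block writes its partial result to d_partial in global memory.
reduceKernel<<<blockCount, threadsPerBlock>>>(d_in, d_partial, n);
// Pass 2: a single block reduces the blockCount partial results to one value.
reduceKernel<<<1, blockCount>>>(d_partial, d_result, blockCount);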
