
CUDA Matrix x Vector

I am trying to make code that multiplies a matrix by a vector run on the GPU with CUDA. My problem seems to be with calculating the stride, as the code works perfectly with 1 block and 1 thread: 6, 12, 18 in this case. But with 3 threads or more the output becomes: 1, 4, 9. I think the problem is that the code runs fewer than the necessary number of times, since the stride is as big as or bigger than C and L in those cases. The problem is in this part of the code:

#define L 3 
#define C L

__managed__ int A[C][L], x[C];




__global__ void multiply(int *b)
{
    int i, j;
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    int stride = gridDim.x * blockDim.x;
    printf("\n  stride -> %d  \n", stride);

    for (i = idx; i < L; i += stride){
        for (j = idx; j < C; j += stride){
            printf("\n  calculating.... at i= %d and j= %d \n", i, j);
            b[i] = b[i] + A[i][j] * x[j];
        }
    }
}

The rest of the code runs perfectly well. I tried setting the second loop to (j = 0; j < C; j++). That seems to work; however, with this solution every thread is doing the whole work. I can't seem to split the work correctly.


This cannot work correctly for more than 1 thread:

b[i] = b[i] + A[i][j] * x[j];

If you want to write a gemv operation that has multiple threads participating in calculation of the same result element, you will need to learn how to use atomics or parallel reductions. For simplicity here I'll give an atomic example.

Furthermore, you cannot stride in two dimensions the way you are imagining. What you have mapped out causes the threads to proceed in a diagonal fashion across the 2D problem space. It will not touch every element in that space for more than a single thread.

To address this, I'll show an example that limits striding to be along the row.

Finally, you had some incorrect uses of L and C that I have adjusted. This doesn't really matter for the case where L == C.

Here is an example/rough demonstrator:

$ cat t2156.cu
#include <iostream>
#include <cstdlib>

#define L 3
#define C L

__managed__ int A[C][L], x[L];

__global__ void multiply(int *b)

{
    int i, j;
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    int stride = gridDim.x * blockDim.x;
      //printf("\n  stride -> %d  \n", stride);

      for (i = 0; i < C; i++){

        for (j = idx; j < L; j+=stride){
              // printf("\n  calculating.... at i= %d and j= %d \n", i, j);
            atomicAdd(b+i, A[i][j] * x[j]);
        }
      }
}

int main(int argc, char *argv[]){
  int my_threads = 1;
  int my_blocks = 1;
  if (argc > 1) my_blocks = my_threads = atoi(argv[1]);
  int *b;
  cudaMallocManaged(&b, C*sizeof(b[0]));
  for (int i = 0; i < C; i++){
    b[i] = 0;
    for (int j = 0; j < L; j++){
      if (i == 0) x[j] = j+1;
      A[i][j] = i+1;}
    }
  multiply<<<my_blocks, my_threads>>>(b);
  cudaDeviceSynchronize();
  for (int i = 0; i < C; i++)
  std::cout << b[i] << std::endl;
}
$ nvcc -o t2156 t2156.cu
$ compute-sanitizer ./t2156
========= COMPUTE-SANITIZER
6
12
18
========= ERROR SUMMARY: 0 errors
$ compute-sanitizer ./t2156 2
========= COMPUTE-SANITIZER
6
12
18
========= ERROR SUMMARY: 0 errors
$ compute-sanitizer ./t2156 3
========= COMPUTE-SANITIZER
6
12
18
========= ERROR SUMMARY: 0 errors
$ compute-sanitizer ./t2156 100
========= COMPUTE-SANITIZER
6
12
18
========= ERROR SUMMARY: 0 errors
$

It's certainly possible to stride in 2 dimensions. For this completely arbitrary 1D-overlaid-on-2D case I would do it like this. I'm sure there is more than one way to skin the cat:

__global__ void multiply(int *b)

{
    int i;
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    int stride = gridDim.x * blockDim.x;
    for (i = idx; i < C*L; i+=stride){  // 1D striding
      int row = i/L;                    // 2D problem space
      int col = i-(row*L);
      atomicAdd(b+row, A[row][col] * x[col]);
    }
}
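For completeness, here is a rough sketch of the parallel-reduction approach mentioned earlier, as an alternative to atomics. This is my own illustration, not part of the original answer: it assumes one 32-thread warp per row (reasonable only when rows are long enough to justify it), and the kernel name multiply_reduce and the <<<C, 32>>> launch configuration are assumptions for the sketch.

```
__global__ void multiply_reduce(int *b)
{
    int row  = blockIdx.x;   // one block (a single warp) per row
    int lane = threadIdx.x;  // 0 .. 31
    int sum  = 0;
    // each lane accumulates a strided partial dot product for its row
    for (int j = lane; j < L; j += warpSize)
        sum += A[row][j] * x[j];
    // tree reduction across the warp via shuffles; after the loop,
    // lane 0 holds the full sum, so no atomics are needed
    for (int offset = warpSize/2; offset > 0; offset >>= 1)
        sum += __shfl_down_sync(0xffffffffu, sum, offset);
    if (lane == 0) b[row] = sum;
}
// launched as: multiply_reduce<<<C, 32>>>(b);
```

Because only lane 0 of each warp writes b[row], the writes never collide, which is the property the atomic version had to buy with atomicAdd.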