CUDA code #define error: expected a ")"
In the following code, if I move the #define N 65536 above the #if FSIZE, I get the following error:
#if FSIZE==1
__global__ void compute_sum1(float *a, float *b, float *c, int N)
{
#define N 65536
    int majorIdx = blockIdx.x;
    int subIdx = threadIdx.x;
    int idx = majorIdx*32 + subIdx;
    float sum = 0;
    int t = 4*idx;
    if (t < N)
    {
        c[t]   = a[t]   + b[t];
        c[t+1] = a[t+1] + b[t+1];
        c[t+2] = a[t+2] + b[t+2];
        c[t+3] = a[t+3] + b[t+3];
    }
    return;
}
#elif FSIZE==2
__global__ void compute_sum2(float2 *a, float2 *b, float2 *c, int N)
#define N 65536
{
    int majorIdx = blockIdx.x;
    int subIdx = threadIdx.x;
    int idx = majorIdx*32 + subIdx;
    float sum = 0;
    int t = 2*idx;
    if (t < N)
    {
        c[t].x   = a[t].x   + b[t].x;
        c[t].y   = a[t].y   + b[t].y;
        c[t+1].x = a[t+1].x + b[t+1].x;
        c[t+1].y = a[t+1].y + b[t+1].y;
    }
    return;
}
float1vsfloat2.cu(10): error: expected a ")"
This problem is a little annoying and I would really like to know why it's happening. I have a feeling I'm overlooking something really silly. By the way, this code section is at the top of the file; there isn't even an #include before it. I would appreciate any possible explanations.
The preprocessor substitutes 65536 for every occurrence of the identifier N that appears after the #define, including the parameter name in the kernel signature. It changes this line:
__global__ void compute_sum1(float *a, float *b, float *c, int N)
to
__global__ void compute_sum1(float *a, float *b, float *c, int 65536)
which isn't valid CUDA (or C++) code: a numeric literal can't be used as a parameter name, so the compiler reports expected a ")". Rename either the macro or the parameter so the two don't clash.